Tag Archives: enterprise software

TDWI Webinar Review: Putting Machine Learning to Work in Your Enterprise

It’s been a while since I watched a webinar, but since business intelligence (BI) and artificial intelligence (AI) are overlapping areas of interest, I watched Tuesday’s TDWI webinar on Machine Learning (ML). As the definition of machine learning expands beyond pure AI, driven by BI’s advanced analytics, it’s interesting to see where people are going with the subject.

The host was Fern Halper, VP Research at TDWI. The guests were:

  • Mike Gualtieri, VP, Forrester,
  • Ashok Swaminathan, Senior Director, Product Management, SAP,
  • Chandran Saravana, Senior Director, Advanced Analytics, SAP.

Ms. Halper began with a short presentation including her definition of ML as “Enabling computers to learn patterns with minimal human intervention.” It’s a bit different from the last time I reviewed one of her webinars, but that’s ok because my definition is also evolving. I’ve decided to use my own definition: “Technology that can investigate data in an environment of uncertainty and make decisions or provide predictions that inform the actions of the machine or people in that environment.” Note that I’ve moved away from my past, purist view of machine learning as a machine learning from and adjusting its own algorithms. I’ve done so because we have to adapt to the market. As BI analytics have advanced to provide great insight in data discovery, predictive analytics and more, many areas of BI and the purist area of learning have overlapped. Learning patterns can happen through pure statistical analysis and through self-adaptive algorithms in AI-based machines.
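
To make that split concrete, here’s a minimal sketch of the statistical side: a model that learns a pattern from data with minimal human intervention and then provides a prediction to inform an action. Python, scikit-learn and the toy numbers are my own illustration, not anything shown in the webinar.

```python
# A minimal sketch: learning a pattern through pure statistical analysis.
# scikit-learn and the toy data are illustrative, not from the webinar.
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy history: ad spend (in $K) vs. revenue (in $K).
X = np.array([[10], [20], [30], [40], [50]])
y = np.array([110, 205, 290, 410, 500])

model = LinearRegression().fit(X, y)  # the "learning" step, no human tuning
prediction = model.predict([[60]])    # informs an action in the environment
print(f"Predicted revenue for $60K of spend: ${prediction[0]:.0f}K")
```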

The most interesting part of Fern Halper’s segment was a great chart showing the results of a survey asking about the importance of different business drivers behind ML initiatives. What makes the chart interesting, as you can see, is that it splits results between those groups investigating ML and those who are actively using it.

What her research shows is that while the highest segments for the active categories are customer related, once companies have seen the benefits of ML, the advantages of it for almost all the other areas jump significantly over views held during the investigation phase.

A panel discussion followed, with Ms. Halper asking what sounded like pre-discussed questions (especially considering the included and relevant slides) of the three panelists. The statements by the two SAP folks weren’t bad; they were just very standard and lacked any strong differentiators. SAP is clearly building an architecture to leverage ML using their environment, but there weren’t case studies and I felt the integration between the SAP pieces didn’t bubble up to the business level.

The reason to listen to this segment is Mr. Gualtieri. He was very clear and focused on his message. While I quibble with some of the things he said about data scientists, that soap box isn’t for here. He gave a nice overview of the evolving state of ML for enterprise. The most important part of that might have been missed by folks, so I’ll bring it up here.

Yes, TensorFlow, R, Python and other tools provide frameworks for machine learning implementations, but they’re still at a very technical level. They aren’t for business analysts and management. He mentioned that the next generation of tools is starting to arrive, tools that, just as with the advent of BI, will allow people with less technical experience to more quickly use models and gain insights from machine learning.

That’s how new technology grows, and I’d like to see TDWI focus on some of the new tools.

Summary

This was a good webinar, worth the time for those of you who are interested in a basic discussion of where machine learning is within the enterprise landscape.

What Makes Business Intelligence “Enterprise”?

I have an article in the Spring TDWI Journal. It has now been six months and the organization has been kind enough to provide me with a copy of my article to use on my site: TDWI_BIJV21N1_Teich.

If you like my article, and I know you will, check out the full journal.


Semantics and big data: Thought leadership done right

Dataversity hosted a webinar by Matt Allen, Product Marketing Manager at MarkLogic. Mr. Allen’s purpose was to explain to the audience the basic challenges in big data that can be addressed by semantic analysis. He did a good job. Too many people attempting the same thing spend too much time on their own product; Matt didn’t. Sure, when he did turn to his product he showed some of the same issues that many in our industry have, such as overselling change, but the corporate references were minimal and the first half of the presentation was almost all basic theory and practice.

Semantics and Complexity

On a related tangent, one of the books I’m reading now is Stanley McChrystal’s “Team of Teams.” In it, he and his co-authors point to a distinction between complicated and complex. A manufacturing process can be complicated: it can have lots of steps, but with a clearly delineated flow. Complex problems have many-to-many relations which aren’t set and can be very difficult to understand.

That ties clearly into the message put forward by MarkLogic. The massive amount of unstructured data is complex: it is text rather than fields, and it needs ways of understanding potential meaning. The problems in free text include things such as:

  • Different words can define the same thing.
  • The same word can mean different things in different contexts.
  • Depending on the person, different aspects of information are needed for the same object.

One great example that can contain all of those issues came when Matt talked about the development process. At different steps in the process, from discovery, through the development stages, to product launch, there’s a lot of complexity in the meanings of terms, not only within development organizations but between them and all the groups in the organization with whom they have to work.

Mr. Allen then moved from discussing that complexity to talking about semantic engines. MarkLogic’s NoSQL engine has a clear market focus on semantic logic, but during this section he did well to minimize the corporate pitch and only talked about triples.

No, not baseball. Triples are a syntactical tool to link subject (person), predicate (operates) and object (machine). By building those relationships, objects can be linked in a less formal and more dynamic manner. MarkLogic’s data organization is based on triples. Matt showed examples of JSON, Turtle and XML representations of triples, very neatly sliding his company’s abilities into the theory presentation – a great example of how to mention your company while giving a thought leadership presentation without being heavy handed.
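
For the curious, here’s what one of those triples looks like in practice – a sketch of my own using Python and the open source rdflib package, not MarkLogic’s engine or API:

```python
# A sketch of building a subject-predicate-object triple and serializing it
# to Turtle. Uses the open source rdflib package, not MarkLogic's engine.
from rdflib import Graph, Namespace

EX = Namespace("http://example.com/")
g = Graph()
g.bind("ex", EX)

# subject (person) - predicate (operates) - object (machine)
g.add((EX.alice, EX.operates, EX.machine42))

# Turtle is one of the representations Matt showed, alongside JSON and XML.
print(g.serialize(format="turtle"))
```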

Semantics, Databases and the Company

The final part of the presentation was about the database structure needed to handle semantic analytics. This is where he overlapped the theory with a stronger corporate pitch.

Without referring to a source, Mr. Allen stated that relational databases (RDBMSs) can only handle 20% of today’s data. While it’s clear that a lot of the new information is better handled in Hadoop and less structured data stores, it’s really a question of performance, and I’d prefer to see the focus there.

Another error, one often made by folks promoting new technologies, was the statement that “Relational databases aren’t solving a lot of today’s problems. That’s why people are moving to other technologies.” No, they’re extending today’s technologies with less structured databases. The RDBMS isn’t going away, as it does have its purpose. The all-or-nothing message creates a barrier to enterprise adoption.

The final issue is the absolutist view of companies that think they have no competitor. Matt Allen mentioned that MarkLogic is the only enterprise database using triples. That might be literally true. I’m not sure, but so what? First, triples aren’t a new concept, and object-oriented databases have been managing triples for decades to do semantic analysis. Second, I recently blogged about Teradata Aster and that company’s semantic analytics. While they might not use the exact same technology, they’re certainly a competitor.

Summary

Matt Allen did a very good job exposing people to why semantic analysis matters for business and then covered some of the key concepts in the arena. For folks interested in the basics, to understand how the concept can help them, watch the replay or talk with folks at MarkLogic.

The only hole in the presentation is that though the high-level position setting was done well, the end, where MarkLogic was discussed in detail, had some of the same problems I’ve seen in other smaller, still technology-driven companies.

If Mr. Allen simplifies the corporate message, the necessary addition at the end of the presentation will flow better. However, that doesn’t take away from the fact that the high level overview of semantic analysis was done very well, discussing not only the concepts but also a number of real world examples from different industries to bring those concepts alive for the audience. Well done.

Looker at the BBBT: A New Look at SQL Performance

The most recent BBBT presentation was by Looker. Lloyd Tabb, Founder & CTO, and Zach Taylor, Product Marketing Manager, showed up to display yet another young company’s interesting technology.

Looker’s technology is an application server that sits above relational databases to provide faster, more complex queries. They’ve developed their own language, LookML, to help with that. That’s no surprise, as Lloyd is a self-described language guy.

It’s also no surprise that the demos, driven by both Lloyd and Zach, were very coding heavy. Part of the reason that very technical focus exists is, as Mr. Tabb stated, that Looker thinks there are two groups of users: coders who build models and business managers who use the information. There is no room in that model for the business analyst, the person who understands how to communicate a complex business need to the coders and how to help the coders deliver something that is accessible to and understandable by the information consumer.

The bifurcation played out in the demonstration through an almost exclusive focus on code, code and more code, with a brief display of some visualization technology. The former was very good while the latter wasn’t bad but, in keeping with the mainly technology focus, offered complex visualizations without good enough legends – visualizations that would be understood by technical people but need to be better explained for the business audience Looker claims to address.

As an early stage company, that’s ok. The business intelligence (BI) market is still young and very fragmented. You can find different groups in large companies using different BI tools. While Looker talks about 300 customers, as with most companies of their size those could simply be such small groups. If they’re going to grow past those groups, they need to focus a bit more on how to better bridge technology and business.

They also have a good start in attracting the larger market because they support both cloud and on-premises systems. The former market is growing while the latter isn’t going away. Providing the ability for their server to run in either place will address the needs of companies on either side of the divide.

RDBMS ≠ SQL

One key to their system is that they don’t move data. It stays resident on the source systems. Those could be operational systems, data warehouses, an ODS or whatever. What those systems must have is SQL. When asked about Hadoop and other schema-on-read systems, the Looker team stated they are an RDBMS-based application but they’ll work on anything with SQL access. I have no problem with the technology, but they need to be very clear about the split.
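
To illustrate what “anything with SQL access” means in practice, here’s a sketch of my own – plain SQLAlchemy in Python with invented connection strings and table names, not Looker’s LookML layer – showing the same query running unchanged against different SQL-speaking sources while the data stays put:

```python
# A sketch of "works on anything with SQL access": one query, several engines.
# Connection strings, table and column names are hypothetical; this is plain
# SQLAlchemy, not Looker's LookML layer.
from sqlalchemy import create_engine, text

SOURCES = [
    "postgresql://analyst:pw@warehouse.example.com/sales",   # data warehouse
    "hive://analyst@hadoop-edge.example.com:10000/default",  # SQL-on-Hadoop (needs PyHive)
]

QUERY = text("SELECT region, SUM(revenue) AS rev FROM orders GROUP BY region")

for url in SOURCES:
    engine = create_engine(url)  # the data stays resident on the source system
    with engine.connect() as conn:
        for region, rev in conn.execute(QUERY):
            print(url.split("@")[-1], region, rev)
```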

SQL came from the relational world but, as they pointed out in an aside, it isn’t limited to that. They should drop the RDBMS message and focus on SQL. As Lloyd Tabb said, “SQL is the right abstraction.” What I don’t know, given his focus on technology and the biases that come with it, is whether he understands that it isn’t the right abstraction because of some technical advantage but because it’s the major player. McDonald’s isn’t the best burger just because it has the most stores. SQL might not be the best access method, but it’s the one business knows and so it’s the one the newer database companies and structures can’t ignore.

Last year, the BBBT heard from multiple companies, including Actian and EXASOL, focused on providing SQL access to Hadoop. That’s as important as what Looker is doing. The company that manages to do both well will jump ahead of the pack.

Summary

Looker is a good, young company with some technical advantages that can greatly improve the performance of SQL queries to business databases, and it provides a basic BI front end to display the results. I’m not sure they have the resources to focus on both, and I think the former has the clearest advantage in the marketplace. Unless they have more funding and a strong management team that can begin to better understand the business side of the market, they will have problems addressing the visualization side of BI. They need to keep improving their engine, spread it to access more data sources, and partner with visualization companies to provide the front end.

Silwood at BBBT: Understand Packaged Software Metadata

Tuesday saw a rare, mid-week presentation at the BBBT. Silwood Technology, an Ascot, UK, company, sent people to Boulder to present their technology. Roland Bullivant, Sales and Marketing Director, and Nick Porter, Technical Director (and a co-founder), were the presenters.

Silwood Safyr is focused on helping IT understand the metadata in their major packaged enterprise systems, primarily from SAP and Oracle with a recent addition of Salesforce. As those familiar with the enterprise application space know, there are a lot of tables in SAP and Oracle and documentation has never been, shall we say, close to perfect. In addition, all customers of those systems customize the applications, thereby making the metadata more difficult to understand. Safyr does a very good job at finding the technical metadata.

Let me make that clear: Technical metadata. The tables, indices and their relations are what is found. That’s extremely valuable, but not the full picture. Business metadata is not managed. I’ll discuss that in more detail below.

The company, as is expected of European companies, uses partners rather than direct sales as its primary sales channel. In addition, they OEM white-label products through IBM, CA and other firms. All told, Roland Bullivant says that 70% of their customers come via reseller channels. Also as expected, they remain backline support for those partners.

Metadata Matters

As mentioned above, Safyr captures the database structure metadata. As Roland so succinctly put it, “The older packages weren’t really built with the outside world in mind.” The internal structures aren’t pretty and often aren’t easily accessible. However, that’s not the only difficulty in understanding an enterprise’s data structures.

Salesforce has a much simpler data structure, intentionally created to open the information to the ecosystem of partner applications that then grew up around the application. Still, as Mr. Bullivant pointed out, there are companies in Europe that have 16 or more customized versions in different countries or divisions, so understanding and meshing those disparate systems in order to build a full enterprise data model isn’t easy. That’s where Safyr helps.

But What Metadata?

Silwood Safyr is a great leap forward from having nothing, but there’s still much missing. While they build a data model, there’s not enough intelligence. For instance, they leave it to their users to figure out which tables are production and which are duplicates or other tables used just for performance. Sure, a table with zero rows usually means either a performance table or an unlocked app segment, but that determination is left to the user rather than the product flagging and filtering such tables or otherwise indicating knowledge of the application and data structures.
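
That kind of flagging wouldn’t need to be elaborate. Here’s a hypothetical sketch in Python – the table names and row counts are invented, and this is not Safyr functionality – of the sort of hint the product could surface:

```python
# Hypothetical sketch, not Safyr functionality: flag zero-row tables from
# exported metadata as likely performance tables or unused app segments.
tables = [
    ("VBAK", 1_204_337),   # row counts invented for illustration
    ("VBAP", 9_850_112),
    ("ZTMP_CALC", 0),
    ("ZSTAGE_OLD", 0),
]

def flag_zero_row_tables(tables):
    """Return table names that are probably not production tables."""
    return [name for name, row_count in tables if row_count == 0]

print(flag_zero_row_tables(tables))  # ['ZTMP_CALC', 'ZSTAGE_OLD']
```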

Also, as mentioned above, there’s no business intelligence (gosh, where’d that word come from?). There’s nothing that lets people understand the business logic of the applications. That’s why this is a pure IT tool. The structures are just described in technical terms, exported to data modeling tools (a requirement for visualization; ERwin was used in the demo but they work with others) and then left to the analysts to identify all the information needed to clarify which tables are required for which business purpose or customer.

One way to start working on that was indicated in Nick Porter’s demo. He showed that Safyr is good not just at getting table names but also at accessing descriptive names and other metadata about the tables. That information needs to be leveraged to help prepare the results for use by people on the business side of the organization.

Where to Go From Here?

The main hole I see follows from the business issue in the last section: the lack of emphasis on business knowledge. For instance, there’s a comparison function to analyze metadata between databases. However, as it works purely on a technical level, it’s limited to comparing SAP with SAP and Oracle with Oracle. Given that differences in versions of those products can be significant, I’m not even sure how well that works across major version releases.

Not only do global enterprises have multiple versions of one vendor, they have SAP on one continent, Oracle in another and might acquire a new company that is using Salesforce. That lack of an ability to link business layers means that each package is working in a void and there’s still a lot of work required to build a coherent global picture.

Another part of their growth need is my usual soapbox. When the Silwood team was talking about how they couldn’t figure out why they weren’t growing as fast as they should be, Claudia Imhoff beat me to the punch. She mentioned marketing. They’d earlier pointed out that they don’t spend much on marketing, and she quickly pointed out that’s a problem. This isn’t Field of Dreams; customers won’t come just because you build it. Silwood’s marketing basics are good, with a lack of visible case studies being one hole, but they’re not pushing their message out through the channels.

Summary

Silwood Safyr is a good core product to help IT automate the documentation of data models in packaged enterprise software. It’s a product that should be of interest to every large enterprise using complex applications such as those by Oracle and SAP, or even multiple versions of simple databases such as Salesforce. However, there are two things missing.

The most important missing piece in the short term is the marketing necessary to help their resellers better understand the benefits both they and the end customer receive, to improve interest in reselling and to shorten sales cycles.

The second is to look long term at where they can grow the business. My suggestion is to better work with business logic within and across applications vendors. That’s the key way they’ll defend their turf against the BI vendors who are slowly moving downstream to more technical data access.

The reason people want to understand data models isn’t out of curiosity, it’s to better understand business. Silwood has a great start in aiding enterprises in improving that understanding.

Revolution Analytics at BBBT: Vision and products for R need to mesh

Revolution Analytics presented to the BBBT last Friday. The company is focused on R, with a stated corporate vision of “R: The de facto standard for enterprise predictive analytics.” Bill Jacobs, VP, Product Marketing, did most of the talking while Steve Belcher, Sales Engineer, gave a presentation.

For those of you unfamiliar with R as anything other than a letter smack between Q and S, R is an open source programming language for statistics and analytics. The Wikipedia article on R points out it’s a combination of Scheme and S. As someone who programmed in Scheme many years ago, the code fragments I saw didn’t look like it but I did smile at the evolution. At the same time, the first thing I said when I saw Revolution’s interactive development environment (IDE) was that it reminded me of EMACS, only slightly more advanced in thirty years. The same wiki page referenced earlier also said that R is a GNU project, so now I know why.

Bill Jacobs was yet another vendor presenter to mention that his company realized the growth of the Internet of Things (IoT) means a data explosion that will leave what is currently misnamed big data in the dust as far as data volumes go. He says Revolution wants to ensure that companies are able to effectively analyze IoT and other information, and that his company’s R is the way to do so.

Revolution Analytics is following in the footsteps of many companies which have commercialized freeware over the years, including Sun with Unix and Red Hat with Linux. Open source software has some advantages, but corporate IT and business users require services including support, maintenance, training and more. Companies which can address those needs can build a strong business and Revolution is trying to do so with R.

GUI As Indicative Of Other Issues

I mentioned the GUI earlier. It is very simple and still aimed at very technical users, people doing heavy programming and who understand detailed statistics. I asked why and was told that they felt that was their audience. However, Bill had earlier talked about analytics moving forward from the data priests to business analysts and end users. That’s a dichotomy. The expressed movement is a reason for their vision and mission, but their product doesn’t seem to support that mission.

Even worse was the response when I pointed out that I’d worked on the Apple Macintosh before and after MPW was released and had worked at Gupta when it had the first 4GL language on the Windows platform. I received a long-winded answer as to why going to a better and easier to use GUI wasn’t in the plans. Then Mr. Jacobs mentioned something to the effect of “You mentioned companies earlier and they don’t exist anymore.” Well, let’s forget for a minute that Gupta began a market, that others such as Powersoft also did well for years, and that though Microsoft then came out with its Visual products to control the market, there were many good years for other firms and the products are still there. Let’s focus instead on wondering when Apple ceased to exist.

It’s one thing to talk about a bigger market message in the higher points of a business presentation. It’s another, very different, thing to ensure that your vision runs through the entire company and product offering.

Along with the vision mentioned above, Revolution Analytics presents a corporate mission to “Drive enterprise adoption of R by providing enhanced R products tailored to meet enterprise challenges.” Enterprise adoption will be hindered until the products work for more than specialist programmers and can address a wider enterprise audience.

Part of the problem seems to be shown in the graphic below.

Revolution Analytics tech view of today

Revolution deserves credit for accurately representing the current BI space in a snapshot. The problem is that it is a snapshot of today, and there wasn’t an indication that the company understands how rapidly things change. Five to ten years ago, the middle column was the left column. Even today there’s a very technical need for the people who link the data to those products in order to begin analysis. In the same way, much of what is in the right column was in the middle. In only a few years, the left column will be in the middle and the middle will be on the right.

Software evolves rapidly, far more rapidly than physical manufacturing industries. Again, in order to address their enterprise mission, Revolution Analytics’ management is going to have to address what’s needed to move toward the right-hand columns that mean enterprise adoption.

Enterprise Scalability: A Good Start

One thing they’ve done very well is to build out a scaled product suite, from free to enterprise, to attract different sized businesses, individual departments and a wider audience.

Revolution Analytics product suite

They seem to have done a good job of providing a layered approach from free use of open source to enterprise weight support. Any interested person should talk with them about the full details.

Summary

R is a very useful analytical tool and Revolution Analytics is working hard to provide business with the ability to use R in ways that leverage the technology. They support both groups who want pure, free open source and others who want true enterprise support, in the way other open source companies have succeeded in previous decades.

Their tool does seem powerful, but it is still clearly and admittedly targeted at the very technical user, the data priests.

Revolution Analytics seems to have the start of a good corporate mission and I think they know where they want to end up. The problem is that they haven’t yet created a strategy that will get them to meet their vision and mission.

If you are interested in using R to perform complex analysis, you need to talk to Revolution Analytics. They are strong in the present. Just be aware that you will have to help nudge them into the future.

TDWI and IBM on Predictive Analytics: A Tale of Two Foci

Usually I’m more impressed with the TDWI half of a sponsored webinar than by the corporate presentation. Today, that wasn’t the case. The subject was supposed to be about predictive analytics, but the usually clear and focused Fern Halper, TDWI Research Director for Advanced Analytics, wasn’t at her best.

Let’s start with her definition of predictive analytics: “A statistical or data mining solution consisting of algorithms and techniques that can be used on both structured and unstructured data to determine outcomes.” Data mining uses statistical analysis, so I’m not quite sure why that needs to be mentioned. However, the bigger problem is at the other end of the definition. Predictive analysis can’t determine outcomes, but it can suggest likely outcomes. The word “determine” is much too forceful to honestly describe prediction.
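
Any standard toolkit makes the point: a predictive model returns likelihoods, not determined outcomes. A minimal sketch of my own, using Python, scikit-learn and toy data, none of which came from the webinar:

```python
# A minimal sketch of prediction as likelihood, not determination.
# scikit-learn and the toy data are illustrative, not from the webinar.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy history: customer tenure in years vs. whether they churned (1 = yes).
X = np.array([[0.5], [1.0], [1.5], [4.0], [5.0], [6.0]])
y = np.array([1, 1, 1, 0, 0, 0])

model = LogisticRegression().fit(X, y)
p_stay, p_churn = model.predict_proba([[2.0]])[0]  # probabilities, not certainty
print(f"P(stays) = {p_stay:.2f}, P(churns) = {p_churn:.2f}")
```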

Ms. Halper’s presentation was also, disappointingly compared to her usual focus, primarily off topic. It dealt with the basics of current business intelligence. There was useful information, such as her reference to Dave Stodder’s numbers showing that only 31% of surveyed folks say their businesses have BI accessible to more than half their employees. The industry is growing, but slowly.

Then, when first turning to predictive analytics, Fern showed results of a survey question about who would be building predictive analytics. As she mentioned, it was a survey of people already doing it, so there’s no surprise that business analysts and statisticians, the people doing it now, were the folks respondents felt would continue to do it. However, as BI vendors include better analytics and other UI tools, it’s clear that predictive analytics will slowly move into the hands of the business knowledge worker, just as other types of reporting have.

The key point of interest in her section of the presentation was the same I’ve been hearing from more and more vendors in recent months: The final admission that, yes, there are two different categories of folks using BI. There are the technical folks creating the links to sources, complex algorithms and reports and such, and there are the consumers, the business people who might build simple reports and tweak others but whose primary goal is to be able to make better business decisions.

This is where we turn to David Clement, Product Marketing Manager, BI & Predictive Analytics, IBM, the second presenter.

One of the first things out of the gate was that IBM doesn’t talk about predictive analytics but about forward looking business intelligence. While the first thought might be that we really don’t need yet another term, another way to build a new acronym, the phrase has some interesting meaning. In a new industry, where most companies are run by techies focused on technology, it’s no surprise that the analytics are the focus. However, why do analytics? This isn’t new. Companies don’t look at historic data for purely nostalgic reasons. Managers have always tried to make predictions based on history in order to better future performance. IBM’s turn of phrase puts the emphasis on the forward look, not on how that forward look is aided.

The middle of his presentation was the typical dog and pony show with canned videos to show SPSS and IBM Cognos working together to provide forecasting. As with most demos, I didn’t really care.

What was interesting was the case study they discussed, apparel designer Elie Tahari. It’s a case study that should be studied by any retail company looking at predictive analytics, as a 30% reduction in logistics costs is an eye catcher. What wasn’t clear is whether that amount was from a starting point of zero BI or from adding predictive analytics on top of existing information.

What is clear is that IBM, a dinosaur in the eyes of most people in Silicon Valley and Boston, understands that businesses want BI and predictive analytics not because it’s cool or complex or anything else they often discuss – it’s to solve real business problems. That’s the message and IBM gets it. Folks tend to forget just how many years dinosaurs roamed the earth. While the younger BI companies are moving faster in technology, getting the ears of business people and building a solution that’s useful to them matters.

Summary

Fern Halper did a nice review of the basics about BI, but I think the TDWI view of predictive analytics is too much industry group think. It’s still aligned with technology as the focus, not the needs of business. IBM is pushing a message that matters to business, showing that it’s the business results that drive technology.

Businesses have been doing predictive analysis for a long time, as long as there’s been business. The advent of predictive analytics is just a continuance of the march of software to increase access to business information and improve the ability of business management to make timely and accurate decisions in the marketplace. The sooner the BI industry realizes this and starts focusing less on just how cool data scientists are and more on how cool it is for business to improve performance, the faster adoption of the technology will pick up.

NuoDB at the BBBT: Another One Bringing SQL to the Cloud

Today’s presentation in front of the BBBT was by NuoDB’s CTO, Seth Proctor. NuoDB is a small company with big investments. What makes them so interesting? It’s the same thing as in many of the other platform presenters at the BBBT. How do we get real databases in the Cloud?

Hadoop is an interesting experiment and has clearly brought value to the understanding of massive amounts of unstructured data. The main value, though, remains that it’s cheap. The lack of SQL means it’s ok for point solutions that don’t stress its performance limitations. Bringing enterprise database support to the cloud is something else.

The main limitation is that Hadoop and other unstructured databases aren’t able to handle transactional systems while those still remain the major driver in operating businesses.

NuoDB has redesigned the database from the ground up to be able to run distributed across the internet. They’ve created a peer-to-peer structure of processes, with separate processes to manage the database and SQL front end transaction issues.

Seth pointed out that they “have done nothing new, just things we know put together in a new way.” He also pointed out they have patents. My gripe about patents for software is an issue for another day, but that dichotomous pairing points to one reason for it (Apple’s patent on a rounded rectangle is another example of the broken patent system, but off the soapbox and onwards…).

It’s clear that old-line RDBMS products were designed for major, on-premise servers. The need for a distributed system is clear, and NuoDB is at the forefront of creating one. One intriguing potential strength, one there wasn’t time to discuss in the presentation, is a statement about the object-oriented structure needed for truly distributed applications.

Mr. Proctor stated that the database schema is in object definitions, not hard coded into the database. He added that this provides more flexibility on the fly. What it could also mean is that the schema isn’t restricted to purely relational layouts and that future versions of their database could support columnar and even unstructured data. For now, however, the basic ability to change even a standard row-based relational database on the fly, without major impacts on performance or existing applications, is a strong benefit.
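
If the claim holds up, an online schema change looks like any other statement from the client side. A sketch under stated assumptions: this uses generic PEP 249 (DB-API) style, and `nuodb_driver` is a placeholder module name, not NuoDB’s actual client library.

```python
# Sketch of an on-the-fly schema change against a distributed SQL database.
# 'nuodb_driver' is a hypothetical PEP 249 (DB-API) client, used here as a
# placeholder; it is not NuoDB's actual library name.
import nuodb_driver  # hypothetical module

conn = nuodb_driver.connect(database="orders", host="broker.example.com",
                            user="dba", password="secret")
cur = conn.cursor()

# Because the schema lives in object definitions rather than being hard coded
# into the database, a change like this shouldn't require downtime, per the
# presentation's claim.
cur.execute("ALTER TABLE customers ADD COLUMN loyalty_tier VARCHAR(16)")
conn.commit()
conn.close()
```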

As the company is young and focused on the distributed aspects of performance, it was also admitted that their system isn’t yet one for big data, even structured data. They’re not ready for terabytes, not to mention petabytes of data.

The Business

That’s the techie side, but what about business?

The company is focused on providing support for distributed operational systems. As such, Seth made clear they haven’t looked at implementations supporting both operational and analytical systems. That means BI is not a focus and so the product might not be the right system for providing high level business insight.

In addition, while I asked about markets, I mainly got an answer about Web sites. They seem to think the major market isn’t Global 1000 businesses looking to link distributed operational systems but that Web commerce sites are their sweet spot. One example referred to a few times was transactional systems for businesses selling across a country or around the world. If that’s the focus, it’s one that needs to be made more explicit on their web site, which really doesn’t discuss markets in the least.

It’s also an entry into the larger financial markets space. Finance and medical have always been two key verticals for new database technologies due to their volumes of information. That also means they need to prioritize the admitted lack of large database support or they’ll hit walls above the SMB market.

The one business thing that bothers me is their pricing model. It’s based on the number of hosts. As the product is based on processes, there’s no set number of processes per host. In addition, they mentioned shared hosting, places such as AWS, where hosts may be shared by multiple NuoDB customers or where load balancing might take your processes and have them on one host one day and multiple hosts the next.

Host-based pricing seems to be a remnant of the on-premises database systems that Cloud vendors claim to be leaving behind. In a distributed, internet based setup, who cares how big the host is, where the host is, or anything else about the host? The work the customer cares about is done by the processes, the objects containing the knowledge and expertise of NuoDB, not the servers owned by the hosting firm. I would expect Cloud companies to move pricing from processors to processes.

Summary

NuoDB is a company focused on reinventing the SQL database for the Cloud. They have significant investment from the VC and business markets. However, it would be foolish to think that Oracle, IBM and other existing mainstream RDBMS vendors aren’t working on the same thing. What NuoDB described to the BBBT used most of the right words from the technology front and they’re ramping up their development based on the investments, but it’s too early to say if they understand their own products and markets enough to build a presence for the long term.

They have what looks like very interesting technology but, as I keep repeating in review after review, we know that’s not enough.

TDWI Webinar Review: Business-Driven Analytics. Where’s business?

Today’s TDWI webinar was an overview of their latest best practices report. The intriguing thing was that the numbers show BI & analytics still aren’t business driven. As Dave Stodder, Director of Research for Business Intelligence, pointed out, there are two key items supporting that conclusion. First, more than half of companies have BI in less than 30% of the organization, showing that a large number of businesses aren’t prioritizing BI. Second, most of the responses to questions about BI show that it’s still something controlled and pushed by IT.

One point Dave mentioned was the still overwhelming presence of spreadsheets. They aren’t going away soon. A few vendors who have presented at the BBBT have also pointed out their focus on integrating spreadsheets rather than ignoring all the data that resides in them or demanding everything be collected in a data repository. The sooner more vendors realize they need to work with the existing business infrastructure rather than fight against it, the better off the industry will be.

Another interesting point was the influence of the CMO. I regularly read analysts and others talking about how the “CMO has a bigger IT budget than the CIO!” The numbers from the TDWI survey don’t bear that out. One slide, a set of tables representing different CxO-level positions’ involvement in different areas of the IT buying process, shows the CMO up near the CIO for identifying the need but far behind in every other category – categories that include “allocate budget” and “approve budget.” In tech firms, and especially in Silicon Valley, people look around at other firms involved in the internet and forget they’re a small subset of the overall market.

Another intriguing point was brought out in the survey. Of companies with Centers of Excellence or similar groups to expand business intelligence, the list of titles involved in those groups shows an almost complete dearth of business users. It seems that IT still thinks of BI as a cool toy they can provide to users, not something that business users need to be involved in to ensure the right things are being offered. Only 15% show line of business management involved while a pathetic 4% show marketing’s involvement.

The last major point I’ll discuss is an interesting but flawed question/answer table. The question was about how business-side leadership is doing during different aspects of a BI project. The numbers aren’t good. However, as we’ve just discussed, business isn’t included as much as it should be. Two things come to mind:

  • What would the pair of charts look like if the chart was split to look at how IT and business respondents each look at the question?
  • Is it an issue of IT not involving business or business not getting involved when opportunities are presented?

Summary

TDWI’s overview of the current state of business-driven BI & analytics seems to show that there’s a clear demand from the business community, but there doesn’t seem to be the business involvement needed to finish the widespread expansion of BI into most enterprises.

What I’d like to see TDWI focus on next is the barriers to that spread, the things that both IT and business see as inhibitors to expanding the role of modern BI tools in the business manager’s and CxO suite’s daily decision making.

It’s a good report, but only as a descriptive analysis of current state. It doesn’t provide enough information to help with prescriptive action.

EXASOL at the BBBT: Big Data, fast database. Didn’t I just hear this?

Friday’s EXASOL presentation to the BBBT brought a strong feeling of déjà vu. I’ve already blogged about the Tuesday Actian presentation and, to be honest, there were technical differences but I came to the same conclusion about the business model. But first, a thanks to Microsoft for the autocorrect feature. Otherwise typing EXASOL in all caps each time would have been bothersome.

The EXASOL presenters were Aaron Auld (@AaronAuldDE), CEO, and Kevin Cox (@KJCox), Director Sales and Marketing.

I mentioned technical differences. First, and foremost, they didn’t start with hardware but with an initial algorithm for massively parallel processing (MPP). They figured it was a great way to speed up database performance and stuck with columnar, relational technology. That has allowed them to work on multi-terabyte systems with fast performance.
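
From the client side, talking to such an engine is ordinary SQL. A sketch assuming the open source pyexasol client, with invented connection details and table names:

```python
# Sketch of querying EXASOL from Python, assuming the open source pyexasol
# client. DSN, credentials and table are invented for illustration.
import pyexasol

conn = pyexasol.connect(dsn="exasol-cluster.example.com:8563",
                        user="analyst", password="secret")

# The MPP, columnar engine is what makes scans like this fast; the SQL is plain.
stmt = conn.execute("SELECT region, SUM(revenue) FROM sales GROUP BY region")
for row in stmt:
    print(row)

conn.close()
```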

They have published some great TPC-H benchmark numbers, often two orders of magnitude better than the competition. They admitted that TPC stats are questionable – they’ve been defined by the big vendors to benefit their own performance, often don’t reflect real-life queries and often don’t use typical hardware – but the numbers were still impressive. In addition, it was a smart business move: a small company blowing away the big vendors’ benchmarks helps elevate visibility and gets them in doors.

However, let’s look back at Actian. They also talked about TPC, but they used the TPC-DS benchmark. How do you compare? Well, you can’t.

One other TPC factoid: just like their competitor, there’s no clear information on true multi-user performance in today’s mobile age. No large number of connected clients was mentioned.

So the results are great, but how do they fight the Hadoop bandwagon? They understand that open source is cheaper from a license standpoint, but they also point out that their performance saves money in a direct comparison when you total all the costs of an implementation. People forget that while hardware prices have dropped, servers aren’t free.

Unfortunately, from a business model standpoint, it looks like they’re making the typical startup mistake of focusing on their product rather than business needs. They understand that ROI matters, but it seems to be too far down the list in their corporate messaging.

Another major advantage they share with the previous presenters is that sticking with SQL makes it easier to build an ecosystem that includes the existing vendors from ETL through visualization. However, they seem to be a bit further behind the curve in building those partnerships. While they have a strong strategic understanding of that, they need to bubble it up the priority list.

Exasol platform offering

One critical business success they have is their inclusion in the Dell Founders Club 50. That means advice and cooperation from Dell to help improve their performance and expand their presence. For a small company to have access not only to Dell at the technical level but also to bring customers to Dell Solution Centers for demonstrations is a great thing.

While they have been focused on MPP and large customers, the industry move to the Cloud also means they are looking at smaller licensing options, including a potential one-node free trial.

However, as mentioned in the lead, they seem to have the same business model issue as their competitors: they’re focused on the bleeding edge market, buyers who think the main message is performance. While they know there are other aspects to the buying decision, they went back, again and again, to performance. They have the whole picture in mind, but they’re not yet thinking of the mass market.

Organizations such as TDWI, Gartner and Forrester have all reported the high percentage of organizations that are considering big data and how to get a handle on the vast volume of information coming from heterogeneous sources. There’s clearly demand building up behind the dam. The problem seems to be that those organizations are trying, as major IT organizations always do, to understand how best to integrate new technologies and capabilities with as little pain as possible. Meanwhile, the vendors seem to still be focused on the early adopters with their messaging. That leaves dollars on the table and slows adoption of new technology.

Summary

EXASOL seems to have a strongly performing and highly scalable database technology for working with large data sets. Yet, like many companies in the business intelligence space, it comes back to audience. Are they still aiming at early adopters or will they focus on the mass market?

Have BI and big data advanced to the point where people need to think about the chasm and how to better address business needs, not just technical issues? I think so, and I hope they adjust their business focus.

The company seems to have great potential, but will they turn that into reality? As the great Yogi Berra said, “It’s like deja vu all over again.”