
Diyotta: Data integration for the enterprise

I’m still catching up and reviewed a video of last month’s Diyotta presentation to the BBBT. The company is another young data integration firm, founded in 2011, working to take advantage of current technologies to provide not just better data integration but also better change management of modern data infrastructures. In many ways, they’re similar to another company, WhereScape, which I discussed last year. Both are young and small, while the market is large and the need is great.

The presentation was given by Sanjay Vyas, CEO, and John Santaferraro, CMO. The introduction by Sanjay was one of the best from a small company founder that I’ve seen in a long time. He gave a brief overview of the company, its size and its global structure (with HQ in Charlotte, NC, and two offshore development centers). Then he went straight to what most small companies leave for last: He presented a case study.

My biggest B2B marketing point is that you need to let the market know you understand it. Far too many technical founders spend their time talking about the technology they built to solve a business problem rather than the business problem the technology addresses. Mr. Vyas went to the heart of the matter. He showed the pain in a company, the solution and, most importantly, the benefits. That is what succeeds in business.

It also wasn’t an anonymous reference; it was Scotiabank, a leading Canadian bank with a global presence. When a company that large gives a named reference to a startup as small as Diyotta, you know the firm is happy.

John Santaferraro then took over for a bit with mostly positive impact. While he began by claiming a young product was mature because it’s at version 3.5, no four-year-old firm still working on angel investments has a fully mature product. From the case study and what was demoed later, it’s a great product, but it’s clear it’s still early and needs work. There’s no need to oversell.

The three main markets John said Diyotta aims at are:

  • Big data analytics.
  • Data warehouse modernization.
  • Hybrid data integration including cloud and on-premises (though John was another marketing speaker who didn’t want to use the “s” at the end).

While the other two are important, I think it’s the middle one that’s the sweet spot. They focus on metadata to abstract business knowledge of sources and targets. While many IT organizations are experimenting with Hadoop and big data, getting a better understanding and improved control over the entire EDW and data infrastructure as big data is added and new mainline techniques arrive is where a lot more immediate pain exists.

Another marketing miss that could have incorporated that key point was when Mr. Santaferraro said that the old ETL methods no longer work because “having a server in the middle of it … doesn’t exist anymore.” The very next slide was as follows.

Diyotta markitechture slide

Diyotta still seems to have a server in the middle, managing the communications between sources and targets through metadata abstraction. The little “A’s” at the data endpoints are agents Diyotta uses to preprocess requests locally and optimize what can be optimized natively, but they’re still managed by a central system.

The message would be more powerful if it explained that the central server is mediating between sources and targets, using metadata, machine learning and other modern tools to allocate processing at the source, in the engine or in the target, wherever it runs best.

While there’s power in the agents, that technology has been used in other areas of software with mixed results. One concern is that it requires very close partnerships with the systems in which the agents reside. While nobody attending the live presentation asked about that, it’s a risk. The reason Sanjay and John kept talking about Netezza, Oracle and Teradata is that those are the firms for whose products Diyotta has created agents. Yes, open systems such as Hadoop and Spark are also covered, but agents do limit a small company’s ability to address a variety of enterprises. The company is still small, so as long as they focus on firms with setups similar to Scotiabank’s, they have time to grow, to add more agents and to widen their access to sources; but it’s something that should be watched.

On the pricing front, they use pricing purely based on the hub. There’s no per user or per connector pricing. As someone who worked for companies that used pricing that involved connectors, I say bravo! As Mr. Vyas pointed out, their advantage is how they manage sources and targets, not which ones you want them to access. While connecting is necessary, it’s not the value add. The pricing simplifies things and can save money compared with many more complex pricing schemes that charge for parts.

The final business point concerns compliance. An analyst in the room (sorry, I didn’t catch the name) asked about Sarbanes-Oxley. The answer was that they don’t yet directly address compliance but that their metadata will make it easier. For a company that focuses on metadata and whose main reference site is a major financial institution, it would serve their business to add something that explicitly addresses compliance.

Summary

Diyotta is a young company addressing how enterprises can leverage big data, as both target and source, alongside the existing infrastructure through better metadata management and data access. They are young and have many of the plusses and minuses that entails. They have some great technology, but it’s early and they’re still figuring out which markets to address and how.

The one major advantage they have, given what I’ve seen in only a two-hour presentation, is Sanjay Vyas. Don’t judge a startup on where they are now or where you think they need to be. Judge them on whether or not management seems capable of getting from point A to point B. Listening to Mr. Vyas, I heard a founder who understands both business and technology and will drive the company in the direction it needs to go.

IBM, you BM, we all BM for … Spark!

IBM at BBBT

A recent presentation by IBM at the BBBT was interesting. As usual, it was more interesting to me for the business information than for the technical details. As unusual, they did a great job of covering both: while many presentations lean too heavily in one direction or the other, this one balanced the two sides very well.

The main presenter was Harriet Fryman, VP of Marketing, IBM Analytics Platform. Adding information during the presentation were Steven Sit, Director of Product Management, Open Source Based Analytics Systems, and Steve Beier, Program Director, Spark Technology Center.

The focus of the talk was IBM’s commitment to Apache Spark. Before diving deep into the support, Ms. Fryman began by talking about business’ evolving data needs. Her key point is that “we all do data hoarding,” that modern technologies are allowing us to hoard far more data than ever before, and that better ways are needed to get value out of the data.

She then proceeded to define three key aspects of the growth in analytics:

  • Applying analytics in more parts of business.
  • Understanding the time value of data.
  • The growth of machine learning and cognitive systems.

The last two overlap, as the ability to analyze large volumes of data in near real time means a need to have systems do more of the analysis. The following slide also added to IBM’s picture of the changing focus on higher level information and analytics.

IBM slide - evolving approach to data

The presentation did go off on a tangent as some analysts overthought the differences between the various IBM groups for analytics and for Watson. Harriet showed great patience in saying they overlap, that different people start with different things and that internal organizational structures don’t impact IBM’s ability to leverage both.

The focus then turned back to Spark, which IBM sees as the unifying layer for data access. One key issue related to that is the Spark v Hadoop debate. Some people seem to think that Spark will replace Hadoop, but the IBM team expressed clear disagreement. Spark is access while Hadoop is one data structure. While Hadoop can allow for direct batch processing of large jobs, using Spark on top of Hadoop allows much more real-time processing of the information that Hadoop appropriately contains.

IBM slide - Spark markitechture

One thing on the slide that wasn’t mentioned but links up with messages from other firms, messages which I’ve supported, is that one key component, in the upper left-hand corner of the slide, is Spark SQL. Early Hadoop players were talking about NoSQL, but people are continuing to accept that SQL isn’t going anywhere.

Well, most people. At least fifteen minutes after this slide was presented, an attending analyst asked why IBM’s description of Spark seemed to be similar to the way they talk about SQL. All three IBMers quickly popped up with the clear fact that the same concepts drive both.
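For readers who haven’t touched it, here’s a minimal sketch of the pattern the slide and that exchange describe: Spark acting as the access and processing layer over data that already lives in Hadoop, queried with plain SQL. It’s purely illustrative, not IBM’s implementation; the HDFS path, column names and query are hypothetical.

```python
# Minimal sketch: Spark as the access layer, Hadoop (HDFS) as the store.
# The path and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-on-hadoop-sketch").getOrCreate()

# Read data already sitting in Hadoop; no separate batch MapReduce job needed.
events = spark.read.parquet("hdfs:///data/events/2015/")

# Spark SQL: the SQL skills the business already has, applied to big data.
events.createOrReplaceTempView("events")
top_products = spark.sql("""
    SELECT product_id, COUNT(*) AS views
    FROM events
    WHERE event_type = 'view'
    GROUP BY product_id
    ORDER BY views DESC
    LIMIT 10
""")
top_products.show()
```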

While the team continued to discuss Spark as a key business initiative, Claudia Imhoff asked a key question on the minds of anyone who noticed huge IBM going to open source: What’s in it for them? Harriet Fryman responded that IBM sees the future of Spark, and that to leverage it properly for its own business IBM needed to be part of the community, hence moving SystemML to open source. Spark may be open source, but the breadth and skills of IBM mean that value-added applications can be layered on top of it to continue the revenue stream.

Much more detail was then stated and demonstrated about Spark, but I’ll leave that to the more technical analysts and the vendor who can help you.

One final note put here so it didn’t distract from the main message or clutter the summary. Harriet, please. You’re a great expert and a top marketing person. However, when you say “premise” instead of “premises,” as you did multiple times, it distracts greatly from making a clear marketing message about the cloud.

Summary

IBM sees the future of data access to be Apache Spark. Its analytics group is making strides not only to align with open source, but to be an involved player helping the evolution of Spark’s data access. To ignore IBM’s combined strength in understanding enterprise business, software and services is to not understand that it is a major player in some of the key big data changes happening today. The IBM Spark initiative isn’t a marketing ploy, it’s real. The presentation showed a combination of clear business thought and strategy alongside strong technical implementation.

SAS: Out of the Statistician’s Pocket and into the Business Briefcase

I just saw an amusing presentation by SAS. Amusing because you rarely get two presenters who are both as good at presenting and as knowledgeable about their products. We heard from Mike Frost, Senior Product Manager for Data Management, and Wayne Thompson, Chief Data Scientist. They enjoy what they do and it was contagious.

It was also interesting from the perspective of time. Too many younger folks think that if a firm has been around for more than five years, it’s a dinosaur. That’s usually a mistake, but the view lives on. SAS was founded almost 40 years ago, in 1976, and has always focused on analytics. They have historically aimed at a market made up of serious mathematicians doing heavy statistical work. They’re very good at what they do.

The business analysis sector has been focused on less technical, higher level business number crunching and data visualization. In the last decade, growth in computing power has meant firms can dig deeper and start to provide the kind of analysis SAS has been doing for decades. The question is whether or not SAS can rise to the challenge. It’s still early, but the answer seems to be a qualified but strong “yes!”

Both for good and bad, SAS is the largest privately held software company, still driven by founder James Goodnight. The good is that a technically focused firm plows 23% of its revenue into R&D. However, it also leaves a question mark. I’ve worked for other firms long run by founders, one a 25+ year old firm still run by brothers. The best way to refer to the risk is a famous public failure: Xerox and the PC. For those who might not understand what I’m saying, read “Fumbling the Future” by Smith and Alexander. The risk comes down to the people in charge knowing the company needs to change but being emotionally wedded to what’s worked for so long.

The presentation to the BBBT shows that, while it’s still early in the change, SAS seems to be mostly avoiding that risk. They’re moving towards a clean, easier to use UI and taking their first steps towards collaboration. More work needs to be done on both fronts, but Mike and Wayne were very open and honest about understanding the need and about SAS continuing to move forward.

One of the key points by Mike Frost is one I’ve also discussed. While they disagree with me and think the data scientist does exist, the SAS message is clear that he doesn’t work in a void. The statistician, the business analyst and business management must all work in concert to match technical solutions to real business information needs.

LASR, VAE and a cast of thousands

The focus of the presentation was on SAS LASR, their in-memory analytics server. While it leverages Hadoop, it doesn’t use MapReduce because that involves disk access during processing, losing the speed advantage of in-memory applications.

SAS LASR architecture slide

As Mike Frost pointed out, “It doesn’t do any good to run the right model too late.”

One point that still shows the need to think more about business is that TCO was mentioned only in passing. No slide or strong message supported the point. They’re still a bit too focused on technology, not on what sells business decision makers on business intelligence (BI).

Another issue was the large number of ancillary products in the suite, including Visual Data Explorer, Data Loader and others. The team mentioned that SAS is slowly moving through the products to give them the same interface, but I also hope they’re looking at integrating as much as possible so the users don’t have the annoyance of constantly moving between products.

One nice part of the demo was a discussion of what SAS has termed “poorly structured data,” as opposed to the “unstructured data” that’s all the rage in Hadoop. I prefer “loosely structured data.” Mike and Wayne showed the ability to parse an incoming file and have machine intelligence make an initial pass at suggesting fields. While this isn’t new (I worked at a company in 2000 that was doing it), it’s a key part of quickly integrating such data into the business environment. The company I’m referencing had a founder who became involved in other things, and it died. While I’m surprised it took firms so long to latch onto and use the technology, it doesn’t surprise me that SAS is one of the first to openly push this.
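As a rough illustration of that kind of first-pass field suggestion (a conceptual Python sketch, not SAS’s actual implementation; the file name and thresholds are hypothetical), a tool can sample a delimited file and guess field names and types before a human confirms them:

```python
import csv

def suggest_field_types(path, sample_rows=100):
    """First-pass guess at field names and types from a delimited file.
    Purely illustrative; a real tool would also handle dates, locales, nulls, etc."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        samples = [row for _, row in zip(range(sample_rows), reader)]

    def guess(values):
        values = [v for v in values if v.strip()]
        if not values:
            return "unknown"
        if all(v.lstrip("-").isdigit() for v in values):
            return "integer"
        try:
            [float(v) for v in values]
            return "numeric"
        except ValueError:
            return "text"

    return {name: guess([row[i] for row in samples if len(row) > i])
            for i, name in enumerate(header)}

# Example: suggest_field_types("claims_extract.csv") might return
# {"member_id": "integer", "visit_cost": "numeric", "diagnosis": "text"}
```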

Another advantage brought by an older, global firm, and related to the parsing, is that it works in multiple languages, including right-to-left languages such as Hebrew and Arabic. Most startups focus on their own national language, and it can be a while before their applications are truly global. SAS already knows the importance and supports the need.

Great, But Not Yet End-To-End

The only big marketing mistake I heard was towards the end. While Frost and Thompson are rightfully proud of their products, Wayne Thompson crowed that “We’re not XXX,” a reference to a major BI player, “We’re end-to-end.” However, they’d shown only minimal visualization choices, and their collaboration features, admittedly, aren’t there yet.

Even worse for the message, only a few minutes later and in response to a question, one of the presenters showed how you export predicted values so that visualization tools with more power can help display the information to business management.

I have yet to see a real end-to-end tool and there’s no reason for SAS to push this iteration as more than it is. It’s great, but it’s not yet a complete solution.

Summary

SAS is making a strong push into the front end of analytics and business intelligence. They are busy wrapping tools around their statistical engines that will allow them to move much more strongly out of academia and the very technical depths of life sciences, manufacturing, defense and other industries to challenge in the realm of BI.

They’re headed in the right direction, but the risk mentioned at the start remains. Will they keep focused on this growing market and the changes it requires, or will that large R&D expenditure focus on the existing strengths and make the BI transition too slow? I’m seeing all the right signs; they just need to stay on track.

 

Dell at BBBT: Addressing BI from IT

The most recent BBBT presentation was from Dell Software. Peter Evans, Sr. Integrated Solutions Development Consultant, and Steven Phillips, Product Marketing Manager – Big Data & Analytics, gave us an overview of Dell’s architecture for addressing business intelligence (BI).

Dell platform slide 2015-05-15

What they’re working to accomplish is, no surprise, to ensure that Dell’s hardware can be present throughout the BI supply chain. For that, they’re working to be application agnostic, though they mislabel it as “no lock-in.” What they’re saying is you can change your software vendors and Dell will still be there. That doesn’t address true lock-in: the difficulty of changing from one software vendor to another, based on how open each system is with its data and the other costs of moving.

One marketing nit that caught a number of us was Peter’s early claim that Dell is “probably the third largest software company in the world.” Right… First, since Dell is now a privately held company, we have no way to confirm that. Second, I’m not sure he knows just how much revenue is needed to be near the top of that list.

IT First

Far too many young firms are overselling BI as something that will let business “avoid IT.” That’s not only impossible, it wouldn’t make sense if it were possible. IT has a clear place in organizing infrastructure, providing consistency, helping with compliance and doing the other things a central organization should do.

Dell has started with IT. They’re used to dealing with IT and their solution is focused on helping IT enable business. What’s not clear is how well they can do such a thing in the new world. They’ve pieced a lot of different applications into an architecture and that would seem to require heavy IT involvement in much of what’s being provided.

On the good side, that knowledge means they better understand true enterprise business needs. Unlike many vendors, Dell has regulatory and statutory compliance at the forefront, very clear in its marketechture slides. While most companies understand they have to mention compliance, it’s usually people dealing with corporate business groups such as IT and legal who understand just how critical compliance is.

Neither Peter Evans nor Steven Phillips spoke clearly to the business user or to that user’s desire for speed and flexibility. While younger companies need to move more towards addressing the importance of IT, Dell needs to focus more strongly on the business customers, the ones who are often in charge of BI and related software projects and spending.

Boomi Suggest

The technical piece that stuck with me the most was the discussion of Boomi Suggest. Boomi is Dell’s integration tool. Within it, there’s a cloud-based tool called Boomi Suggest. If users subscribe to it, the product tracks data linkages, and the de-natured (anonymized) information is kept to help other customers more quickly map data sources and targets.

Mr. Evans said that Boomi Suggest has a database that now contains more than 16 million links. The intelligence on top of that is then able to provide a 92% accuracy rate in analyzing new links. The time savings that alone suggests is a major decision driver that should not be overlooked.
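To make the idea concrete (this is a conceptual sketch, not Boomi’s algorithm; the corpus and field names are hypothetical), a crowd-sourced mapping assistant can rank candidate target fields for a new source field against an anonymized corpus of previous mappings:

```python
from difflib import SequenceMatcher

# Hypothetical, anonymized corpus of prior source -> target field mappings.
PRIOR_MAPPINGS = {
    "cust_nm": "customer_name",
    "customer_name": "customer_name",
    "acct_no": "account_number",
    "acct_num": "account_number",
    "postal_cd": "postal_code",
}

def suggest_target(source_field, min_score=0.6):
    """Return the best-known target field for a new source field, with a confidence score."""
    best_target, best_score = None, 0.0
    for known_source, target in PRIOR_MAPPINGS.items():
        score = SequenceMatcher(None, source_field.lower(), known_source).ratio()
        if score > best_score:
            best_target, best_score = target, score
    return (best_target, best_score) if best_score >= min_score else (None, best_score)

print(suggest_target("cust_name"))   # suggests 'customer_name' with a high score
print(suggest_target("acctnum"))     # suggests 'account_number' with a high score
```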

A Great Case Study: Asthma

While the case study didn’t address enough of the end user issues of timeliness, flexibility and more, it was very interesting from an inclusiveness standpoint. The Dell team focused on asthma case management to show the breadth of data sources, the complexity of analytics and a full process that could be generalized from the healthcare sector, in order to support their full platform message.

Dell asthma case study slide 2015-05-15

As you can see, they are doing a lot of things with a variety of information, but they’re also doing it with a variety of products.

Summary

Dell’s decades of working with IT have helped it look at BI with a more complex eye, one that can address many of IT’s concerns. What we saw was an almost completely IT-oriented solution and message. While BI-focused companies are going to have to move down and address important IT messages, Dell must go in the opposite direction. Unless the team can broaden their message to address the solution to more business teams, Dell’s expansion in the market will be severely limited, because it’s the business groups that write the checks.

The presentation shows a great start. However, the questions are whether Dell can simplify the architecture to make it less complex, potentially by merging a number of their products, and whether or not they can learn about the folks they don’t have a history of directly understanding: the business users. If they can do that, the start will expand and Dell Software can help in the BI market.

Looker at the BBBT: A New Look at SQL Performance

The most recent BBBT presentation was by Looker. Lloyd Tabb, Founder & CTO, and Zach Taylor, Product Marketing Manager, showed up to display yet another young company’s interesting technology.

Looker’s technology is an application server that sits above relational databases to provide faster, more complex queries. They’ve developed their own language, LookML, to help with that. That’s no surprise, as Lloyd is a self-described language guy.
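To show what a modeling layer above the database buys you (a purely illustrative Python sketch, not LookML itself; the table, dimensions and measures are hypothetical), the idea is to declare fields once and generate the SQL from that declaration:

```python
# Illustrative toy "semantic model" that generates SQL, in the spirit of a
# LookML-style layer. Table and field names are hypothetical.
ORDERS_MODEL = {
    "table": "orders",
    "dimensions": {"region": "region",
                   "order_month": "date_trunc('month', created_at)"},
    "measures": {"order_count": "COUNT(*)", "revenue": "SUM(amount)"},
}

def build_query(model, dimensions, measures):
    """Turn a declarative model plus a field selection into a SQL string."""
    select = [f"{model['dimensions'][d]} AS {d}" for d in dimensions]
    select += [f"{model['measures'][m]} AS {m}" for m in measures]
    group_by = ", ".join(str(i + 1) for i in range(len(dimensions)))
    return (f"SELECT {', '.join(select)}\n"
            f"FROM {model['table']}\n"
            f"GROUP BY {group_by}")

print(build_query(ORDERS_MODEL, ["region"], ["order_count", "revenue"]))
# SELECT region AS region, COUNT(*) AS order_count, SUM(amount) AS revenue
# FROM orders
# GROUP BY 1
```

The point of such a layer is that the coder declares the model once and the consumer of the information just picks fields; the generated SQL is not their concern.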

It’s also no surprise that the demos, driven by both Lloyd and Zach, were very coding heavy. Part of the reason that very technical focus exists is, as Mr. Tabb stated, that Looker thinks there are two groups of users: coders who build models and business managers who use the information. There is no room in that model for the business analyst, the person who understands how to communicate a complex business need to the coders and how to help the coders deliver something that is accessible to and understandable by the information consumer.

The bifurcation played out in the demonstration through an almost exclusive focus on code, code and more code, with a brief display of some visualization technology. The former was very good while the latter wasn’t bad but, in keeping with their mainly technology focus, had complex visualizations without good enough legends – visualizations that would be understood by technical people but need to be better explained for the business audience they claim to address.

As an early stage company, that’s ok. The business intelligence (BI) market is still young and very fragmented. You can find different groups in large companies using different BI tools. While Looker talks about 300 customers, as with most companies of their size those customers could just be such small groups. If they’re going to grow past those groups, they need to focus a bit more on how to better bridge technology and business.

They also have a good start in attracting the larger market because they support both cloud and on-premises systems. The former market is growing while the latter isn’t going away. Providing the ability for their server to run in either place will address the needs of companies on either side of the divide.

RDBMS ≠ SQL

One key to their system is that they don’t move data. It stays resident on the source systems. Those could be operational systems, data warehouses, an ODS or whatever. What they must have is SQL. When asked about Hadoop and other schema-on-read systems, the Looker team stated they are an RDBMS-based application but they’ll work on anything with SQL access. I have no problem with the technology, but they need to be very clear about the split.

SQL came from the relational world but, as they pointed out in an aside, it isn’t limited to that. They should drop the RDBMS message and focus on SQL. As Lloyd Tabb said, “SQL is the right abstraction.” What I don’t know, given his focus on technology and the biases that come with it, is whether he understands that it’s the right abstraction not because of some technical advantage but because it’s the major player. McDonald’s isn’t the best burger just because it has the most stores. SQL might not be the best access method, but it’s the one business knows, and so it’s the one the newer database companies and structures can’t ignore.

Last year, the BBBT heard from multiple companies, including Actian and EXASOL, that are focused on providing SQL access to Hadoop. That’s as important as what Looker is doing. The company that manages to do both well will jump ahead of the pack.

Summary

Looker is a good, young company with some technical advantages that can greatly improve the performance of SQL queries to business databases, and it provides a basic BI front end to display the results. I’m not sure they have the resources to focus on both, and I think the former has the clearest advantage in the marketplace. Unless they have more funding and a strong management team that can begin to better understand the business side of the market, they will have problems addressing the visualization side of BI. They need to keep improving their engine, spread it to access more data sources, and partner with visualization companies to provide the front end.

Rocket Software at BBBT: A Tale of Two Products

Last Friday’s BBBT presentation by an ensemble cast from Rocket Software was interesting, in both good and bad meanings of that word. They have some very interesting products that address the business intelligence (BI) industry, but they also have some confusion.

Bob Potter, SVP and GM, Business Intelligence, opened the presentation by pointing out that Rocket has more than $300 million (USD) in annual revenue yet many tech folks have never heard of them. One reason for the combination is they’ve done a good job in balancing both build and buy decisions to provide niche software solutions in a variety of places and on a number of platforms. Another is a strong mainframe focus. The third is that they don’t seem to know how to market. Let’s focus on just the two products presented to demonstrate all of these.

Rocket Data Virtualization

Most of the presentation was focused on Rocket Data Virtualization (DV). There are two issues it addresses. The first is accessing data from multiple sources without the need to first build a data warehouse. DV is the foundation of what was first thought of as the federated or virtual data warehouse. It’s useful. Gregg Willhoit, Managing Director, Research & Development, gave a good overview of DV and then delved into the product.
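As a rough sketch of the federated idea (illustrative Python, not Rocket’s product; the sources, tables and fields are hypothetical), a virtualization layer answers a request by pulling just what it needs from each live source and joining at query time, rather than copying everything into a warehouse first:

```python
import csv
import sqlite3  # stand-in here for any SQL-accessible source

def federated_customer_balances(db_path, csv_path):
    """Join live data from two sources on demand; nothing is copied into a warehouse.
    Paths, tables and fields are hypothetical."""
    # Source 1: a relational system (SQLite standing in for DB2, Oracle, etc.).
    conn = sqlite3.connect(db_path)
    balances = {str(cid): bal for cid, bal in
                conn.execute("SELECT customer_id, balance FROM accounts")}
    conn.close()

    # Source 2: a flat extract from another system.
    with open(csv_path, newline="") as f:
        customers = {row["customer_id"]: row["name"] for row in csv.DictReader(f)}

    # The "virtual" joined view, assembled only when asked for.
    return [{"customer_id": cid, "name": name, "balance": balances.get(cid)}
            for cid, name in customers.items()]
```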

Rocket Data Virtualization is a mainframe-resident product to enhance data virtualization, running on IBM z. While this has the clear market limit of requiring a company large enough to have a mainframe, it’s important to consider this: there are still vast amounts of applications running on mainframes, and it’s not just old-line COBOL. Mainframes run Unix, Linux and other OS partitions to leverage multiple applications.

An important point was brought up when Gregg was asked about access to the product. He said that Rocket is working with other BI industry partners, folks who provide visualization, so that they can access the virtualized data.

However, if you want to know more about the product, good luck. As I’ll discuss in more detail later, if you go to their site you’ll find all marcom fluff. It’s good marcom fluff, but driving deeper requires downloads or contacting sales people. That doesn’t help a complex enterprise sale.

Rocket Discover

The presentation was turned over to Doug Anderson, Solutions Engineer, for a look at their unreleased product Rocket Discover. It’s close, in beta, but it’s not yet out.

As the name implies, Rocket Discover is their version of a visualization tool. It’s a very good, basic tool that will compete well in the market except for two key things. The first is that they claimed Rocket is aiming at “high level executives” and that’s not the market. This is a product for business analysts. Second, while it has the full set of features that modern analysts will want, it’s based on a look and feel that’s at least a decade old.

On the very positive side, they do have a messaging feature built in to help with collaboration. It needs to grow, but this is a brand new product and they have seen where the market is going and are addressing it.

Another positive sign is this isn’t a mainframe product. It runs on servers (unspecified) and they’re starting with both on-premises and cloud options. This is a product that clearly is aimed at a wider market than they historically have addressed.

While they have understood the basics of the technology, the question is whether or not they understand the market. One teaser that shows that they probably don’t was brought up by another analyst who pointed out that Doug and others were often referring to the product as just Discover. Oracle has had a Discover product for many years. While Rocket might not have seen it on the mainframe, there will be some marketing issues if the company doesn’t always refer to the product as Rocket Discover, and they might have problems anyway. Their legal and marketing teams need to investigate quickly – before release.

Enterprise IT v Enterprise Software: Understanding the Difference

The product presentation, and a Q&A session that covered more issues with even more folks from Rocket taking part, showed the problems Rocket will have. As pointed out, the main reason so many people have never heard of Rocket is that it sells very technical solutions to enterprise IT. Those are direct sales to a very technical audience. However, enterprise software is more than enterprise IT.

Enterprise software such as ERP, CRM, SFA and, yes, BI, addresses business issues with technology. That means there will be a complex sales cycle involving people from different organizations, a cycle that’s longer and more involved than a pure sale to IT. I’m not sure that Rocket has yet internalized that knowledge. As mentioned above, their website is very fluffy, as if the thought is that the way you sell is to put up something pretty (though I’d argue against the current fad of multiple bands requiring scrolling; it’s neither pretty nor easy to use) with mission and message only, and then quickly get your techies talking directly to their techies. Perhaps that works when talking with techies only, but not in an enterprise sale.

That’s my biggest gripe about the software industry not understanding the need for product marketing. You must be able to build a bridge to both technical and business users with a mix of collateral and content that span the gap. I’m not seeing that with Rocket.

In addition, consider the two products and the market. DV is very useful and there are multiple companies trying to provide the capability. While Rocket’s knowledge of and access to mainframe data is a clear advantage, the fact that the product only runs on mainframes is a very limiting competitive message. I understand they have hitched their wagon very closely to IBM, and it makes sense to have a z option, but not providing multiple platforms, or a way for non-mainframe customers to use their more general concepts and technologies, will retard growth.

If their plan is to provide what they know first then spread to other platforms, it’s a good strategy; but that wasn’t discussed.

Both products, though, have the same marketing issue. Rocket needs to show that it understands it is changing from selling almost exclusively to enterprise IT and needs to create a more integrated product marketing message to help sell to the enterprise.

There’s also the issue of how to balance the messages for the two products. For Rocket Data Virtualization to succeed, it really does need to work with the key BI vendors. Those companies will wonder about Rocket’s dedication to them while Rocket Discover exists. Providing a close relationship with those vendors will retard Rocket Discover’s growth. Pushing both products will be walking a tightrope and I haven’t seen any messaging that shows they know it.

Summary

Rocket is a company that is very strong on technology that helps enterprise IT. Both Rocket Data Virtualization and Rocket Discover have the basics in place for strong products. The piece missing is an understanding of how to message the wider enterprise market and even the mid- and small-size company markets.

Rocket Data Virtualization is the product that has the most immediate impact, with the clear differentiation of very powerful access to mainframe data, and the product I think should make the more rapid entry into its space. The question is whether or not they can spread platform support past the mainframe faster than other companies will realize the importance of mainframe data. In the short term, however, they have a great message if they can figure out how to push it.

Rocket Discover is a very good start for a visualization tool, but primarily on the technology side. They need to figure out how to jump forward in GUI and into predictive and other analytics to be truly successful going forward, but the market is young and they have time.

The biggest issue is whether Rocket will learn how to market and sell in broader enterprise and SMB sales, both to better address the multiple buyers in the sales cycle and to better communicate how both products interact in a complex marketplace.

Rocket is worth the look; they just need to learn how to provide the look to the full market.

Jinfonet at the BBBT: OEM or Direct, a Decision is Necessary

Let’s cut to the chase, this is another company with a very good product and no idea how to message. Unless they quickly figure out and communicate the right message, they’ll need to get ready for acquisition as an exit strategy.

Jinfonet is a company founded, it seems, to clone Crystal Reports in Java. Hence the awkward name. JReport, their product, is full featured, and we’ll get to that, but the legacy name built around “report” will leave them behind if that remains their focus.

The presentation was primarily by Dean Yao, Director of Marketing, with demo support brought by the able Leo Zhao, Senior Systems Consultant. However, the presentation indicated the message problem.

Reports? What Reports?

The name of the product is JReport, but at no time in the three hours did a report make an appearance. They showed two different analyst charts, from Nucleus Research and EMA, of the business intelligence (BI) industry to show where they were placed. BI. Yet when asked about competition, Dean Yao repeatedly said they didn’t compete against BI vendors but focused on reports.

Their own presentation begs to differ:

JReport solution areas

Notice that reports are a secondary feature of one focus.

What’s also good and bad is that Leo Zhao’s demonstrations showed a very richly featured product that does compete against the other vendors. The only major hole wasn’t in functionality; it was that the rich set of visualizations wasn’t as pretty as most of the competition’s. That is partly because they are self-funded with more limited resources and partly because they’re great techies who haven’t prioritized visualizations as they should.

OEM or Direct?

OEM, in Jinfonet’s business model, doesn’t only mean the product embedded in third-party applications. Mr. Yao discussed how JReport is also regularly embedded in departmental IT applications. That is different from when companies use JReport as a standalone product.

Dean talked about how 30% of their business in recent years was direct, with the rest being OEM. At the same time, he mentioned that last year was around 50/50. That’s not a problem. What is an issue is that they don’t know why that was. Did sales focus on direct? Was one major direct client a large revenue outlier that skewed the results? They don’t seem to know.

That matters because the OEM and direct models are very different. With OEM, you let the other company deal with business messages. All you’re doing is presenting to them a good technical story and cost point compared to simpler products, a tiny segment of the competition, or doing nothing and losing out to their competitors.

Enterprise sales, on the other hand, require a focus on the end user, the folks using the products and the business issues they have. That is what’s missing from the presentation, their web site and the few pieces of collateral I reviewed.

One thing should also be said about the OEM-to-departments model. The cloud is changing the build v buy balance for many departments for the applications in which JReport is embedded, so I’m not sure how much longer this model will provide significant revenue.

Mr. Yao said they don’t do enterprise sales, but just sell to SMBs and enterprise departments, so that means they’re not really competing against other BI vendors. A lot of the analysts on the call quickly jumped on that, pointing out that even one of the largest BI companies openly talks about its strategy of land and expand. “Just land” is not a long term strategy.

What’s that mean?

Right now the enterprise market is very fragmented, so there’s a space for a small company, but that won’t last long. Crystal Reports had a long run based on the technologies of the day, but it no longer is independent. Today, things are changing far more rapidly. The cloud is allowing BI firms to address small to global companies with similar products and the major players (and most smaller ones) are focused on that full business market.

Given the current product, Jinfonet can go one of two ways. They can decide to completely focus on OEM, keep a technical message and just take enterprise sales as they happen.

The other option, the one I openly prefer, is to realize that they have a very good product that does compete in the direct model and to focus their messaging accordingly. They can still sell to OEMs, but that’s easier – it’s a subset of the full featured message.

The solution, though, resides in the folks who weren’t in Boulder: the founders. The company has been self-funded since 1998 and the founders are used to their control. I’ve seen companies fail because owners were unwilling to see that times have changed. They mistakenly think that pivoting markets says they did something wrong in the past, so they’re hesitant. It doesn’t say that; it only says that the people have enough confidence to adapt to a new market with the same energy and intellect with which they addressed the original market.

Jinfonet has great potential, but it will require a strong rethink and clarification of who they are in order to convert that potential into something kinetic. From what I’ve seen of the product and two of its people, I hope they succeed.

AptiMap at BBBT: Improving Data Mapping

Today the BBBT held a special session. While most presentations are by companies with full products and existing sales, companies that typically have been around for a few years, today we had the pleasure of listening to Sherry Brown, President of AptiMap. This is a pure startup, still tiny. She was looking for our always vocal analyst community’s opinions on her initial aim and direction. To the surprise of no one who knows the BBBT, we gave those opinions at full bore.

Ms. Brown’s goal is to provide a far easier way of mapping fields between source and target datasets for creating data warehouses and other data stores. It’s a great start and she has some initial features that will help. I’ll be blunt: I’m intentionally not going to say a lot. As mentioned, they are a very early startup and the software isn’t full-fledged. That means any mention of what they have and don’t have could be inaccurate by next week. That’s not a bad thing; it’s what happens at that phase.

I will mention that the product is cloud based from the start.

The important question about whether or not to contact AptiMap is who you are and what you need. Most of the feedback to Sherry was about that, helping to focus the message. If I have correctly understood the consensus of the attendees, here are the critical things to focus upon while defining a market for the initial product:

  • Aim at IT and business analysts.
  • Start with folks currently using modeling tools or spreadsheets.
  • Focus on standard enterprise data sources, from spreadsheets to RDBMSs; Hadoop can wait.
  • Target mid-sized companies integrating their first sets of systems or trying to get a handle on their existing data.
  • The tool might be especially good in the hands of consultants going into those types of companies.
  • Many of the potential users are tablet users, so focus on that aspect of mobile.

One final key, one that needs to be a full paragraph rather than a bullet and one that many technical startups miss while building their products around user needs, is that users aren’t the only decision makers in the purchase. As mentioned, this is a cloud product and AptiMap will be expecting recurring revenue from monthly or annual fees. The business analyst is often not the person who approves those types of costs. The firm also needs to aim messages at the buyers, whether IT, line or consulting management, messages that help them understand the business benefit of providing the tool to their people.

Understanding your market matters. It will help the firm not only focus product, but also narrow down the marketing message and image to aim at the correct audience.

Too often, founders get a great technical idea, focus on a couple of users to fill out product features and then try to find a market. BI is moving too fast for that; the vision needs to be much more clearly set out much earlier than was needed in software companies twenty years ago.

Finally, I mentioned the cloud model but should also mention AptiMap is offering a 30-day free trial.

Summary

AptiMap has an initial product that can help people more rapidly and accurately create mappings between data sources and targets. It’s cloud based for easy access. It is, however, very early in the product and company life cycle.

I would suggest it primarily to analysts in mid-sized organizations, or to consultants who work with SMBs and want some quick-hit functionality to map data sources for the creation of data warehouses, ODSs and other relationally oriented data repositories.

If you want to experiment inexpensively with an early product that could help, contact them.

MapR at BBBT: Supporting Hadoop and still learning

I’ve probably used this in other columns, but that’s life. MapR’s presentation to the BBBT reminded me of Yogi Berra’s statement that it feels like déjà vu all over again. Wait, if I think I’ve done this before, am I stuck in a déjà vu loop?

The presentation was a tag team effort of Steve Wooledge, VP Product Marketing, and Tomer Shiran, VP Product Management.

The Products and Their Aim

The first part of the déjà vu was good. People love to talk about freeware, but mission-critical solutions won’t be trusted to it. Even before Linux, before Unix, software came out and it took companies packaging it with service and support to provide constancy and trust for widespread IT adoption. MapR is a key company doing that with Apache Hadoop, the primary open source technology for big data applications.

They’ve done the job well, putting together a strong company that, quite reasonably, has attracted some great investors and customers. Of course, because Hadoop is still in its infancy, even a leading company such as MapR only mentions 700 customers, companies paying for licenses; but that’s a statement about big data’s still fairly limited impact in operational systems, not a knock on MapR.

Their vision statement is simple: “Empowering the As-it-happens business by speeding up the data-to-action cycle.” Note the key: Hadoop is batch oriented and all the players realize that real-time analysis matters for some key sales and marketing applications. Companies are now focusing on how fast they can get information out of the databases, not what it takes to get data in. A smart move but only half the equation.

One key part of the move to package open source into something trusted was pointed out by Steve Wooledge. When the company polled customers about why they chose MapR, the largest response was availability, the up time of the system. Better performance wasn’t far behind, but it’s clear that the company understands that availability is a critical business issue and they seem to be addressing it well.

Where the déjà vu hits in a not-so-positive way is the regular refrain of technologists still not quite getting business – even when they try. This isn’t a technology problem but an innovator’s problem. When you get so wrapped up in the cool things you’re doing, you think that you need to lead with the cool things, not necessarily what the market wants.

One example was when they were describing the complexity of the MapR packaging. Almost all the focus was on the cool buzzwords of open source. Almost lost in the mix was the mention that their software supports NFS, which was developed more than 30 years ago and lets systems access files across networks. That MapR helps link both the latest data and the still voluminous data in existing file systems is a key point, something that can help businesses understand that Hadoop can be integrated into existing systems and infrastructure. However, it’s not cool, so the information was buried.

The final thing I’ll mention about the existing products is that MapR has built a nice three product suite, providing open source, mid-tier and full enterprise versions. That’s the perfect way to address the open source conundrum and move folks along the customer curve.

Apache Drill: Has it Bitten Off Too Much?

Sorry, couldn’t help the drill bit reference. Tomer Shiran took the latter part of the presentation to show off Apache’s latest data toy, Apache Drill, intended to bridge the two worlds of data. The problem I saw was one not limited to Tomer, MapR or even Apache, but common to all folks with what they think of as new technology: overhype and an addiction to revolutionary rather than evolutionary words and messages. There were far too many phrases that denigrated IT and existing technology and implied Drill would replace things that therefore weren’t needed. When questioned, Tomer admitted that it’s a complement; but the unthinking words of many folks in the industry set out a pattern inimical to rapid adoption into the Global 1000’s critical information paths.

Backing that up was a reply given to one questioner: “The CIO of one of the largest tech companies said they can’t keep doing things the same way.” Tech companies tend to be bleeding edge by nature; they do not represent the fuller business world. More importantly, a CIO saying she needs to change doesn’t mean the CIO is planning to throw out existing tools that work. It means she wants to expand and extend in a way that leverages all technology to provide better decision making capabilities to the rest of the CxO suite.

Another area of his talk finally brought forward, through a very robust discussion, one terminology issue that many are having. Big data folks like to talk about “no schema,” but that’s not really true. Even when they modify the statement to “schema on read,” it misses the point.

They seem to be confusing fixed-layout relational records with the concept of a schema. XML is a schema for data exchange. It’s very flexible and can be self-defined, but it’s a schema. As it came from SGML, it’s not even the first iteration of flexible schemas. The example Mr. Shiran gave was just like an XML schema. Both data source and data recipient have to know some basic information, such as field names, in order to make sense of data, so there’s a schema.
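A tiny example of why “schema on read” is still a schema (illustrative Python; the field names are hypothetical): even with self-describing JSON, producer and consumer still have to agree on field names and meanings, and that agreement is the schema.

```python
import json

# "Schema-less" records: the layout is flexible, but both sides still agree
# on field names like "customer_id" and "amount" -- that agreement is a schema.
records = [
    '{"customer_id": "C-17", "amount": 42.50, "channel": "web"}',
    '{"customer_id": "C-09", "amount": 12.00}',   # optional field omitted
]

total_by_customer = {}
for line in records:
    rec = json.loads(line)            # the schema is applied "on read"
    cid = rec["customer_id"]          # breaks if the producer renames this field
    total_by_customer[cid] = total_by_customer.get(cid, 0) + rec["amount"]

print(total_by_customer)              # {'C-17': 42.5, 'C-09': 12.0}
```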

Flexible schemas not only aren’t new, they don’t obviate the need for schemas. They’re just another technique for managing the wide variety of data that business wishes to turn into information. The longer big data folks keep misusing the term and acting as if they have something revolutionary, the longer they’ll retard their needed incursion into IT and business information.

Summary

Hadoop and big data aren’t going anywhere except forward. The question is at what speed. There are some great things happening in both the Apache open source world and MapR’s licensed support for that world, but the lack of understanding of existing IT and business is retarding adoption of the new and exciting technologies.

When statements such as “But the sales guy won’t do X” are used by folks who have never been in and don’t understand sales, they’re missing the market. Today’s sales person is looking for faster and more accurate information, and is using many tools people would have said the same thing about only ten years earlier. In the meantime, sales management and the CxO suite who provide guidance for the sales force are even more interested in big picture information coming from massaging large data sources.

The folks in the new arenas such as Hadoop need to realize that they are complementary to existing technologies and that can help both IT and business. When pointing that out, I was asked by one of the presenters if that meant he should do two case studies, one with Hadoop, flexible schema and one with old line uses, I gave a clear no. It should be one with new and one that shows new and existing data sources combining to give management a more holistic picture than previously possible.

Evolution is good. MapR can help. They need to do the tough part of technology and move their view from what they think is cool to what the market thinks is needed.