Rocket Software at BBBT: A Tale of Two Products

Last Friday’s BBBT presentation by an ensemble cast from Rocket Software was interesting, in both the good and bad senses of that word. They have some very interesting products that address the business intelligence (BI) industry, but they also have some messaging confusion.

Bob Potter, SVP and GM, Business Intelligence, opened the presentation by pointing out that Rocket has more than $300 million (USD) in annual revenue, yet many tech folks have never heard of them. One reason for that combination is that they’ve done a good job of balancing build and buy decisions to provide niche software solutions in a variety of places and on a number of platforms. Another is a strong mainframe focus. The third is that they don’t seem to know how to market. Let’s look at the two products presented to demonstrate all three.

Rocket Data Virtualization

Most of the presentation was focused on Rocket Data Virtualization (DV). The product addresses two issues. The first is accessing data from multiple sources without the need to first build a data warehouse; DV is the foundation of what was first thought of as the federated or virtual data warehouse. It’s useful. Gregg Willhoit, Managing Director, Research & Development, gave a good overview of DV and then delved into the product.

The second issue is mainframe data itself. Rocket Data Virtualization is a mainframe-resident product that enhances data virtualization, running on IBM z. While this has the clear market limit of requiring a company large enough to have a mainframe, it’s worth considering the scale involved. Vast numbers of applications still run on mainframes, and it’s not just old-line COBOL. Mainframes run Unix, Linux and other OS partitions to leverage multiple applications.

An important point came up when Gregg was asked about access to the product. He said that Rocket is working with other BI industry partners, folks who provide visualization, so that those tools can access the virtualized data.

However, if you want to know more about the product, good luck. As I’ll discuss in more detail later, if you go to their site you’ll find all marcom fluff. It’s good marcom fluff, but digging deeper requires downloads or contacting sales people. That doesn’t help a complex enterprise sale.

Rocket Discover

The presentation was then turned over to Doug Anderson, Solutions Engineer, for a look at their unreleased product, Rocket Discover. It’s close – in beta – but not yet out.

As the name implies, Rocket Discover is their version of a visualization tool. It’s a very good, basic tool that will compete well in the market except for two key things. The first is that they claim Rocket is aiming it at “high level executives,” and that’s not the market; this is a product for business analysts. Second, while it has the full set of features that modern analysts will want, it’s based on a look and feel that’s at least a decade old.

On the very positive side, they do have a messaging feature built in to help with collaboration. It needs to grow, but this is a brand new product and they have seen where the market is going and are addressing it.

Another positive sign is that this isn’t a mainframe product. It runs on servers (unspecified), and they’re starting with both on-premises and cloud options. This is a product that is clearly aimed at a wider market than they have historically addressed.

While they have understood the basics of the technology, the question is whether or not they understand the market. One teaser suggesting they probably don’t was brought up by another analyst, who pointed out that Doug and others were often referring to the product as just Discover. Oracle has had its Discoverer product for many years. While Rocket might not have run across it on the mainframe, there will be some marketing issues if the company doesn’t always refer to the product as Rocket Discover, and they might have problems anyway. Their legal and marketing teams need to investigate quickly – before release.

Enterprise IT v Enterprise Software: Understanding the Difference

The product presentations, and a Q&A session that covered more issues with even more folks from Rocket taking part, showed the problems Rocket will have. As pointed out, the main reason so many people have never heard of Rocket is that it sells very technical solutions to enterprise IT. Those are direct sales to a very technical audience. However, enterprise software is more than enterprise IT.

Enterprise software such as ERP, CRM, SFA and, yes, BI, addresses business issues with technology. That means there will be a complex sales cycle involving people from different organizations, a cycle that’s longer and more involved than a pure sale to IT. I’m not sure that Rocket has yet internalized that knowledge. As mentioned above, their website is very fluffy, as if the theory is that you put up something pretty (though I’d argue the current fad of multiple bands requiring scrolling is neither pretty nor easy to use) carrying only mission and message, then quickly get your techies talking directly to their techies, and that’s how you sell. Perhaps that works when talking with techies only, but not in an enterprise sale.

That’s my biggest gripe about the software industry not understanding the need for product marketing. You must be able to build a bridge to both technical and business users, with a mix of collateral and content that spans the gap. I’m not seeing that with Rocket.

In addition, consider the two products and the market. DV is very useful, and multiple companies are trying to provide the capability. While Rocket’s knowledge of and access to mainframe data is a clear advantage, the fact that the product only runs on mainframes is a very limiting competitive message. I understand they have hitched their wagon very closely to IBM, and it makes sense to have a z option, but not providing multiple platforms, or a way for non-mainframe customers to use their more general concepts and technologies, will retard growth.

If their plan is to provide what they know first then spread to other platforms, it’s a good strategy; but that wasn’t discussed.

Both products, though, have the same marketing issue. Rocket needs to show that it understands it is changing from selling almost exclusively to enterprise IT and needs to create a more integrated product marketing message to help sell to the enterprise.

There’s also the issue of how to balance the messages for the two products. For Rocket Data Virtualization to succeed, it really does need to work with the key BI vendors. Those companies will wonder about Rocket’s dedication to them while Rocket Discover exists. Conversely, a close relationship with those vendors could retard Rocket Discover’s growth. Pushing both products will be walking a tightrope, and I haven’t seen any messaging that shows they know it.

Summary

Rocket is a company that is very strong on technology that helps enterprise IT. Both Rocket Data Virtualization and Rocket Discover have the basics in place to be strong products. The missing piece is an understanding of how to message the wider enterprise market, and even the mid-sized and small-company markets.

Rocket Data Virtualization is the product with the most immediate impact, given the clear differentiation of very powerful access to mainframe data, and it’s the product I think should make the more rapid entrance into its space. The question is whether or not they can spread platform support past the mainframe faster than other companies will realize the importance of mainframe data. In the short term, however, they have a great message if they can figure out how to push it.

Rocket Discover is a very good start for a visualization tool, but primarily on the technology side. They need to figure out how to jump forward in GUI and into predictive and other analytics to be truly successful going forward, but the market is young and they have time.

The biggest issue is whether Rocket will learn how to market and sell in broader enterprise and SMB sales, both to better address the multiple buyers in the sales cycle and to better communicate how both products interact in a complex marketplace.

Rocket is worth the look; they just need to learn how to provide the look to the full market.

TDWI Best Practices Report on Hadoop: A Good Report for IT, Not Executives

The latest TDWI Best Practices Report is concerned with Hadoop. Philip Russom is the author, and the report is worth a read. However, it has the usual issue I’ve seen with many TDWI reports: very strong on numbers but missing the real business point. In journalism, there’s an expression called burying the lede – hiding the most important part of a story down in the middle. Mr. Russom gets his analysis correct, but I think the priorities, or the focus, need work. It’s a great report for IT to use as a source; it’s not a report for executives.

Why am I cranky? The report starts with an Executive Summary. The problem is that it isn’t aimed at executives but is something that lets technical folks think they’re doing well. It doesn’t tell executives why they should care. What are the business benefits? What are the risks? Those things are missing.

First, let’s deal with the humorous marketing number. The report mentions the supposedly astounding figure that “Hadoop clusters in production are up 60% in two years.” That’s part of the executive summary. You have to slide down into the body to learn that only 16% of respondents said they have HDFS in production – a 60% rise from a roughly 10% base only gets you to 16%. It’s easy for early adopters to grow a small percentage into a slightly larger small percentage; it’s much tougher to claim a larger slice of the pie.

Philip Russom accurately deals with why it will take a while for Hadoop to grow larger, but he does so past the halfway point of the report. Two things: security and SQL.

Executives are concerned with whether technology helps the business. Security ensures that intellectual property remains within the firm. It also minimizes litigation by preventing breaches that could fall outside regulatory and contractual requirements. Mr. Russom accurately discusses the security risks with Hadoop, but that discussion begins down on page 18 and doesn’t bubble up into the executive summary.

The same holds for SQL. After writing about the problems in staffing Hadoop, the author gives a brief but accurate mention of the need to link Hadoop into the rest of a business’s information infrastructure. It is happening, as a sidebar comment points out: “Hadoop is progressively integrated into complex multi-platform environments.” However, that progress needs to speed up for executives to see the analytics from Hadoop data integrated into the big picture the CxO suite demands.

The report gives IT a great picture of where Hadoop is right now. As expected from a technical organization, it weighs the need, influence and future of the mystical data scientist too highly, but the generalities are there to help mid-level management understand where Hadoop is today.

However, I’ve seen multiple generations of technology come in, and Hadoop is still at an early-adopter phase, where too many proponents are too technical to understand what executives need. Executives need to understand risks and rewards, not a technical snapshot; and the latter is what the report provides.

IT should read this report as valuable insight into what the market is doing. It’s obviously my personal bias, but the summary is just that – a summary. It’s not for executives. It’s something each IT manager can mine for its good resources to build their own messages to their executives.

Jinfonet at the BBBT: OEM or Direct, a Decision Is Necessary

Let’s cut to the chase: this is another company with a very good product and no idea how to message. Unless they quickly figure out and communicate the right message, they’ll need to get ready for acquisition as an exit strategy.

Jinfonet is a company founded, it seems, to clone Crystal Reports in Java. Hence the awkward name. JReport, their product, is full featured, and we’ll get to that, but the legacy “report” name will leave them behind if reporting remains their focus.

The presentation was primarily by Dean Yao, Director of Marketing, with demo support from the able Leo Zhao, Senior Systems Consultant. However, the presentation itself indicated the message problem.

Reports? What Reports?

The name of the product is JReport, but at no time in the three hours did a report make an appearance. They showed two analyst charts of the business intelligence (BI) industry, from Nucleus Research and EMA, to show where they were placed. BI. Yet when asked about competition, Dean Yao repeatedly said they didn’t compete against BI vendors but focused on reports.

Their own presentation begs to differ:

[Slide: JReport solution areas]

Notice that reports are a secondary feature of one focus.

What’s also both good and bad is that Leo Zhao’s demonstrations showed a very richly featured product that does compete against the other vendors. The only major hole wasn’t in functionality; it’s that the rich set of visualizations isn’t as pretty as most of the competition’s. That is partly because they are self-funded, with more limited resources, and partly because they’re great techies who haven’t prioritized visualizations as they should.

OEM or Direct?

OEM, in Jinfonet’s business model, doesn’t only mean the product being embedded in third-party applications. Mr. Yao discussed how JReport is also regularly embedded in departmental IT applications. That is different from companies using JReport as a standalone product.

Dean talked about how 30% of their business in recent years was direct, with the rest being OEM. At the same time, he mentioned that last year was around 50/50. That’s not a problem. What is an issue is that they don’t know why it changed. Did sales focus on direct? Was one major direct client a large revenue outlier that skewed the results? They don’t seem to know.

That matters because the OEM and direct models are very different. With OEM, you let the other company deal with business messages. All you’re doing is presenting a good technical story and price point compared to simpler products, a tiny segment of the competition, or doing nothing and losing out to their competitors.

Enterprise sales, on the other hand, require a focus on the end user, the folks using the products and the business issues they have. That is what’s missing from the presentation, their web site and the few pieces of collateral I reviewed.

One thing should also be said about the OEM-to-departments model. The cloud is changing the build v buy balance for many departments for the applications in which JReport is embedded, so I’m not sure how much longer this model will provide significant revenue.

Mr. Yao said they don’t do enterprise sales, but just sell to SMBs and enterprise departments, which, he says, means they’re not really competing against other BI vendors. A lot of the analysts on the call quickly jumped on that, pointing out that even one of the largest companies openly talks about its strategy of land and expand. “Just land” is not a long-term strategy.

What’s that mean?

Right now the enterprise market is very fragmented, so there’s space for a small company, but that won’t last long. Crystal Reports had a long run based on the technologies of the day, but it is no longer independent. Today, things are changing far more rapidly. The cloud is allowing BI firms to address small to global companies with similar products, and the major players (and most smaller ones) are focused on that full business market.

Given the current product, Jinfonet can go one of two ways. They can decide to focus completely on OEM, keep a technical message and just take direct enterprise sales as they happen.

The other option, the one I openly prefer, is that they realize they have a very good product that does compete in the direct model, and focus their messaging on it. They can still provide to OEMs, but that’s easier – it’s a subset of the full-featured message.

The solution, though, resides in the folks who weren’t in Boulder: the founders. The company has been self-funded since 1998, and the founders are used to their control. I’ve seen companies fail because owners were unwilling to see that times have changed. They mistakenly think that pivoting markets says they did something wrong in the past, so they’re hesitant. It doesn’t say that; it says only that the people have enough confidence to adapt to a new market with the same energy and intellect with which they addressed the original market.

Jinfonet has great potential, but converting that potential to kinetic will require a strong rethink and clarification of who they are. From what I’ve seen of the product and two of their people, I hope they succeed.

AptiMap at BBBT: Improving Data Mapping

Today the BBBT held a special session. While most presentations are by companies with full products, existing sales, and typically a few years of history, today we had the pleasure of listening to Sherry Brown, President of AptiMap. This is a pure startup, still tiny. She was looking for our always vocal analyst community’s opinions on her initial aim and direction. To the surprise of no one who knows the BBBT, we gave them at full bore.

Ms. Brown’s goal is to provide a far easier way of mapping fields between source and target datasets when creating data warehouses and other data stores. It’s a great start, and she has some initial features that will help. I’ll be blunt: I’m intentionally not going to say a lot. As mentioned, they are a very early startup and the software isn’t full-fledged. That means any mention of what they have and don’t have could be inaccurate by next week. That’s not a bad thing; it’s what happens at that phase.

I will mention that the product is cloud-based from the start.

The important question about whether or not to contact AptiMap is who you are and what you need. Most of the feedback to Sherry was about that; it was aimed at helping to focus the message. If I have correctly understood the consensus of the attendees, here are the critical things to focus upon while defining a market for the initial product:

  • Aim at IT and business analysts
  • Target folks currently using modeling tools or spreadsheets as a start
  • Focus on standard enterprise data sources, from spreadsheets to RDBMSs; Hadoop can wait
  • Target mid-sized companies integrating their first sets of systems or trying to get a handle on their existing data
  • The tool might be especially good in the hands of consultants going into those types of companies
  • Many of the potential users are tablet users, so focus on that aspect of mobile

One final key – one that needs a full paragraph rather than a bullet, and one that many technical startups building their products around user needs don’t get – is that users aren’t the only decision makers in the purchase. As mentioned, this is a cloud product, and AptiMap will be expecting recurring revenue from monthly or annual fees. The business analyst is often not the person who approves those types of costs. The firm also needs to focus messages on the buyers, whether IT, line or consulting management, to help them understand the business benefit of providing the tool to their people.

Understanding your market matters. It will help the firm not only focus product, but also narrow down the marketing message and image to aim at the correct audience.

Too often, founders get a great technical idea, focus on a couple of users to fill out product features, and only then try to find a market. BI is moving too fast for that; the vision needs to be set out much more clearly, and much earlier, than was needed in software companies twenty years ago.

Finally, I mentioned the cloud model but should also mention AptiMap is offering a 30-day free trial.

Summary

AptiMap has an initial product that can help people more rapidly and accurately create mappings between data sources and targets. It’s cloud-based for easy access. It is, however, very early in the product and company life cycle.

I would suggest it primarily to analysts in mid-sized organizations, or to consultants who work with SMBs and want a quick-hit functionality addition for mapping data sources for the creation of data warehouses, ODSs and other relationally oriented data repositories.

If you want to experiment inexpensively with an early product that could help, contact them.

Trifacta at the BBBT: Better Access and Understanding of Raw Hadoop Data

Trifacta is another business intelligence company to enter the horse race (yes, I know that reference is spelled differently…). They are focused on providing an early look at data coming out of Hadoop, creating some initial form and intelligence for business use.

Last Friday, Trifacta was the presenting company at the BBBT. Their representatives were Adam Wilson, CEO; Michael Hiskey, Interim Marketing Lead; and Wei Zheng, VP Products. The presenters were there to discuss the company’s position in data wrangling. While some folks had problems with the term, as Michael Hiskey pointed out, it’s a term they didn’t invent. Me? I think it makes more sense than another phrase our industry uses, data lake; but that’s another topic.

Simply put, Trifacta is working to more easily provide a view into Hadoop data, using intelligence to understand and suggest field breaks, layouts and formats, and to help users clean and refine the data so it becomes usable information for analysis.

Michael and the others talk about self-service data preparation, implying end-user, business-side involvement. The problem is that the messaging is far ahead of the product. They, like lots of other companies, try too hard to imply an ease of use that isn’t there. Their users are analysts, IT or business. The product is important and useful, but it’s important to be clear about to whom it is useful. (Read more about self-service issues.)

The Demo

While Mr. Hiskey and Mr. Wilson gave the introduction, the meat was in Ms. Zheng’s presentation. As a guy who has spent years in product marketing, I have a bit of a love-hate relationship with product management. I’ve had some great ones and some very poor ones – and I’m sure the views of me also span that spectrum. I’ll openly say that Wei Zheng is the most impressive example of a VP of Products I’ve heard in a long time. She not only knows the products, she was very clear about understanding the market and working to bridge that to development. How could any product marketer not be impressed? Her demo was a great mix of product and discussion of both current usage and future strategy.

One of the keys to the product, Wei Zheng pointed out, is that the work Trifacta is doing does not include moving the data. It doesn’t update the data; it works by managing metadata that describes both the data and the transformations. Yes, I said the word. Transformations. Think of Trifacta as simplified ETL for Hadoop, but with a focus on the E & T.
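
To make the metadata idea concrete, here’s a minimal Python sketch of the general transformations-as-metadata pattern. This is my own illustration, not Trifacta’s design; the Recipe class and the field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Recipe:
    """An ordered list of named transformations, stored apart from the data."""
    steps: list[tuple[str, Callable[[dict], dict]]] = field(default_factory=list)

    def add(self, name: str, fn: Callable[[dict], dict]) -> None:
        self.steps.append((name, fn))

    def apply(self, record: dict) -> dict:
        out = dict(record)  # work on a copy; the source record is never touched
        for _, fn in self.steps:
            out = fn(out)
        return out

source = [{"name": " Acme Corp. ", "sales": "1200"}]  # raw records stay as-is

recipe = Recipe()
recipe.add("trim name", lambda r: {**r, "name": r["name"].strip()})
recipe.add("sales to int", lambda r: {**r, "sales": int(r["sales"])})

# "Export" applies the recipe; the original data is only read, never updated.
cleaned = [recipe.apply(r) for r in source]
print(cleaned)  # [{'name': 'Acme Corp.', 'sales': 1200}]
print(source)   # [{'name': ' Acme Corp. ', 'sales': '1200'}] -- unchanged
```

The design point is that the recipe, not the data, is the asset: it can be reviewed, versioned and re-run against fresh source data.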

The Trifacta platform reads the Hadoop data, sampling from the full source, and uses analysis to suggest field breaks. Wei used a CSV file for her demo, so I can’t speak to what mileage you’ll experience with Hadoop data, but the logic seems clear. As someone who, fifteen years ago, worked for a company that was analyzing row data without delimiters to find fields, I know it’s possible to get close through automation. If you’re interested, you should definitely talk with them and have them show you their platform working with your data.
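
For a flavor of what automated structure inference looks like, here’s a tiny sketch using Python’s standard csv.Sniffer, which guesses a delimiter from a sample of raw text. This is only an analogy for the kind of analysis described, not Trifacta’s actual algorithm, and the sample data is made up.

```python
import csv
import io

# Hypothetical raw extract with an unknown delimiter.
raw_sample = "id|name|region\n1|Acme Corp|East\n2|Blue Sky LLC|West\n"

sniffer = csv.Sniffer()
dialect = sniffer.sniff(raw_sample)  # infer the field separator from a sample
print("Inferred delimiter:", repr(dialect.delimiter))
print("Looks like a header row:", sniffer.has_header(raw_sample))

# Once the structure is guessed, the raw text parses into fields.
for row in csv.reader(io.StringIO(raw_sample), dialect):
    print(row)
```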

The product then displays a lot of detail about the overall data and the fields. It’s very useful information but, again, it’s going to be far more useful to a data analyst than to a business user.

Trifacta also has some basic data cleansing functions, such as grouping slightly different variations of the same customer company name and changing them to something consistent. Remember, this is done in the metadata; the original data remains the same. When you review the data, the cleaned values are shown, but the original remains until you formally export to a clean data file.
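
Here’s a rough sketch of that kind of value standardization, using fuzzy matching from Python’s standard difflib. This is an assumed approach for illustration, with made-up names, not Trifacta’s actual cleansing logic; note that the result lives in a separate mapping, echoing the metadata approach above.

```python
import difflib

canonical = ["Acme Corporation", "Blue Sky LLC"]
observed = ["Acme Corp.", "ACME Corporation", "Blue Sky, LLC", "Niagara Inc."]

def standardize(value: str, choices: list[str], cutoff: float = 0.6) -> str:
    """Map a raw value to its closest canonical form, if one is close enough."""
    lowered = [c.lower() for c in choices]
    match = difflib.get_close_matches(value.lower(), lowered, n=1, cutoff=cutoff)
    if match:
        return choices[lowered.index(match[0])]
    return value  # no confident match; leave the original value alone

# The cleansing lives in a mapping (metadata); the observed values stay intact.
mapping = {raw: standardize(raw, canonical) for raw in observed}
print(mapping)
```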

Finally, as the demonstration clearly shows, they aren’t trying to become a BI visualization firm. They are focused on understanding, organizing and cleaning the data before analysis can be done. They partner with visualization vendors for the end-user analytics.

Summary

Trifacta has a nifty little product for better understanding, cleaning and providing Hadoop data. Analysts should love it. The problem is that, despite what their presentation implies, Hadoop does not equal big data. They have nothing that helps link Hadoop into the wider enterprise data market. They are a very useful tool for Hadoop, but unless they quickly move past that, they’ll be overtaken by the other vendors already looking at how to make sense of the full enterprise data world. They seem to have a great start in a product and, from my limited exposure to three people, a very good team. If you need help leveraging your Hadoop data, talk with them.

Tableau at the BBBT: Strengthening the Business in Business Intelligence

Tableau was back at the BBBT last week. Last year’s presentation was a look ahead at v8.2. The latest visit was a look back at 2014 and a focus on v9.0. Francois Ajenstat, VP Product Management, was back again to lead us through product issues. The latest marketing presenter was Adriana Gil Miner, VP Corporate Communications.

Tableau Revenues

Ms. Gil Miner opened the morning with the look back at last year. The key point was the strength of their growth. They are not only pleased with the year-over-year growth, but the chart also showed last year’s revenue as a slice of revenue over Tableau’s lifetime. We’ll leave it simply as: they had a good year.

Another point in describing their size is that Adriana said they have 26,000 customer accounts. Some confusion with a number in a later presentation required clarification: this isn’t users, or even sites. We were told that the 26k is the number of paying company accounts. There were no numbers showing median account size or how far out the outliers sit at either extreme, but that’s a nice number for the BI space.

The final key point made by Adriana Gil Miner was localization. Modern companies almost all create products using Unicode or other methods that allow for language localization, but Tableau has made the strong push to provide localized software and data sets in multiple languages. My apologies for not listing them; there’s some weird glitch with my Adobe Reader that crashes only on their presentation while no other analyst has the same problem, so I can’t provide a list. Please refer to your local Tableau rep for details.

Francois Ajenstat then took over. It was no surprise that his focus was on v9.0. He discussed it by focusing on nine points he views as key:

  • Access to more data sources
  • Answer more questions
  • Improve the user experience
  • Support analytics at scale
  • Performance as a differentiator
  • Support for mobile
  • Tableau Public redesign
  • Coming out next quarter

If you look at those, you might question why there are that many bullets. For instance, when it comes out is just a schedule issue and doesn’t rise to the level of the others. Tableau Public’s redesign just seems the obvious result of focusing on better user experience and performance.

However, a couple sound the same but should be differentiated. Analytics at scale and performance improvements overlap, but they aren’t identical. Francois showed both what they did to improve performance on clustered servers, helping both bigger data sources and more simultaneous users, and also demonstrated that they’ve done some great optimization in basic analytics for individuals.

One of the best parts was the honesty, now that they’re close enough to releasing v9.0, in admitting that early versions ran slowly. They showed quotes from beta testers talking about major performance improvements. In addition, Tableau Public is a great source for testing real-world analytics. Mr. Ajenstat pointed out that they took the 100 most accessed visualizations in Tableau Public and analyzed performance differences, seeing a 4x increase in performance on average. While it’s always important to run internal tests that stress potential use, focusing on how businesses really use the tool is even more important in ensuring performance is seen as good in day-to-day usage by knowledge workers, not only in heavy loads by analysts doing discovery.

LOD Expressions

The one thing that really caught my eye about v9.0 is the incorporation of Level of Detail (LOD) expressions. BI firms have been adding drill-down analytics for a decade. Seeing a specific level of detail and then dropping down to a lower level is critical. However, that’s not enough.

What’s needed is to be able to visually compare the lower level details with overall numbers. For instance, a sales VP regularly wants to know not just how an individual sales person is doing, but also how that compares to the region and national numbers. Only within context can you gain insight.

Among the other things LODs enable is the ability to bin aggregates. Again we can turn to sales: think about retail sales across categories while also comparing those to total sales, or using them in a trend analysis.
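
In Tableau’s syntax, v9.0 expresses this with expressions along the lines of {FIXED [Region] : SUM([Sales])}, which computes an aggregate at a level of detail fixed independently of the view. To show the underlying idea in neutral terms, here’s a small pandas analogy with made-up data; it’s my illustration of the concept, not Tableau’s implementation:

```python
import pandas as pd

# Made-up sales data: one row per salesperson.
df = pd.DataFrame({
    "rep":    ["Ann", "Bob", "Cho", "Dev"],
    "region": ["East", "East", "West", "West"],
    "sales":  [120, 80, 200, 150],
})

# transform() broadcasts a region-level aggregate back onto each row, so a
# row-level number and a coarser-grained number sit side by side for comparison.
df["region_sales"] = df.groupby("region")["sales"].transform("sum")
df["pct_of_region"] = df["sales"] / df["region_sales"]

# Binning on an aggregate: bucket regions by their total sales.
region_totals = df.groupby("region")["sales"].sum()
bins = pd.cut(region_totals, bins=[0, 250, 500], labels=["small", "large"])
print(df)
print(bins)
```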

While many companies are working to add more complex analysis, it’s clear that Tableau hasn’t only looked at how a very technical person can create an LOD. They’ve worked on an interface that, from the demo, is simple and clean enough for business end users to use. Admittedly, that’s what demos are supposed to show, but I’ve seen some try and fail miserably. This seems to be a good attempt to understand business intelligence with an emphasis on the first word.

Summary

Some of the very new startups make the mistake of thinking even the first-generation BI companies are too old to innovate. Those companies aren’t, and they are still a threat. However, Tableau is not even first generation, and it is nimbler still. They have their eyes on the ball and are moving forward. Even more importantly, while still focusing on their technology, as many startups do, they seem to have become mature enough to start shifting focus from IT and business analysts to the information consumers.

Understanding what knowledge workers throughout the business hierarchy need, in data and performance, is what will drive the next growth spurt. Tableau seems to have that in its sights.

IBM and the Cloud? Don’t write it off

At today’s investor meeting, IBM execs announced a target of $40 billion in revenue for cloud, analytics, mobile, social and security software by 2018. I expect to see folks talk about dinosaurs not being able to turn fast enough and predict failure to meet that goal. I don’t know if they can do it, but to make such ardent predictions you’d have to ignore history.

Mid-sized Unix servers came along and folks talked about IBM going away.

IBM blew its chance to own the PC industry, and the same predictions followed them.

Linux? Freeware was going to destroy the mainframe. Oops, Linux partitions run on mainframes.

Now we’re seeing the large growth of the cloud. Much of it has been on commodity boxes. However, as data gets larger, analytics become more powerful and networks more robust, there’s clearly space for a company with such a strong history in hardware, services and adapting to change.

After all, too many people still think of IBM as a hardware company. While it’s too early for the 2014 report, you can check the 2013 Annual Report, page 7. Look at what a tiny percentage of the bar is hardware. Software and services fairly evenly split the vast majority of the revenue stream.

It’s a strong goal and will take a lot of pushing. How many politely phrased “re-orgs” will happen to lay off staff? Who knows? Will they succeed? No clue. All I expect is that they’ll continue to grow and nobody should count them out.