Category Archives: Marketing

TDWI and IBM on Predictive Analytics: A Tale of Two Foci

Usually I’m more impressed with the TDWI half of a sponsored webinar than with the corporate presentation. Today, that wasn’t the case. The subject was supposed to be predictive analytics, but the usually clear and focused Fern Halper, TDWI Research Director for Advanced Analytics, wasn’t at her best.

Let’s start with her definition of predictive analytics: “A statistical or data mining solution consisting of algorithms and techniques that can be used on both structured and unstructured data to determine outcomes.” Data mining uses statistical analysis, so I’m not quite sure why that needs to be mentioned. However, the bigger problem is at the other end of the definition. Predictive analytics can’t determine outcomes, but it can suggest likely outcomes. The word “determine” is much too forceful to honestly describe prediction.

Disappointingly, given her usual focus, Ms. Halper’s presentation was primarily off topic. It dealt with the basics of current business intelligence. There was useful information, such as her reference to Dave Stodder’s numbers showing that only 31% of surveyed folks say their businesses have BI accessible to more than half their employees. The industry is growing, but slowly.

Then, when first turning to predictive analytics, Fern showed the results of a survey question about who would be building predictive analytics. As she also mentioned that it was a survey of people already doing it, there’s no surprise that business analysts and statisticians, the people doing it now, were the folks they felt would continue to do it. However, as BI vendors include better analytics and other UI tools, it’s clear that predictive analytics will slowly move into the hands of the business knowledge worker, just as other types of reporting have.

The key point of interest in her section of the presentation was the same one I’ve been hearing from more and more vendors in recent months: the final admission that, yes, there are two different categories of folks using BI. There are the technical folks creating the links to sources, complex algorithms, reports and such, and there are the consumers, the business people who might build simple reports and tweak others but whose primary goal is to be able to make better business decisions.

This is where we turn to David Clement, Product Marketing Manager, BI & Predictive Analytics, IBM, the second presenter.

One of the first things out of the gate was that IBM doesn’t talk about predictive analytics but about forward-looking business intelligence. While the first thought might be that we really don’t need yet another term, another way to build a new acronym, the phrase has some interesting meaning. It’s no surprise that in a new industry, where most companies are run by techies focused on technology, the analytics are the focus. However, why do analytics? This isn’t new. Companies don’t look at historic data for purely nostalgic reasons. Managers have always tried to make predictions based on history in order to improve future performance. IBM’s turn of phrase puts the emphasis on the forward look, not on how that look is aided.

The middle of his presentation was the typical dog and pony show with canned videos to show SPSS and IBM Cognos working together to provide forecasting. As with most demos, I didn’t really care.

What was interesting was the case study they discussed, apparel designer Elie Tahari. It’s a case study that should be examined by any retail company looking at predictive analytics, as a 30% reduction in logistics costs is an eye-catcher. What wasn’t clear is whether that amount came from a starting point of zero BI or from adding predictive analytics on top of existing information.

What is clear is that IBM, a dinosaur in the eyes of most people in Silicon Valley and Boston, understands that businesses want BI and predictive analytics not because it’s cool or complex or anything else they often discuss – it’s to solve real business problems. That’s the message and IBM gets it. Folks tend to forget just how many years dinosaurs roamed the earth. While the younger BI companies are moving faster in technology, getting the ears of business people and building a solution that’s useful to them matters.

Summary

Fern Halper did a nice review of the basics about BI, but I think the TDWI view of predictive analytics is too much industry groupthink. It’s still aligned with technology as the focus, not the needs of business. IBM is pushing a message that matters to business, showing that it’s the business results that drive technology.

Businesses have been doing predictive analysis for a long time, as long as there’s been business. The advent of predictive analytics is just a continuation of the march of software to increase access to business information and improve the ability of business management to make timely and accurate decisions in the marketplace. The sooner the BI industry realizes this and starts focusing less on just how cool data scientists are and more on how cool it is for business to improve performance, the faster adoption of the technology will pick up.

Splunk at BBBT: Messages Need to Evolve Too

Our presenters last Friday at the BBBT were Brett Sheppard and Manish Jiandani from Splunk. The company was founded on understanding machine data, and the presentation was full of that phrase and focus. However, machine data has a specific meaning, and that’s not all Splunk does today. They speak about operational intelligence, but that message needs to bubble up and take over.

Splunk has been public since 2012 and has over 1,200 employees, something not many people realize. The company was founded in 2004 to address the growing amount of machine data, and the main goal the presenters showed is to “Make machine data accessible, usable and valuable to everyone.”

However, their presentation focused on Splunk’s ability to access IVR (Interactive Voice Response) and Twitter transcripts, and that’s not machine data. When questioned, they pointed out that they don’t do semantic analysis but focus on the timestamp and other machine-generated data to understand operational flow. Still, while you might stretch and call that machine data, they also demonstrated some very simple analytics on the occurrence of keywords in text, and that’s not it.

It’s clear that Splunk has successfully moved past pure machine data into a more robust operational intelligence solution. However, being techies from the Bay Area, it seems they still have their focus on the technology and its origins. They’re now pulling information from sources other than just machines, but are primarily analyzing the context of that information. As Suzanne Hoffman (@revenuemaven), another BBBT member analyst, pointed out during the presentation, they’re focused on the metadata associated with operational data and how to use that metadata to better understand operational processes.

Their demo was typical: nothing great, but all the pieces were there. The visualizations are simple and clear, and they claim the data is accessible to BI vendors for better analytics. However, note that they have a proprietary database and provide access through ODBC and an API. Mileage may vary.

There was also a confusing message in the claim that they’re not optimized for structured data. Machine data is structured. While it often doesn’t have clear field boundaries, there’s a clear structure, and simple parsing lets you know what the fields and data are in the stream. What they really mean is that it’s not optimal for RDBMS data. They suggest that you integrate Splunk and relational data downstream via a BI tool. That makes sense, but again, they need to clarify and expose that information in a better way.
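To illustrate the point about structure, here’s a minimal sketch of my own (the log line is invented, not from the demo): machine data arrives without column boundaries, yet a few lines of parsing recover well-defined fields.

```python
# A hypothetical machine-generated log line in a common key=value style.
line = '2014-05-16T10:40:02 action=purchase status=200 latency_ms=183 host=web-7'

# No column boundaries, but the structure is trivially recoverable:
timestamp, _, rest = line.partition(' ')
fields = dict(pair.split('=', 1) for pair in rest.split())

print(timestamp)                          # 2014-05-16T10:40:02
print(fields['status'])                   # 200
print(int(fields['latency_ms']) < 500)    # True
```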

And then there’s the messaging nit. While business is my main focus, technology presented with the incorrect words jars an educated audience. Splunk is not the first company, nor will it, sadly, be the last, to have people who are confused about the difference between “premise” and “premises.” However, usually it’s only one person in a presentation. The slides and both presenters showed a corporate confusion that leads me to the premise that they’re not aware of how to properly present the difference between Cloud and on-premises solutions.

Hunk: On the Hadoop Bandwagon

Another messaging issue was the repeated mention of Hunk without an explanation. Only later in the presentation did they focus on it. Hunk is their product that puts the Splunk Enterprise technology on Hadoop. Let me be clear: it’s not just accessing Hadoop information for analysis but moving the storage from their proprietary system to Hadoop.

This is a smart move that helps address those customers who are heavily invested in Hadoop, and, at least at the presentation level, they have a strong message about having the same functionality as in their core product, just residing on a different technology.

Note that this is not just helping the customer; it helps Splunk scale their own database in order to reach a wider range of customers. It’s a smart business move.

Security, Call Centers and Changing the Focus

The focus of their business message, and of a large group of customer slides, is, no surprise, on network security and call center performance. The ability to look at large amounts of data and provide analysis of security anomalies means that Splunk is in the Gartner Magic Quadrant for SIEM (Security Information and Event Management).

In addition, IVR was mentioned earlier. That, combined with other call center data, allows Splunk to provide information that helps companies better understand and improve call center effectiveness. It’s a nice bridge from pure machine data to more full-featured data analysis.

That difference was shown by what I thought was the most enlightening customer slide, one about Tesco. For my primarily US readers, Tesco is a major grocery chain, with divisions focused on everything from the corner market to supermarkets. They are headquartered in England, are the major player in Europe, and are the second-largest retailer by profit after Walmart.

As described, Tesco began using Splunk to analyze network and website performance, focused on the purely machine data concerns for performance. As they saw the benefit of the product to more areas, they expanded to customer revenue, online shopping cart data and other higher level business functions for analysis and improvement.

Summary

Splunk is a robust and growing company focused on providing operational intelligence. Unfortunately, their messaging is lagging their business. They still focus on machine data as the core message because that was their technical and business focus in the last decade. I have no doubts that they’ll keep growing, but a better clarification of their strategy, priorities and messages will help a wider market more quickly understand their benefits.

Datawatch at BBBT: Another contender and another question of message

Yesterday’s presentation to the BBBT was by Datawatch personnel Ben Plummer, CMO, and Jon Pilkington, VP Products. As they readily admit, they’re a company with a long history about which most people in the industry have never heard. They were founded in the 1980s and went public in the 1990s. Their focus is data visualization, but much of their business has been reseller and OEM agreements with companies including SAP, IBM and Tibco.

The core of their past success was the basic presentation of flat-file information through their Monarch product. It was only with the acquisition and initial integration of Panopticon in 2013, providing access to far more unstructured data, that they rebranded around data visualization and began to push strongly into the BI space.

The demo was very standard. Everyone wants to show their design interface and how easy it is to build dashboards, and their demonstration was in the middle of the pack. The issue I had was the messaging. It’s no surprise that everyone claiming to be a visualization company needs to show visualization, but if you’re not one of the very flashy companies, your message about building your visualization should be different.

Datawatch’s strengths seem to be three-fold:

  • Access to a very wide variety of data sources.
  • Access to data in motion.
  • Full service from data access to presentation.

While Ben’s presentation talked about the importance of the Internet of Things and noted that real-time data is transactional, Jon’s presentation didn’t support those points. Datawatch is another company working to integrate structured and non-structured data, and they seem to have a good focus on real-time; those need to be messages throughout their marketing, and that means in the demo as well.

Back from that tangent to the mainline. The third point is a major key. Major ETL and data warehouse vendors aren’t going away, but for basic BI, it adds cost and time to have to look at both an ETL tool and a data visualization tool, which may not work together as well as the demoware indicates (a surprise, I know…). The companies who can provide the full data supply chain from source to visualization can much more quickly and affordably add value for the business managers wanting better BI. I know it’s a fine line to message that while still working with vendors who overlap somewhat, but that’s why the word “coopetition” was coined.

They seem to have a good vision, but they haven’t worked to create a consistent and differentiated message. That could be because of resources, and hopefully that will change. In February of this year, Datawatch issued a common stock offering that netted them more cash. Hopefully some of that will be spent on creating strong and consistent marketing. That also includes such simple things as changing press releases to be visible from the PR link as HTML, not just PDFs.

Summary

I know you’re getting tired of hearing the following refrain, but here it is again: I’ve heard this message before. The market is getting crowded with companies trying to support modern BI that blends structured and unstructured data. Technologists love to tweak products and think that minor, or even major, technical issues that aren’t visibly relevant to the market should sell the product all by themselves. Just throw some key market points on top of them and claim you have no competitors because your technology is so cool.

BI and big data are cool right now, and there are a large number of firms attempting to fill a need. Datawatch seems to have the foundations for a good, integrated platform from heterogeneous data access to visual presentation of actionable information. That message needs to quickly become stronger and clearer. This is a race. Being in shape isn’t enough; you have to have the right strategy and tactics to win. Datawatch has a chance. Will they stumble or end up on the podium?

Cloudera at the BBBT: The limits of Open Source as a business model

Way back, in the dawn of time, there were AT&T and BSD, with multiple flavors of each base type of Unix. A few years later, there were only Sun, IBM and HP. In a later era, there was this thing called Linux. Lots of folks took the core version, but then there were only Red Hat and a few others.

What lessons can the Hadoop market learn from that? Mission-critical software does not run on freeware. While open source lowers infrastructure costs and can, in some ways, speed feature enhancements, companies are willing to pay for knowledge, stability and support. Vendors able to wrap the core of open source in services that provide the rest make money and speed the adoption of open-source-based solutions. Mission-critical applications run on services agreements.

It’s important to understand that distinction when discussing such interesting companies as Cloudera, whose team presented at last Friday’s BBBT session. The company recently received a well-publicized, enormous investment based on the promise that it can create a revenue stream for a database service based on Hadoop.

The team had a good presentation, with Alan Saldich, VP Marketing, pointing out that large, distributed processing databases are providing a change from “bringing data to compute” to “bringing compute to data.” He further defined the Enterprise Data Hub (EDH) as the data repository that is created in such an environment.

Plenty of others can blog in detail about what we heard about the technology, so I’ll give it only a high-level glance. The Cloudera presenters were very open about their product being an early generation, and they laid out a vision that seemed good. They understand that their advantages are the benefits of the Cloud and Hadoop (discussed a little more below) but that the Open Source community is lagging in areas such as access to and control of data. Providing those key needs to IT is what will help adoption and provide a revenue stream, and knowing that is a good sign.

I want to spend more time addressing the business and marketing models. Cloudera does seem to be struggling to figure out how to make money, hence the need for such a large investment from Intel. Additional proof is the internal confusion of Alan saying they don’t report revenues and then showing us only bookings, while Charles Zedlewski, VP Products, had a slide claiming they’re leading their market in revenue. Really? Then show us.

They do have one advantage: the Cloud model lends itself to a pricing model based on nodes and, as Charles pointed out, that’s a “business model that’s inherently deflationary” for the customer. Nodes get more powerful, so customers regularly get more bang for the buck.
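A quick bit of illustrative arithmetic (my numbers, not Cloudera’s) shows why per-node pricing deflates in the customer’s favor when node capacity keeps climbing:

```python
# Hypothetical figures: a flat per-node subscription while node capacity
# grows with each hardware generation. The effective price per unit of
# compute falls even though the sticker price never changes.
price_per_node = 4000.0   # assumed flat annual subscription (USD)
capacity = 100.0          # relative compute capacity of today's node

for year in (0, 2, 4, 6):
    gen_capacity = capacity * (1.4 ** (year / 2))  # assume ~40% more power every 2 years
    print(f"year {year}: ${price_per_node / gen_capacity:.2f} per unit of compute")
```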

On the other side, I don’t know that management understands that they’re just providing a new technology, not a new data philosophy. While some parts of the presentation made clear that Cloudera doesn’t replace other data repositories except for the operational data store, other parts implied it would subsume them without giving a clear picture of how.

A very good point was the partnerships they’re making with BI vendors to help speed integration and access of their solution into the BI ecosystem.

One other confusion is that neither Cloudera nor the market as a whole is clearly differentiating that the benefits of Hadoop come from multiple technologies: both the software that helps better manage unstructured data and the simple hardware/OS combination that comes from massively parallel processing, whether the servers are in the Cloud or inside a corporate firewall. Much of what was said about Hadoop had to do with the second issue, and so the presenters rightfully got pushback from analysts who saw that RDBMS technologies can benefit from those same things, minimizing it as a differentiator.

Charles did cover an important area of both market need and Cloudera vision: Operational analytics. The ability to quickly massage and understand massive amounts of operational information to better understand processes is something that will be enhanced by the vendor’s ability to manage large datasets. The fact that they understand the importance of those analytics is a good sign for corporate vision and planning.

Open source is important, but it’s often overblown by those new to the industry or within the Open Source community. Enterprise IT knows better, as it has proved in the past. Cloudera is at the right place at the right time, with a great early product and an understanding of many of the issues that need addressing in the short term. The questions are only about the ability to execute on both the messaging and programming sides. Will their products meet the long-term needs of business-critical applications, and will they be able to explain clearly how they can do so? If they can answer correctly, the company will join the names mentioned at the start.

Qlik at BBBT. QlikView Expressor: Come for the ETL, stay for the Data Governance

Last Friday’s BBBT presentation was by Qlik. The primary purpose was to discuss QlikView Expressor, but that was just a foundation for what really caught my eye: a great advance in data governance.

Qlik bought Expressor Software last June, and the presentation was the chance to show the re-branded products to the analysts in the group. Expressor brings baby ETL to Qlik. The presenters, Donald Farmer and Bill Kehoe, were very honest and clear that the product is intended for those who start with basic self-service BI and find they need to reach multiple sources as they begin to expand their use. I’ll be clear: this is a baby product. Their public case study was, according to the slides, using ODBC and Salesforce.com’s very open API. How they can, and even whether they should, handle access to more complex and proprietary systems remains a big question.

As Informatica and other major players in the ETL space have strong partnerships with Qlik, it’s a careful game Qlik has to play. On one side, they have to provide some basic ETL functionality to a key portion of their market; on the other, they have to avoid alienating the big players. Often, products acquired into such a middle ground either fail from the lack of a clear solution or cause problems with partnerships, but only time will tell how Qlik handles this. For the time being, I don’t see this product being a threat to their partners.

The presenters waffled on the early message about ETL being a way to govern access to data, and why became very clear as the presentation entered its second section. QlikView Expressor is being used as a component driving Qlik’s new QlikView Data Governance Dashboard. The company has done an amazing job of blending ETL, their existing and well-known BI presentation software, and a smart overview of the full architecture to take a very good step forward in helping companies understand where their data is being used.

As Donald Farmer pointed out, only half humorously, “Microsoft Office for the iPad has killed data governance.” KPIs defined in multiple departments, different reports on different computers and the growth of laptops made data governance difficult in the previous decades. The boom in tablet use has expanded that challenge exponentially. Having the leading business productivity suite now available on the leading tablets means company reports, spreadsheets and more are spread even further through the business ecosystem. Data governance becomes vastly more difficult to achieve.

The Data Governance Dashboard is a first step in helping IT and business users understand what information is out there, where it’s residing and how much of it is used how often.

This isn’t a blog about the features, but one must be mentioned because it is critical. Knowing which data fields are being accessed by which BI reports is important by itself, and Qlik and others have been looking at that. The extension that matters is the reports that begin to link the report labels users see to the internal fields. Think about the ability to see that two divisions both use the label “Gross Profit” and then understand that they’re using different fields, definitions and underlying data to create the displayed numbers.
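To make that concrete, here’s a minimal sketch of the idea in my own code (not Qlik’s implementation, and the bindings are invented): map the labels users see to the underlying definitions, then flag any label that resolves differently in different divisions.

```python
from collections import defaultdict

# Hypothetical (division, displayed label, underlying definition) bindings
# harvested from reports across the company.
report_bindings = [
    ("Sales",   "Gross Profit", "revenue - cost_of_goods"),
    ("Finance", "Gross Profit", "revenue - cost_of_goods - freight"),
    ("Sales",   "Units Sold",   "sum(order_quantity)"),
]

# Collect every distinct definition seen behind each label.
definitions = defaultdict(set)
for division, label, field in report_bindings:
    definitions[label].add(field)

# Any label with more than one definition is a governance problem:
# the same words on two dashboards mean two different numbers.
for label, defs in definitions.items():
    if len(defs) > 1:
        print(f"Governance flag: '{label}' has {len(defs)} definitions: {sorted(defs)}")
```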

Self-service BI is reaching the point where desktop computers were in the early 1990s. The 1980s saw business users run away from what they saw as a controlling IT infrastructure. It helped and it created confusion. IT and business had to find new ways to work together, reintegrate and help both sides help the company. The Governance Dashboard is something that can help lead the BI industry into that same step to help both sides provide improved and consistent information to business decision makers. Well done.

Salesforce1 and “The Internet of Customers”

This is the second part of a series on the Salesforce1 road show held in Philadelphia on March 6, 2014.

One key point repeated throughout the road show event was that behind every device in The Internet of Things is a customer. Good point. The question is whether Salesforce is beginning to truly address customers and not just prospects.

In the early 1990s, I worked at Aurum Software, one of the companies that created the CRM market. In those days, all the companies, Scopus, Vantive and a few others, were trying to address the full customer interface. We had a single database and built early SFA, CRM and customer support applications on top of it. Unfortunately, neither the hardware nor the software allowed for both quick development and the number of users needed across all the branches. In addition, since the areas were all pretty new, most prospects wanted to start with one of the three applications. That caused the three branches to split up, and they’ve been separate for most of the last twenty years.

Salesforce is one of the companies working to bring it all together again. As one of the oldest and most successful of the SaaS companies, they’ve been focused on the power of the cloud and how to expand. In the road show, they did a good job of showing SFA, CRM and customer support applications working in concert to benefit companies wanting to understand the true picture of their interactions with their customers.

The company has acquired ExactTarget and Pardot to provide the confusing pair of automated marketing and marketing automation. ExactTarget provides list management, scheduling and automated distribution of emails (automated marketing). Pardot helps marketing manage campaigns and leads through those campaigns (marketing automation). I was very impressed by both the Pardot software and team. The software works well on its own and has tight integration with Salesforce SFA, though there’s still work to do in order to show good closed-loop reporting.

On the support side, Salesforce advertises their Service Cloud, supposedly powered by Desk.com. I’m not as impressed by this as I am by the CRM, but the strategy is in place. In addition, having been designed for the internet, Salesforce has a good partner ecosystem, with a number of products and SIs at the road show to help add service to the other customer-facing systems.

The technology is finally in place to build integrated systems that can give management a view of all their customer interactions, and Salesforce has the strategy in place to achieve that. However, as they’ve been focused on acquiring the pieces and building the integration, the one piece still missing is true business intelligence, and that will be the focus of my next post.

SFA & CRM for BI: Sales and Marketing redux

A few weeks back, I blogged about the necessity for sales and marketing to work in symbiosis rather than a power struggle. This post is about what that means for SFA and CRM.

In the early 1990s, I worked at a startup named Aurum Software. It was one of the first companies to provide CRM solutions. On a humorous tangent, I’m happy CRM won. Our founder came from the ERP space, and he had us call the solution Customer Resource Planning no matter how many times we told him CRP just wasn’t as polite an acronym as the competition’s CRM.

One of the keys to the early success of Aurum and its competitors was that the core databases and software were set up to manage all customer-facing applications. That meant we had sales, marketing and help desk front ends using the same database, so management could see a full picture of customer interaction.

Unfortunately, it was at a time when both software and hardware were much slower and CRM, as it still is, was very data intensive. To get people going, most ISVs concentrated on one of the three aspects and the split between CRM and SFA was born.

The result, over the past twenty years, was that the two systems rarely talked well with each other. Sales personnel would track their prospects through the funnel, and marketing would track prospect and customer touch points in communications, but the two were rarely linked, and poorly linked when they were.

I worked with multiple companies who couldn’t combine sales and marketing information with any degree of business intelligence. One would just import the SFA’s first contact, and that would always be the sole “lead source” tracked. Another did the opposite: the first campaign that ever received a ping from someone at a company was always considered to be the lead source for campaign analysis. It didn’t matter at either firm that the contact was a couple of years old and a recent contact (sales or marketing) had restarted the process.
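A small sketch of my own (the touchpoints and the six-month cutoff are invented for illustration) shows why anchoring on a single oldest touch misleads when a much later contact actually restarted the deal:

```python
from datetime import date, timedelta

# Hypothetical touchpoint history for one account. Both firms described
# above effectively reported the oldest touch as the "lead source",
# regardless of how stale it was.
touches = [
    (date(2012, 3, 1),  "trade show booth"),     # the ancient first touch
    (date(2014, 1, 20), "webinar campaign"),     # the touch that restarted the deal
    (date(2014, 2, 11), "sales follow-up call"),
]
touches.sort()

# What both firms reported: the oldest contact, relevant or not.
print("Reported lead source:", touches[0][1])

# A recency-aware alternative: credit the first touch of the *current*
# cycle, defined here (arbitrarily) as any touch after a 6-month gap.
gap = timedelta(days=180)
cycle_start = touches[0]
for prev, cur in zip(touches, touches[1:]):
    if cur[0] - prev[0] > gap:
        cycle_start = cur
print("Current-cycle lead source:", cycle_start[1])
```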

One company tried to close the loop but didn’t know how. Marketing tracked campaigns in their CRM system but didn’t in their SFA system. A lead would be passed to sales; then that information would come back without the campaign information, and the loop couldn’t be closed.

What’s needed is for both sales and marketing to realize that BI requires a tighter link. Fortunately, the power of systems has advanced to the point that better integration is starting to happen. In a bit of foreshadowing, I’ll say that I’m excited to be going to Salesforce.com’s Salesforce1 World Tour tomorrow. Salesforce burst onto the scene years ago with a simple SaaS SFA platform, but it was clear they always intended more. Now, with the growth of their own CRM offerings, it’s starting to come together. I’m interested in seeing more, asking more, then blogging more.


Software marketing: Understanding business and technology

Welcome to my web site. This first post is to introduce myself a bit more than a short About page can do. As that page mentioned, I’m David Teich and have over thirty years of experience in enterprise software with a focus on marketing. What makes me different from most others who have the same years of work? Breadth. I am a true generalist, able to bring a broad range of high technology experience to bear on your issues. I see the forest, not the trees.

I began my career as a computer operator, then moved through programming, systems analysis, consulting and sales engineering, and into marketing by the mid-1990s. I can work with people in all areas of a business to understand their skills and needs, then synthesize a solution that incorporates an understanding of your business, technologies and markets to create messages that attract the varied stakeholders in a complex modern sales cycle.

It’s not just the depth and breadth of my work experience that serve as a foundation for supporting your needs. I earned my undergraduate degree from Texas A&M University in the second year of their business computing degree program. I went to Stanford to earn my MS in computer science because I was interested in expert systems to solve business problems. My MBA from Pepperdine University is in marketing.

Consulting? That experience goes back a long way. Teich Communications is the name my parents used for their public relations firm, where I was first exposed to proofreading, printers, consulting and more. I’ve kept that name because marketing, whether strategic or tactical, is all about communications. I consulted a few times in Silicon Valley and spent six years consulting in Israel.

Today’s software environment is global. In addition to having worked with international markets at US companies, I have lived and worked abroad. That overseas experience strengthens my ability to understand the needs of different markets and global corporations.

I understand both how enterprises make software decisions and how enterprise software works. I can help small to mid-sized software firms better focus marketing to address solutions. Whether you need help understanding strategic aims and positioning or you need basic collateral that attracts specific stakeholders, I can help you improve your communications.