In my work with TIRIAS Research, I’m covering machine learning. As part of that, I am publishing articles on Forbes. One thing I’ve started this month, with two articles, is a thread on management AI. The purpose is to take specific parts of AI and machine learning that are often described very technically, and present them so that management can understand what they are and, more importantly, why they provide value to decision making.
New article links now going to Published Articles page
As I post my latest TechTarget article, on Tableau’s results showing the subscription model is winning, I should point out that I’m no longer linking articles via blog entries but, rather, adding them to my Published Articles page. I never blogged much, so that makes sense.
When I have thoughts that I don’t think are formal enough to publish, I’ll still use the blog page; for published work, refer to the articles page.
Webinar Review: AI, The Key To Creating A Next-Gen Banking Experience
VentureBeat hosted a webinar whose title missed the mark, but it is still worth a watch for those interested in how technology is changing the banking industry. Artificial intelligence (AI) was only discussed a few times, but the overarching discussion of the relationship between the young financial technology (fintech) companies and the existing banking infrastructure was of great value.
The speakers were:
- Katy Gibson, VP of Application Products, Envestnet | Yodlee
- Dion F. Lisle, VP Head of FinTech, Capgemini America Inc.
- John Vars, Chief Product Officer, Varo Money
- Keith Armstrong, Co-founder and Chief Operating Officer, abe.ai
- Stewart Rogers, Director of Marketing Technology, VentureBeat
The webinar was sponsored by Yodlee.
The opening question was about the relationship between fintech and banking organizations. The general response was that the current maturity of fintech means most companies are focusing on one or two specific products or services, while banks are the broad-spectrum organizations that will leverage those offerings to provide solutions to customers. Katy Gibson did point out that while Yodlee focuses on B2B, other fintech companies are trying to go B2C, and we’ll have to see how that works out. Dion Lisle said he sees the industry maturing over the next 18-24 months, after which he expects mergers and acquisitions to start consolidating the two types of businesses.
One of the few AI questions, on how it will be incorporated, brought a clear response from Ms. Gibson. As other companies have begun to realize while operationalizing machine learning and other AI applications, clean data is just as important as it has always been. She pointed out that banking information comes from multiple sources and is noisy. Organizations will have to spend a lot of time and planning to ensure their systems can be fed usable information that provides accurate insight.
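To make the data problem concrete, here’s a minimal sketch of the kind of cleanup she was describing. The feed, fields and values are all invented for illustration:

```python
import pandas as pd

# Hypothetical transaction feed; the columns and values are made up
# purely to illustrate the cleanup Ms. Gibson described.
raw = pd.DataFrame({
    "txn_id": [101, 102, 102, 103],                # note the duplicate record
    "amount": ["12.50", "7.00", "7.00", "n/a"],    # amounts arrive as strings
    "source": ["bank", "card", "card", "bank"],    # multiple upstream systems
})

# Coerce types ("n/a" becomes NaN), then drop unusable and duplicate rows
# before anything is fed to a learning system.
raw["amount"] = pd.to_numeric(raw["amount"], errors="coerce")
clean = raw.dropna(subset=["amount"]).drop_duplicates(subset="txn_id")
print(clean)
```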
There was an interesting AI-adjacent question, one where I’m not sure I agree with the panelists. Imagine a consumer at home, querying Alexa, Siri, or another AI voice system with a financial question, such as whether their personal finances support buying a specific item. If the answer that comes back is wrong, who will the consumer blame?
The panelist consensus seems to be that they will blame the financial institution. I’m not so sure. Most people are direct. They blame the person (or voice system) in front of them. That’s one reason why customer support call centers have high turnover. The manufacturing system might be to blame for a product failure, but it’s the person on the other end of the line who receives the anger. The home AI companies will need to work with all the service providers, not just in fintech, to ensure not only that legal agreements specify responsibility, but also that the voice response reflects those agreements.
The final item I’ll discuss was a key AI issue. The example discussed was a hypothetical where training figured out that blue-eyed people default on home loans more often. What are the legal ramifications of such analysis? I think it was Dion (apologies if it was someone else) who pointed out the key statistical maxim that correlation does not mean causality. It’s one thing to recognize a relationship; it’s another to assume one thing causes another.
Katy Gibson went further into the AI side and pointed out that fintech requires supervised learning when training machine systems. It’s not just the pure correlation/causality issues that matter: legal requirements specify anti-discrimination measures. That means unsupervised learning isn’t just finding false links; it could be finding illegal ones. Supervised learning means data sets including valid and invalid results must be used to ensure the system is trained for the real world.
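To sketch the difference she was getting at: a supervised model is trained against labeled outcomes using only features a regulator would accept, rather than left to hunt for its own links. The data and attribute names below are entirely made up; “eye_color” stands in for any protected or spurious attribute:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Invented loan data with labeled outcomes (valid and invalid results).
loans = pd.DataFrame({
    "income":     [40, 85, 30, 120, 55, 95],
    "debt_ratio": [0.6, 0.2, 0.7, 0.1, 0.5, 0.3],
    "eye_color":  ["blue", "brown", "blue", "green", "brown", "blue"],
    "defaulted":  [1, 0, 1, 0, 1, 0],
})

# Supervised training: known outcomes, with the protected attribute
# deliberately excluded so the system can't learn an illegal link.
features = loans[["income", "debt_ratio"]]
model = LogisticRegression().fit(features, loans["defaulted"])
print(model.predict(pd.DataFrame({"income": [70], "debt_ratio": [0.4]})))
```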
There were more topics discussed, including an important one about who owns privacy, but they weren’t related to AI.
It was an interesting webinar, with my usual complaint about large panels: there were too many people for the short time. All of these folks were interesting, but smaller groups and a more tightly focused discussion would have better served the audience.
TDWI Webinar Review: IoT’s Impact on Data Warehousing: Defining IoT in Terms of Its Data Requirements
Two TDWI webinars in one week? Both sponsored by SAP? Today’s was on IoT’s impact on data warehousing, and I was curious how an organization that began with a data warehousing focus would cover it. It ended up being a very basic introduction to IoT for data warehousing folks. That’s not bad. In fact, it’s good. While I often want deeper dives than presenters give, there’s certainly a place for helping people focused on one arena, in this case data warehousing, get an idea of how another area, IoT, could begin to impact their world.
The problem I had was with how Philip Russom, Senior Research Director for Data Management, TDWI, did that. I felt he missed some key points. The good news is that, unlike Tuesday’s machine learning webinar, SAP’s Rob Waywell, Director of Hana Project Management, did a better job of bringing in case studies and keeping the discussion focused on the TDWI audience.
Quick soap box: Too many companies don’t understand product marketing, so they underutilize their product marketers (full disclosure: I was one). I strongly feel that companies leveraging product marketing rather than product management in presentations are better able to address business concerns instead of focusing on the products. Now, back to our regular programming…
One of the most interesting takeaways from the webinar was a poll on what level of involvement the audience has with IoT. Fifty percent of the respondents said they’re not collecting IoT data and have no plans to do so. Enterprise data warehouses (EDWs) are focused on high-level, aggregated data. While the EDW community has been moving to blend in more real-time data, it tends to be other departments that are early into the IoT world. I’m not surprised by the results, nor am I worried. The expansion of IoT will bring it into overlap with EDWs soon enough, and I’d suggest that half of the audience is aware things will be changing and has the foresight to begin paying attention.
IoT Basics for EDW Professionals
Mr. Russom’s basic presentation was good, and folks who have only heard about IoT would do well to listen to it. However, they should be aware of a few issues.
Philip said that “the tendency is to push analytics out to the devices.” That’s not wholly true, and the reason is critical. A massive amount of data is being generated by what are called “edge devices”: the cars, refrigerators, manufacturing robots and other devices that stream information to the core servers. IoT data is expected to far exceed the web and social media data often referred to as big data. Using the internet efficiently therefore requires edge analytics that aggregate some information to minimize traffic flow.
Take, for instance, product data. As Rob Waywell mentioned, many devices create lots of standard data that poses no problems; the system really only cares about exceptions. Therefore, an edge device might use analytics to aggregate statistics about the standard occurrences while immediately passing exceptions on to be handled in real time.
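A rough sketch of that edge pattern, with an invented sensor feed and threshold: routine readings get folded into a local aggregate while exceptions are forwarded immediately:

```python
THRESHOLD = 100.0  # hypothetical alarm level for this sensor

def process_readings(readings):
    """Aggregate routine data locally; pass exceptions on in real time."""
    summary = {"count": 0, "total": 0.0, "peak": 0.0}
    exceptions = []
    for value in readings:
        if value > THRESHOLD:
            exceptions.append(value)   # forwarded to core systems immediately
        else:
            summary["count"] += 1      # folded into the periodic summary
            summary["total"] += value
            summary["peak"] = max(summary["peak"], value)
    return summary, exceptions         # summary ships on a slow cadence

summary, alerts = process_readings([42.0, 57.5, 140.2, 63.1])
print(summary, alerts)  # three readings aggregated, one exception forwarded
```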
There is also the information needed for routing. Servers in the core systems need to understand the data and its importance. The EDW is part of a full data infrastructure. The ODS (or data lake, as folks now call it) can be the direct target of most data, while exceptions could be immediately routed to other systems. Whether it’s the EDW, ODS, or another system, most of the analysis will continue in core systems, but edge analytics are needed.
SAP Case Studies
Rob Waywell, as mentioned above, made the most important point of the presentation when he noted that IoT traffic is primarily about the exceptions. He had a couple of quick case studies, and his first was great because it showed IoT without being about cars, the most overused example. The problem is that he didn’t tie it well into the message of EDWs.
The case was about industrial worker safety in the area of gas detection and response. He showed the different types of devices that could be involved, mentioned the multiple types of alert, and described different response paths.
He then mentioned, with what I felt wasn’t enough emphasis (refer to my soap box paragraph above), the real power that a company such as SAP brings to the dance that many smaller companies can’t. In an almost throwaway comment, Mr. Waywell mentioned that SAP Hana, after managing the hazardous materials release incident, can then communicate with other SAP systems to create the official regulatory reports.
Think about that. While it doesn’t directly impact the EDW, that’s a core part of integrated business systems. It’s a perfect example of how the world of IoT is going to do more than manage the basics of devices; it will also be used to handle the full process for which MIS is designed.
Classifications of IoT
I’ll finish up with a focus that came up in a question during Q&A. Philip Russom had mentioned an initial classification of IoT into industrial and consumer applications. That misses a whole lot of areas, including supply chain, logistics, R&D feedback, service monitoring and more. Lumping all of those under “manufacturing” does them a disservice; that term should be limited to the actual manufacturing process.
Rob Waywell then went in a different direction. He seemed to imply that the purpose of IoT is solely to handle event-driven, real-time actions. Coming from a product manager for Hana, that’s either an understandable mistake or he didn’t clearly present his view.
There is a difference between IoT data to be operationalized and that to be analyzed. He might have just been focusing on the operational aspects, those that need to create immediate actions, without minimizing the analytical portion, but it wasn’t clear.
Summary
This webinar is good for those in data warehousing and core MIS functions who want a quick introduction to what IoT is and what might be coming down the pike that could impact their work. For anyone who already has a good idea of what’s coming and wants more specifics, it isn’t needed.
TDWI Webinar Review: Putting Machine Learning to Work in Your Enterprise
It’s been a while since I watched a webinar, but since business intelligence (BI) and artificial intelligence (AI) are overlapping areas of interest, I watched Tuesday’s TDWI webinar on machine learning (ML). As the definition of machine learning expands out of pure AI because of BI’s advanced analytics, it’s interesting to see where people are going with the subject.
The host was Fern Halper, VP Research, at TDWI. The guests were:
- Mike Gualtieri, VP, Forrester,
- Ashok Swaminathan, Senior Director, Product Management, SAP,
- Chandran Saravana, Senior Director, Advanced Analytics, SAP.
Ms. Halper began with a short presentation including her definition of ML as “Enabling computers to learn patterns with minimal human intervention.” It’s a bit different from the last time I reviewed one of her webinars, but that’s OK, because my definition is also evolving. I’ve decided to use my own definition: “Technology that can investigate data in an environment of uncertainty and make decisions or provide predictions that inform the actions of the machine or people in that environment.” Note that I’ve moved away from my past, purist view of machine learning as machines adjusting their own algorithms. I’ve done so because we have to adapt to the market. As BI analytics have advanced to provide great insight in data discovery, predictive analytics and more, many areas of BI and the purist area of learning have overlapped. Learning patterns can happen through pure statistical analysis and through self-adaptive algorithms in AI-based machines.
The most interesting part of Fern Halper’s segment was a great chart showing the results of a survey asking about the importance of different business drivers behind ML initiatives. What makes the chart interesting is that it splits results between groups investigating ML and those actively using it.
What her research shows is that while the highest segments for the active categories are customer related, once companies have seen the benefits of ML, the advantages of it for almost all the other areas jump significantly over views held during the investigation phase.
A panel discussion followed, with Ms. Halper asking what sounded like pre-discussed questions (especially considering the included and relevant slides) of the three panelists. The statements by the two SAP folks weren’t bad; they were just very standard and lacked any strong differentiators. SAP is clearly building an architecture to leverage ML in their environment, but there were no case studies, and I felt the integration between the SAP pieces didn’t bubble up to the business level.
The reason to listen to this segment is Mr. Gualtieri. He was very clear and focused on his message. While I quibble with some of the things he said about data scientists, that soap box isn’t for here. He gave a nice overview of the evolving state of ML for enterprise. The most important part of that might have been missed by folks, so I’ll bring it up here.
Yes, TensorFlow, R, Python and other tools provide frameworks for machine learning implementations, but they’re still at a very technical level; they aren’t for business analysts and management. He mentioned that the next generation of tools is starting to arrive, tools that, just like the advent of BI, will allow people with less technical experience to more quickly build models and gain insights from machine learning.
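To illustrate his point, here’s what “using the frameworks directly” looks like for even a toy problem. It assumes comfort with code, arrays and train/test methodology, which is exactly why these tools serve technologists rather than business users:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Load a sample dataset, hold out a test set, fit a model, score it.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = DecisionTreeClassifier().fit(X_train, y_train)
print(model.score(X_test, y_test))  # accuracy on held-out data
```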
That move toward less technical tools is how new technology grows, and I’d like to see TDWI focus on some of the new ones.
Summary
This was a good webinar, worth the time for those of you who are interested in a basic discussion of where machine learning is within the enterprise landscape.
Cloudera Now, a mini-conference on data, analytics and machine learning, is a good overview
Cloudera held a pretty impressive web event this morning. It was a mini-conference, with keynotes, some breakout tracks and even a small vendor area. The event was called Cloudera Now, and the link currently goes to the registration page; I’ll update it if that changes once the event is available on demand.
The primary purpose was to present Cloudera as the company for data support in the rapidly growing field of Machine Learning (ML). Given the state of the industry, I’ll say it was a success.
As someone who has an MS focused on artificial intelligence (ancient times…) and has kept up with the field, I saw holes, but the presentations I watched did set the picture for people who are just now hearing about it as a growing topic.
The cleanest overview was a keynote presentation by Tom Davenport, Professor of IT and Management, Babson College. That alone is worth the registration for those who want a well-presented overview.
Right after that, he and Amy O’Connor, Big Data Evangelist at Cloudera, had a small session that was interesting. On the amusing side, I like how people are finally beginning to admit, as Amy mentioned, that the data scientist might not be just one person. I’ll make a fun “I told you so” comment by pointing to an article I published more than three years ago: The Myth of the Data Scientist.
After the keynotes, there were three sessions of presentations, each with three seminars from which to choose. The three I attended were just OK, as they all dove too quickly into product pitches. Given the higher-level context of the whole event, I would have liked them all to spend more time discussing the market and concepts, with a much briefer pitch toward the end of the presentations. In addition, they seemed too addicted to the word “legacy,” without some of them really knowing what it meant or even, in one case, getting it right.
However, those were minor problems given what Cloudera attempted. For those business people interested in hearing about the growing intersection between data, analytics, and machine learning, go to Cloudera’s site and take some time to check out Cloudera Now.
P-values and what they mean for business intelligence and data scientists
I’d been thinking of writing a column on p-values, since the claim that data “scientists” can provide valuable predictive analytics is a regular feature of the business intelligence (BI) industry. However, my heavy statistics are years in my past. Luckily, there’s a great Vox article on p-values and on how some scientists are openly stating that p < .05 isn’t stringent enough.
It’s a great introduction. Check it out.
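For a flavor of what the debate is about, here’s a quick illustration with made-up samples; the .005 line is the stricter bar some of those scientists propose:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 50)  # control group
b = rng.normal(0.4, 1.0, 50)  # treatment group with a modest real effect

t_stat, p_value = stats.ttest_ind(a, b)
print(f"p = {p_value:.4f}")
print("significant at .05: ", p_value < 0.05)
print("significant at .005:", p_value < 0.005)  # the proposed stricter threshold
```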
Data ingestion or data indigestion?
Data lakes, the renamed ODS, aren’t the only solution for accessing data. Think about actual need, understand the supporting metadata, then build your data ingestion plan. Read my latest TechTarget column.
TDWI, CTG & SAP Webinar: Come for the misrepresentation of machine learning, stay for the fantastic case study
The webinar I’m going to review is worth seeing for one reason: The Covenant Transportation Group case study is a fantastic example of what’s going on in BI today. It’s worth the time. IMO, skip the first and third presenters for the reasons described below.
Machine Learning – Yeah, Right…
I usually like Fern Halper’s TDWI presentations. They’re usually cogent and make sense. This one, sadly, is why the word “usually” exists. The title of the presentation was “Machine learning – What’s all the hype about?” and Ms. Halper certainly provided the hype.
She started off with a definition of machine learning: “Field of study that gives computers the ability to learn without being explicitly programmed,” by Arthur Samuel (1959). The problem with the rest of the presentation is that the definition is still true, but the TDWI analyst bought into the BI hype that it has changed. She presents machine learning now as complex analytics and “enabling computers to learn patterns.” No, it’s not, except in our sector as people and companies try to jump on the machine learning bandwagon.
Our computers, at all levels of hardware, networking and software, are far faster than even a decade ago. That allows more complex algorithms to be run. But the fact that we can now analyze information much faster doesn’t suddenly make it “machine learning.”
There’s also the problem, one that seems common, of conflating artificial intelligence (AI) with expert systems. AI is simply what we don’t yet know about intelligence and are trying to learn how to program. When I studied AI in the 80s, robotics and vision were just becoming well known enough to become their own disciplines and leave the main lump of AI problems. Natural Language Processing (NLP) was starting to do the same.
Yet another problem I’ll discuss is a further conflation of analytics and learning. Fern Halper listed and mentioned, more than once, “machine learning algorithms.”
Note that all except neural networks are just algorithms that were defined and programmed long before machine learning. Machine learning is the ability of the software to decide which algorithm to use, how to change the percentages on decision tree branches and other autonomous decisions that directly change the algorithm. It isn’t an algorithm running fast and “finding things.”
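To make the distinction concrete, here’s a toy sketch of my own (nothing from the webinar): the threshold is part of the algorithm, and the learning step changes it from feedback, which a fixed analytic rule, however fast, never does:

```python
class LearningClassifier:
    """Toy model whose decision boundary adjusts itself from feedback."""

    def __init__(self, threshold=0.5, rate=0.1):
        self.threshold = threshold
        self.rate = rate

    def predict(self, score):
        return score >= self.threshold

    def learn(self, score, actual):
        # Nudge the boundary only when the prediction was wrong:
        # up after a false positive, down after a false negative.
        if self.predict(score) != actual:
            direction = 1 if self.predict(score) else -1
            self.threshold += direction * self.rate

clf = LearningClassifier()
for score, actual in [(0.55, False), (0.65, False), (0.9, True)]:
    clf.learn(score, actual)
print(round(clf.threshold, 2))  # 0.5 -> 0.7: the algorithm changed itself
```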
Neural networks? They’re not even an algorithm; they’re a way to process information. Any of the algorithms could run on a neural network architecture. A neural network is software that imitates the brain using multiple simple nodes, and teaching a neural network one of those algorithms is how it works.
Covenant Transportation Group – A Great Analytics Case Study
So forget what they tried to pitch you about machine learning. The heart of the webinar was a presentation by Chris Orban, VP, Advanced Analytics, Covenant Transportation Group (CTG), and that was worth the price of admission. It was a great example of how modern analytics can solve complex problems.
CTG is a holding company for a number of transportation firms; logistics is a KPI for them. There were two main issues discussed, both worthy of mention.
The first example was the basic issue of travel routing. It’s one thing to plan routes based on expected weather. It’s quite another to change plans on the fly when weather or other road conditions (accidents, construction and more) crop up. The ability of modern systems to bring in large volumes of data, including weather, traffic and geospatial information, means CTG is able to improve driver safety and optimize travel time through rapid analysis and communications.
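A minimal sketch of that kind of on-the-fly rerouting, using an invented road graph; nothing here reflects CTG’s actual systems:

```python
import networkx as nx

# Invented road network; edge weights are travel times in hours.
G = nx.Graph()
G.add_weighted_edges_from([
    ("depot", "A", 2.0), ("A", "customer", 2.0),  # normally the best route
    ("depot", "B", 3.0), ("B", "customer", 3.0),  # longer alternative
])

print(nx.shortest_path(G, "depot", "customer", weight="weight"))  # via A

# An ice storm is reported on the A leg: raise its cost and re-plan.
G["A"]["customer"]["weight"] = 10.0
print(nx.shortest_path(G, "depot", "customer", weight="weight"))  # via B
```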
The second example was just mentioned: driver safety. CTG, and the industry as a whole, has enormous driver turnover. That costs money and increases safety risks. They use algorithms to spot indicators that flag potential driver problems before they occur, allowing both the drivers and the companies to take corrective action. A key point Chris mentioned is that communications also help build the relationships that further lower driver turnover.
Forget that nothing he mentioned was machine learning. CTG is a great case study of the leading edge of predictive analytics and real-time (real-world real-time, that is) BI.
SAP
SAP was the webinar sponsor, so David Judge, VP and Leonardo Evangelist, SAP, wrapped up the webinar. I was hoping he’d address SAP’s machine learning, to see if it’s the real definition of the phrase or only more hype. Unfortunately, we didn’t get that.
SAP has rolled out their Leonardo initiative. It’s a pitch to lump all the buzzwords under a single brand, claiming to link machine learning, data intelligence, blockchain, big data, Internet of Things and analytics all under one umbrella. Mr. Judge spent his time pitching that concept rather than talking about machine learning.
The CTG case study makes it clear that SAP is supporting some great analytics, so they’re definitely worth looking at. Machine learning? Still a big question mark.
I did follow up with an email to him and I’ll let folks know if I hear anything informative.
BI Buzzwords for business management: Self-service and machine learning briefly explained
I’ve seen a few company webinars recently. Because I have serious problems with their marketing but don’t want to imply a problem with the technology, this post discusses the issues while leaving the companies anonymous.
What matters is letting business decision makers separate the hype from what they really need to look at when investigating products. I’m in marketing and would never deny its importance, but there’s a fine line between good marketing and misrepresentation, and that line is both subjective and fuzzy.
As the title suggests, I’ll discuss that line by describing my views of two buzzwords in business intelligence (BI). The first has been used for years, and I’ve talked about it before: the concept of self-service BI. The second is the fairly new and rapidly increasing use of the word “machine” in marketing.
Self-Service Still Isn’t
As I discussed in more detail in a TechTarget article, BI vendors regularly claim self-service when their software isn’t. While advances in technology and user interface design are rapidly approaching full self-service for business analysts, the term is usually aimed at business users, and there it’s just not true.
I’ve seen a couple of recent presentations with that message strewn throughout, but the demonstrations show anything but that capability. Linking data still requires more expertise than the typical business user has. Even worse, some vendors limit things further: analysts still create the basic reports and templates, within which business people can wander with a bit of freedom. Whatever the claims, I don’t consider that to approach self-service.
The result is that some companies provide a limited self-service within the specified data set, a self-service that strongly limits discovery.
As mentioned, the fact that self-service is either misunderstood or overpromised doesn’t change the fact that the technology still lets customers gain far more insight than they could even five years ago. The key is to take the promises with a grain of salt.
When you see it, ignore the phrase “self-service.”
Prospective BI buyers need to focus on whether the current state of the art presents enough advantages over existing corporate methodologies to provide proper ROI. That means evaluating vendors on specific improvements over your existing analytics and rigorously testing the products against your own needs and your team’s expertise.
Machine
Machine learning, to be discussed shortly, has exploded in usage throughout the software industry. What I recently saw, from one BI vendor, was a fun little marketing ploy to leverage that without lying. That combination is the heart of marketing and, IMO, differs from the nonsense about self-service.
Throughout the webinar, the presenter referred to the platform as “the machine.” Well, true. Babbage’s machines were analytical engines, the precursors to our computers, so complex software can reasonably be viewed as a machine. The usage brings to mind the concept of machine learning while never actually claiming it.
That’s the difference: “self-service” states something the products aren’t, while “machine” might vaguely bring machine learning to mind but doesn’t directly claim it. I am both amused and impressed by that usage. Bravo!
Machine Learning and Natural Language Processing
This phrase needs a larger article, one I’m working on, but I would be remiss to not mention it here. The two previous sections do imply how machine learning could solve the self-service problem.
First, what’s machine learning? No, it’s not complex analytics. Expert systems (ES) are a segment of artificial intelligence focused on machines that can learn new things. Current analytics can use very complex algorithms, but they drive user insight rather than provide their own.
Machine learning is the ability of the program to learn new things and even to add code that changes its algorithms and data as it learns. A question to an expert system has one answer the first time, and a different answer as the system learns from the mistakes in its first response.
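Here’s a deliberately tiny sketch of that behavior, entirely my own invention; real expert systems are far richer, but the point is that feedback changes future answers:

```python
class LearningAssistant:
    """Toy system whose answers change as corrections accumulate."""

    def __init__(self):
        self.knowledge = {"best cash account": "Account A"}

    def answer(self, question):
        return self.knowledge.get(question, "I don't know yet.")

    def correct(self, question, right_answer):
        self.knowledge[question] = right_answer  # learning from the mistake

assistant = LearningAssistant()
print(assistant.answer("best cash account"))         # first answer: "Account A"
assistant.correct("best cash account", "Account B")  # feedback on the error
print(assistant.answer("best cash account"))         # now answers "Account B"
```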
Natural Language Processing (NLP) is more obvious. It’s the evolving understanding of how we speak, type and communicate using language. The advances have meant an improved ability for software to respond to people without them clicking through lots of parameters to set up a search. The goal is to let people type or speak queries and have the ES respond with information at the business level.
My hope is that the blend will allow IT to set up systems that can learn a company’s data structures and the basic queries that might be asked. That will then allow business users to ask questions in a non-technical manner and receive information in return.
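To show the shape of what I’m hoping for, here’s a toy sketch; the schema, table names and keyword mapping are all invented, and a real system would use genuine NLP rather than keyword matching:

```python
# Mapping "taught" by IT: business topics to invented warehouse tables.
SCHEMA = {
    "sales":   {"table": "fact_sales",   "measure": "revenue"},
    "returns": {"table": "fact_returns", "measure": "return_count"},
}

def to_query(question):
    """Turn a plain-language question into SQL via a naive topic match."""
    for topic, meta in SCHEMA.items():
        if topic in question.lower():
            return (f"SELECT region, SUM({meta['measure']}) "
                    f"FROM {meta['table']} GROUP BY region")
    return None

print(to_query("What were sales by region last quarter?"))
# SELECT region, SUM(revenue) FROM fact_sales GROUP BY region
```

Note that the toy ignores “last quarter” entirely; closing gaps like that is exactly what the real NLP work is about.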
Today, business analysts have to directly set up the dashboards, templates and other tools used by business people, often requiring too much technical knowledge. When a business person has a new idea, it goes back into a slow cycle where the analyst has to hook in more data, add new templates and more.
When the business analyst can focus on teaching the ES where data is, what data is and the basics of business analysis, the ES can focus on providing a more adaptable and non-technical interface to the business community.
Machine learning, i.e. expert systems, and NLP are what will lead to truly self-service business applications. They’re not here yet, but they are on the horizon.