Tag Archives: data governance

TDWI Webinar — Engaging the Business, again from the technologist’s perspective

This week’s TDWI-hosted webinar was about engaging the business and, once again, it came from the standpoint of technologists rather than from the business. There were some very good things said. However, until our industry stops thinking of business knowledge workers as children to be tutored and begins to think of them as people whose knowledge is the core of what we must encapsulate, we’ll continue to miss the mark and adoption of solutions will remain slow.

The main presenter was David Loshin, President of Knowledge Integrity. He began the presentation with a slide that describes his view of the definition of “data driven,” including three main points:

  • Focus on turning data into actionable knowledge that can lead to increased corporate value.
  • Aware of variance that can cause inconsistent interpretation.
  • Coordination among data consumers to enforce standards for utilization.

We should all clearly understand that the first item is not new and was not created by the business intelligence (BI) industry. Business has always been data driven. What we’re able to do now is access far more data than ever before so that we can provide a more robust view of the corporation.

Inconsistent Data v Inconsistent Utilization

The second bullet is a core point. Mr. Loshin used a couple of examples, such as sales territory, where definitions are fuzzy. One clear example for me is one I directly experienced 25 years ago, and it speaks more directly to the visualization side of the BI conundrum. I was working for a major systems integrator (SI) and my client was, well, let’s just say it was a large, fruit-based computing company.

A different SI had created an inventory system for the client’s manufacturing facility, but the system was a failure even though all the right data was in it. The problem was that the reports were great for the accounting department, not for inventory and manufacturing. We interviewed the inventory team and then rewrote the reports to present the information from their standpoint.

Too often, technologists get lost in detailed data definitions and matching fields across data sources. That work is critical, but focusing on it alone loses the big picture. Even when data is matched, different business people use the data differently.

Which brings us to David Loshin’s third point. No, we don’t need to enforce exact standards for utilization. We need to ensure that the data each consumer refers to is consistent, but we must do a better job of understanding that different departments can utilize the exact same data in a variety of ways.
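To make that concrete, here is a minimal sketch in Python of the point: one consistent set of records can feed very different departmental views without anyone’s numbers going out of sync. The order records, field names and roll-ups are invented purely for illustration.

```python
from collections import defaultdict

# Invented sample data: one consistent set of order records.
orders = [
    {"sku": "A-100", "region": "West", "qty": 5, "revenue": 500.0},
    {"sku": "A-100", "region": "East", "qty": 3, "revenue": 300.0},
    {"sku": "B-200", "region": "West", "qty": 2, "revenue": 800.0},
]

def rollup(rows, dimension, measure):
    """Aggregate one measure by one dimension. The data never changes;
    only the lens a department chooses to look through does."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[dimension]] += row[measure]
    return dict(totals)

# Sales cares about revenue by region; inventory cares about units by SKU.
sales_view = rollup(orders, "region", "revenue")
inventory_view = rollup(orders, "sku", "qty")

print(sales_view)      # {'West': 1300.0, 'East': 300.0}
print(inventory_view)  # {'A-100': 8.0, 'B-200': 2.0}

# Because both views draw on the same rows, the totals still reconcile.
assert sum(sales_view.values()) == sum(r["revenue"] for r in orders)
```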

Business Drivers and Data Governance

David did get to the key issue a bit later, on a slide titled Operationalizing Business Policies. He points out that it’s critical to ensure that “Information policies model the data requirements for business policy.” This is key and should be bubbled up higher in the mindset of our industry. While I hear it mentioned often, it seems to be honored more in the breach.

Time was spent discussing the importance of understanding different users and their varying utilization of data. As I mentioned in the introduction, though, the proposed solution to these new complexities veers away from addressing business needs and toward ignoring history. In a previous blog post, I discussed how many in the industry seem to be ignoring the lessons to be learned from the advent of the PC. Mr. Loshin seems to be doing the same when he talks about empowering business users to set their own usability rules. He splits IT and business in the following way:

  • Business data consumers are accountable for the rules asserting usability for their views of the data.
  • IT becomes responsible for managing the infrastructure that empowers the business user.

The issue I have with that argument is a phrase that didn’t appear in this webinar until Linda Briggs, the moderator, mentioned it in a poll question right before Q&A: Data Governance. Corporations are increasingly liable for how they control and manage information. It does not make sense to allow each user to define their own data needs in a vacuum. Rather than allowing massively expanded and relatively uncontrolled access to data and then having to contract that access later, as corporations once had to do to regain a handle on what was being done on scattered desktop computers, BI vendors should be positioning data governance from the start.

Whether it’s by executive fiat, a cross-functional team, or some other method, companies need to clarify data governance rules. Often, IT is the best intermediary between groups, actively participating in data governance definition as an impartial observer and facilitator. It is then IT’s job to provide business workers with access that is as open as possible, given their needs and the necessity of following the governance rules.

There was one question, during Q&A, on the importance of data governance. I thought David Loshin again understated its importance, while Harald Smith, Director of Product Management at Trillium, the webinar sponsor, commented that “everyone is responsible for data governance.” That is my only mention of the sponsor, as I felt his portion of the presentation was a recitation of sound bites, talking points and buzzwords that didn’t add any value to the hour.

Summary

David Loshin has a clear view of engaging the business and gets a number of key things correct. However, that view is one of a technologist looking over a self-imagined bridge separating technology and business. There’s not a bridge separating IT and business. They overlap in many critical areas and both must learn from and work well with each other.

Data Governance and Self-Service Business Intelligence: History Repeating?

Self-Service BI is a big buzz phrase these days, even though many definitions exist. One thing is clear: it’s driving another challenge in the area of data governance. While people are starting to talk about this, it’s important to leverage what we’ve learned from the past. Too many technology industry folks are so enamored of the latest piece of software or hardware that they convince themselves their solutions are so new they are revolutionary, “have no competitors” or otherwise rationalize away context. The smart people won’t do that.

A Quick History Lesson: The PC

In 1982, I was an operator on one of Tymshare’s big iron floors. It was a Sunday and I was reading my paper while sitting at the console of an IBM 370/3033, their top-of-the-line business computer. The business section carried an article titled something like “IBM announces their 370 on a chip.” I looked up at my behemoth, looked back at the article and knew things would change.

Along came the PC. Corporate divisions and departments, frustrated at not getting enough resources from the always understaffed, underfinanced and overburdened IT staff, jumped on the craze. Out with the IBM Selectric and in with the IBM AT and its successors and clones.

However, by the end of the decade and early in the 1990s, corporate executives realized they had a problem. While it was great that each office was becoming more productive, the results weren’t as helpful at the corporate level. It’s a lot harder to roll up divisional sales data when each territory has a slightly different definition of its territories, leads and funnels. It’s hard to make manufacturing budget forecasts when inventory is stored in different formats and might use different aging criteria. It’s hard to show a government agency you’re in regulatory compliance when the data is in multiple, non-integrated systems.

Data governance had been lost. The next twenty years saw the growth of client/server software, such as that from Oracle and SAP, linking all offices to the same data structures and metadata while still trying to leave enough independence. That balance between centralized IT control and decentralized freedom of action is still being worked out, but it is necessary.

While the phrase “single version of the truth” is often mistakenly conflated with a data warehouse and a “single source of truth,” that’s not what it means. A single version of the truth means shared data and metadata that ensure all parties looking at the same data come up with the same information, if not the same conclusions from that information.

Now: Self-Service BI

Look at the history of the BI market. There have always been reports. With the advent of the PC, we had the de facto standard of Crystal Reports for a generation. Then, as packaged ERP, CRM, SFA and other systems grew, companies such as Cognos and Business Objects came along to focus on more complex analysis. However, they were still bound by the client/server model, tied primarily to mid-tier Unix servers and Microsoft/Apple PCs.

What’s changed now is the evolution of the internet into the Cloud and of phones into smartphones and tablets. Divisions and departments that were once leashed to big iron and CICS screens, and more recently tied to desktops, are feeling their oats and are interested in quickly developing their own applications that let their knowledge workers access information while not seated in the office.

Self-Service BI (And, no, I’m not going to make an acronym as many have. Don’t we have enough?) is the PC of this decade. It’s letting organizations get information to people widely without waiting for IT, which is still underfunded, understaffed and overburdened, to distribute it. Alas, that wide distribution comes without controls and without audit trails. Data governance is again being challenged.

I’ve listened to a number of presentations by vendors to the BBBT, and there is hope. Gone are the days when all BI companies talked about was helping business people avoid using IT. There’s more talk about metadata, more interest in security and access control, and a better ability to provide audit trails. There’s an understanding that it’s great to let every knowledge worker look at the data and pull out the information that addresses their needs, while still ensuring that the base data is consistent and the metadata is shared.
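As a rough illustration of what those controls look like in practice, here is a minimal Python sketch of role-based access to reports plus an audit trail of every request. The roles, report names and log format are my own assumptions; they are not drawn from any vendor’s product discussed here.

```python
from datetime import datetime, timezone

# Invented mapping of reports to the roles allowed to view them.
REPORT_ACCESS = {
    "regional_sales": {"sales", "finance"},
    "payroll_summary": {"finance"},
}

audit_log = []

def fetch_report(user, role, report):
    """Serve a report only to permitted roles, recording every attempt."""
    allowed = role in REPORT_ACCESS.get(report, set())
    audit_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "report": report,
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"{user} ({role}) may not view {report}")
    return f"...contents of {report}..."

fetch_report("pat", "sales", "regional_sales")        # served and logged
try:
    fetch_report("pat", "sales", "payroll_summary")   # denied and logged
except PermissionError:
    pass

for entry in audit_log:
    print(entry)
```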

Summary

We can learn from history. The PC was a great experiment in watching the pendulum swing from almost complete IT control to almost no IT control then back to a more reasonable middle. The BI community shows signs of learning from history and making a much faster switch to the middle ground. That’s a great thing.

Technologists working to help businesses improve performance through data, BI and analytics need to remember the great quote from Daniel Patrick Moynihan, “Everyone is entitled to his own opinion, but not his own facts.”

Yellowfin at BBBT: Visualization and Data Governance Begin to Meet

Last Friday’s BBBT presentation was by Glen Rabie, CEO, and John Ryan, Product Marketing Director, of Yellowfin. I reviewed their 7.1 release webinar in late August, but this was a chance to hear a longer presentation aimed at analysts.

Their first focus was on the BARC BI Survey 14. One point is that they were listed as number one, by far, in the number of sites using the product in a cloud environment. That’s interesting because Yellowfin does not offer a cloud version; these are corporations installing their own instances on cloud servers.

A Tangent: Cloud v On-Premises?

That brings up an interesting issue. Companies like to talk about cloud versus on-premises installations (regardless of the large number of people who don’t seem to know to use the “s”), but the distinction isn’t as clean as it sounds. Cloud can be upper case or lower. Upper case Cloud computing happens on the internet, outside a company’s firewall. However, many server farms, both corporate-owned and third-party, allow multi-server applications to run inside corporate firewalls the same way they’d run outside. That’s still a cloud installation by the technical methodology, but it’s not in the Cloud. It’s on-premises in theory, since it’s behind the firewall.

Time for a new survey. We’re talking about multi-server, parallel processing applications versus single-server technology. What’s a good name for that?

Back to Our Regularly Scheduled Diatribe

One bit of marketing fluff I heard is the claim to be the first completely browser-based UI. I’ve heard from a number of other vendors who have used HTML5 to provide pure browser interfaces, so I don’t know or care whether Yellowfin was first. The fact that they’re there is important, as is the usability of the interface. The latter matters more. As I mentioned in the v7.1 review, they don’t hide that they’re focused on the business analyst rather than the end user, and for that target audience it is a good interface.

An important issue that points to a maturation of business intelligence in the marketplace was indicated by a statement John Ryan made about their sales. Yellowfin’s business used to be based almost exclusively on sales to small pilot projects, which it then worked to grow into a larger footprint within its clients. He mentioned that they’ve seen a recent and significant increase in the number of leads coming into the funnel as full enterprise sales from the start. That’s a testament both to IT reviewing and accepting the younger BI companies and to Yellowfin’s increased visibility in the market.

“All About the Dashboard” and Data Governance

Glen and John repeatedly came back to the idea that they’re all about providing dashboards to the business user, letting technical people do discovery and the tough work and then just addressing visualization for the end user. The idea that the technical people should do the detailed discovery and the business user should just look at things, slicing and dicing in a limited fashion, might be a reason they’re seeing more enterprise sales.

They seem to be telling IT that companies can get modern visualization tools while still controlling the end users. That’s still a priests-at-the-temple model. That’s not all bad.

On one side, they’ll continue to frustrate end users by limiting access to the information they want to see. On the other side, many newer firms are all about access and don’t consider data governance. Yes, we want to empower business knowledge workers, but we also need to help companies with regulatory and contractual requirements for data governance.

Yellowfin seems to be walking a fine line. They have some strong data governance capabilities built in, with access control and more. One very useful function is the ability to watermark reports with their approved status within the company. It might seem minor, but helping viewers understand the level of confidence a firm has in a given analysis is clearly an advantage.
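To show why even a small feature like that matters, here is a hypothetical sketch of the watermarking idea: every report carries an approval status that is rendered where the viewer can’t miss it. The statuses and the rendering are assumptions made for illustration, not a description of Yellowfin’s actual implementation.

```python
from enum import Enum

class ApprovalStatus(Enum):
    """Invented status levels a governance team might assign to a report."""
    DRAFT = "DRAFT - not yet reviewed"
    UNDER_REVIEW = "UNDER REVIEW"
    APPROVED = "APPROVED - governed data"

def render_report(title, body, status):
    """Prepend a visible watermark line so viewers see the confidence level."""
    watermark = f"[{status.value}]"
    return f"{watermark}\n{title}\n{'-' * len(title)}\n{body}"

print(render_report("Q3 Regional Sales",
                    "West: 1,300   East: 300",
                    ApprovalStatus.UNDER_REVIEW))
```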

An interesting discussion occurred in the session and on Twitter about a phrase used in the presentation: Governed Data Discovery. Some analysts think it’s an oxymoron, that data discovery shouldn’t be governed or limited or it’s not discovery. I think it makes a lot of sense because some level of control is needed. Seeing all data makes no sense for many reasons, and governance is required. Certainly, overly tight governance and control is a problem, but I like where Yellowfin seems to be going on that front.

But What About the Rest of BI?

As mentioned, Yellowfin is working to let analysts build reports and help knowledge workers consume them. However, the reports are built from data. Where does that data come from? Who knows?

When I asked how they get the information, they clearly stated they weren’t interested in the back end of BI, not ETL, not databases. They’re leaving that to others. That’s a risk.

Glen Rabie pointed out, earlier in the presentation, that many of their newer clients are swap-outs of older BI technologies. For instance, he said two of his more recent clients in Japan had swapped out Business Objects for Yellowfin. Check the old Business Objects press releases from customers over the last couple of decades. The enterprise sales weren’t “Business Objects sells…” but rather “Business Objects and Informatica,” “Business Objects and Teradata,” etc. Visualization is the end of a long BI process, and enterprises want the full information supply chain.

As long as Yellowfin is both clear about its focus and prepared to work closely with other vendors in joint sales situations, that focus won’t be a problem as the company grows. They need to be prepared for that or they’ll slow the sales cycle.

Social Media Overthought

The final major point is about Yellowfin’s functionality for including social media within the product to enhance collaboration. While the basic concept is fine and their timeline functionality allows a team to track the evolution of the reports, I have two issues.

First, the product doesn’t link with other corporate-wide social tools. That means if a Yellowfin user wants to share something with someone who doesn’t otherwise use the tool, a new license is needed. I know that helps Yellowfin’s top line, but I think there should be some easy way of distributing new analysis for feedback from a wider audience without a full license.

Second, and much less important, is the mention of allowing people to vote on the reports. I was amused. It reminded me of that great quote from the late Daniel Patrick Moynihan: “Everyone is entitled to his own opinion, but not to his own facts.” I think the basic social tool in Yellowfin is very useful, but voting on facts seems a tad excessive.

Summary

Glen Rabie and John Ryan gave a great overview of Yellowfin, covering both the company’s strategy and the current state of the product. Their visualization is as good as most others’, and they have some of the most advanced data governance capabilities in the BI industry.

There’s a lot of good going on down under. Companies wanting modern visualization tools should take a look, with one caveat. If you think that the power of modern systems means functionality is clearly moving forward and should allow business users to do more than they have been able to do, Yellowfin might not match up well against other firms. If you think that end users only want dashboards and you want a good way of providing business workers with those dashboards, call now.