4 Barriers to Big Data Success — and Ways to Overcome Them
Updated · May 13, 2013
The good news: At a recent IBM conference on Big Data, users confirmed a growing body of evidence from sources such as Sloan Management Review showing that success in Big Data does deliver bottom-line, competitive-advantage benefits. The bad news: There are significant barriers to implementing and evolving a successful Big Data strategy – and these barriers are tough to crack.
More good news: There are ways to overcome Big Data barriers to success, and many firms are already succeeding in moving beyond the barriers and enjoying an ongoing Big Data competitive advantage. Specifically, users cited the following four Big Data barriers (in no particular order):
- “Reskilling” the work force to take advantage of Big Data analytics;
- Delivering a clear business case for an optimal Big Data strategy;
- Overcoming cultural and communications barriers within and between IT and the rest of the business in order to achieve a synchronized strategy; and
- Handling integration of the new types of data that begin to flood in as Big Data is implemented.
Let’s take these one by one, and discuss some effective ways that have already been identified to overcome each barrier.
Reskill with IT and Corporate Commitment
“Reskilling” involves both training IT personnel in the new technologies involved in supporting Big Data analytics and enabling a significant portion of the rest of the company to create and/or use Big Data analytics in key business functions. This is not just a matter of training courses in a few minor new skills; the most successful companies focus on using Web data to understand their customers, and that strategy carries with it much greater “reskilling” requirements.
Thus, for greatest effect, IT needs to understand how to handle a major new body of very different data – different because it involves social media, because it comes from outside the company, and because it often involves streaming “sensor-derived” data, with “sensors” ranging from GPS tracking devices to customer smartphone pictures and video. Likewise, for maximum positive impact, an unusually large proportion of the business needs to fold into its processes regular analysis of what this Big Data is telling it about the firm’s customers.
For example, marketing and sales execs who are used to employing “intuition” and selective one-to-one contacts to drive product marketing and development need to shift over to more data-driven product strategies – as the agile marketing movement also recommends.
One part of a successful way to overcome this barrier is simply the tried-and-true, top-level corporate commitment. There should be a corporate champion for, or a top-level corporate focus on, Big Data reskilling – which, in IT’s case, will inevitably be reflected in IT reskilling imperatives.
A second part of this approach is to ensure that the focus among corporate Big Data users is much more on understanding the customer and leveraging that understanding, and much less (contrary to what most less successful companies assume) on cutting costs and improving business processes. Typically, the chief marketing officer (CMO) takes the lead in identifying methods of doing this – often by example.
A third way of handling the reskilling barrier is not only to identify the new skills needed and the new tools needed to support those skills (e.g., JSON and Hadoop for IT, in-depth ad-hoc querying for business users) but also to customize these for the needs of the particular business. Users at the conference testified that a particular business will require a significant effort to prepare or “train” a Big Data implementation to find the parts of Web data that are meaningful to, say, an oil/gas company or a major retailer.
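To make the IT side of this concrete, here is a minimal sketch of one such tool in action: a Hadoop Streaming mapper, written in Python, that scans line-delimited JSON Web records for business-specific vocabulary. The record layout (a “text” field) and the keyword list are purely illustrative assumptions – each business would substitute its own domain terms and pair the mapper with a matching reducer.

```python
#!/usr/bin/env python
# Minimal Hadoop Streaming mapper sketch: reads line-delimited JSON Web
# records from stdin and emits counts for business-specific keywords.
# The "text" field and the keyword list are hypothetical placeholders.
import json
import sys

DOMAIN_KEYWORDS = {"drilling", "pipeline", "upstream"}  # e.g., oil/gas terms

def main():
    for line in sys.stdin:
        try:
            record = json.loads(line)
        except ValueError:
            continue  # skip malformed records rather than failing the job
        text = str(record.get("text", "")).lower()
        for word in text.split():
            if word in DOMAIN_KEYWORDS:
                # Hadoop Streaming expects tab-separated key/value output
                print("%s\t1" % word)

if __name__ == "__main__":
    main()
```

The point is less the code than the customization: the keyword set, and the fields worth extracting, are exactly the business-specific “training” the conference users described.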
Focus Business Case on Opportunity, Not Cost
Big Data seems like another case of spending on new resources. For the past six years of recession or slow revenue growth, companies have typically focused more on driving profits by cutting costs, and less on undertaking major new projects. The business case for each Big Data initiative – not to mention the entire strategy – is therefore often perceived as tough to make.
One often-used approach to making a business case, therefore, has been to stress the very real benefits in using internal Big Data for business process improvements. There is now plenty of evidence that going more in-depth on processes ranging from computing network optimization to customer interactions cuts the costs of these processes. This evidence can easily be cited in making a business case for Big Data.
Another approach – sometimes labeled FUD, for “Fear, Uncertainty, and Doubt” – focuses on the risks of being blindsided by market changes if the corporation doesn’t react fast enough. Today’s increased risk is a major concern of most businesses, but this risk-based business case suffers from the disadvantage of being tougher to document while promising no immediate cost cuts.
A possibly more effective approach might be called the opportunity-focused business case. That is, the business case – often stressing customer engagement or innovation – places comparatively greater stress on opportunities for cost-effective, top-line business results: improving product desirability, improving customer relationships, and leveraging customer and Web-community input and dialogue to create buzz and identify new target markets. The cost savings – e.g., wasting less money on the wrong channels – are presented as likely byproducts of the better “fit” with the customer. There are also cases to which to point in support of this argument.
Break Through “Soft” Barriers with Automation
The barriers that Big Data implementers encounter relating to business culture, IT culture and communication between the two are typically part of a long-standing, hard-to-tackle “soft” barrier within business and within IT. This barrier is often expressed as “getting IT in sync with the business strategy.” From my point of view, much of the friction happens because neither side is thinking agilely and proactively enough about solutions for the customer – but whether you accept that or not, hard to tackle does not mean impossible to tackle.
As it turns out, recent Big Data solutions carry with them a new ability to automate routine tasks, both on the business end and on the IT end. That very automation encourages more communication on a positive level – that of delivering deeper, actionable insights to do things better. To give a more concrete example: Automating much of the task of creating a typical ad-hoc query relieves IT and the business user of that work, and the very process of creating the more automated query gives IT better insight into the business’ needs and tells the business user more about how IT can help.
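What might such query automation look like? Here is a minimal sketch, in Python: the business user supplies only filters, and a small helper generates a safe, parameterized SQL query. The table, columns and filter whitelist are hypothetical stand-ins for whatever a real implementation would expose.

```python
# Minimal sketch of ad-hoc query automation: the business user supplies only
# a dict of filters; the helper generates parameterized SQL, so neither IT
# nor the user hand-writes the query. All names here are hypothetical.
def build_customer_query(filters):
    allowed = {"region", "segment", "channel"}  # whitelist guards against injection
    clauses, params = [], []
    for column, value in sorted(filters.items()):
        if column not in allowed:
            raise ValueError("unsupported filter: %s" % column)
        clauses.append("%s = ?" % column)
        params.append(value)
    sql = "SELECT customer_id, lifetime_value FROM customers"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return sql, params

# A marketing user asks for one segment in one region:
sql, params = build_customer_query({"region": "EMEA", "segment": "retail"})
# sql    -> SELECT ... WHERE region = ? AND segment = ?
# params -> ['EMEA', 'retail']
```

Even in this toy form, the communication benefit shows: the whitelist of filters is itself a small, shared statement of what the business needs from IT.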
The list doesn’t end there. Some systems allow ongoing training in new, business-specific “expert domains,” allowing both other business users to tap into that knowledge with much less effort (i.e., more frequently) and IT to understand that business area’s needs better.
Still, simply automating is not enough, or breaking through soft barriers would be a breeze. The key to greater success, it appears, is to do the hard work of “engaging with the customer” – and, in particular, with the “data scientists” who create ad-hoc queries as part of a profession of generating in-depth analytical insights on the business end. These folks move fast, and many presently see IT as an active impediment. Above all, tailoring rapid Big Data automation to the business-specific needs of these users seems to be part of the way to overcome soft barriers.
The other critical point here, as it seems to be elsewhere, is that such automation should be aimed less than it often is at cutting business-process costs, and more at yielding customer insights (often in the process of improving customer-facing business processes). Start with automating analysis involving customer data, some successful implementers suggest, and you will often get bigger cultural buy-in.
Crush Data Integration Barriers by Making Speed the Bottom Line
As one user at the IBM conference put it (and others nodded in agreement), it is already hard to keep up with the new Big Data data types flowing in from the Web, and it’s getting harder and harder. Specifically, data models designed for rarely-changing databases go through a lengthy process to accommodate large changes. The data-governance requirement that all new data types be registered and made consistent, in the name of data quality, is completely justified, but it adds a further delay before the new data is even available for analysis for the first time.
Shocking though it may sound, the reaction of users who feel they are successfully overcoming this barrier is simply to say that the overriding priority is to make that data available now. Data governance and data-model entry are required to support that first, and to accomplish their own tasks afterward.
Interestingly, the net effect appears to be that data governance and database updaters up their game to meet the demand, and both data quality and database capabilities improve as a result, often more than they would have absent this approach. It seems a bit like “sink or swim” – but there’s a method to the madness.
Specifically, once the organization perceives data integration as a strategic need, it turns out there are many data integration tools out there that handle data discovery, global metadata management and integration (at some level) of the new data once the data types are taken care of. The outstanding example of this is integrated data virtualization solutions from vendors such as Composite Software and Denodo, among others. However, the user can also assemble the pieces from separate tools or merge the effort with master data management processes. To sum up, “speed above all” is not only doable and preferable – it can convey side benefits even beyond Big Data.
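For readers unfamiliar with the data virtualization idea, here is a toy sketch in Python of what it buys you: one query facade joins a relational source with a newly arrived JSON Web feed without first consolidating them in a warehouse. Real products (such as those named above) add global metadata and query optimization; the schema and data here are invented for illustration.

```python
# Toy sketch of the data virtualization idea: join a relational source with
# a JSON Web feed on the fly, without remodeling or copying either source.
# All schema, table and field names below are hypothetical.
import json
import sqlite3

# Source 1: an existing relational system (in-memory stand-in).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
db.executemany("INSERT INTO customers VALUES (?, ?)",
               [(1, "Acme Retail"), (2, "Gusher Oil")])

# Source 2: a newly arrived Web feed (JSON stand-in, not yet in any data model).
feed = '[{"customer_id": 2, "sentiment": 0.9}, {"customer_id": 1, "sentiment": 0.4}]'
sentiment = {row["customer_id"]: row["sentiment"] for row in json.loads(feed)}

# The "virtual" joined view: the new data is analyzable immediately,
# while formal governance and data-model entry catch up afterward.
for cust_id, name in db.execute("SELECT id, name FROM customers"):
    print(name, sentiment.get(cust_id, "n/a"))
```

The design point matches the “speed above all” approach: the new feed is queryable on day one, and registering it properly in the data model becomes a follow-on task rather than a gate.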
The Top and Bottom Lines
The IBM conference user testimony has reinforced, yet again, my conclusion that Big Data is an exciting and valuable new strategy for IT, and that successful implementers will see large rewards. At the same time, some evidence suggests that many organizations are not yet seeing those rewards. The user testimony points to the inability to break through these barriers as a major reason for that lack of success – and yet the barriers can be overcome.
So let me repeat the gist of the ways some users have found to tackle the problem:
- Reskill by corporate and IT commitment to a Big Data strategy, followed by focus on the customer and customization
- Make your business case more about cost-effective customer opportunity, and less about business-process cost and risk
- Break through soft cultural and communications barriers with automation, especially aimed at data scientists and customer insights
- Handle data integration barriers by making analysis and decision-making speed the bottom line via data virtualization solutions and the like
Big Data is a Big Strategy. Big Barriers can be overcome. Go out and try ways like these to reap the Big Rewards.
Wayne Kernochan is the president of Infostructure Associates, an affiliate of Valley View Ventures that aims to identify ways for businesses to leverage information for innovation and competitive advantage. Wayne has been an IT industry analyst for 22 years. During that time, he has focused on analytics, databases, development tools and middleware, and ways to measure their effectiveness, such as TCO, ROI, and agility measures. He has worked for respected firms such as Yankee Group, Aberdeen Group and Illuminata, and has helped craft marketing strategies based on competitive intelligence for vendors ranging from Progress Software to IBM.