Trends in analytics take time to develop.
Four years ago, at the dawn of 2019, some industry experts predicted that augmented intelligence would essentially take over business intelligence in the ensuing 12 months. Since then, augmented intelligence capabilities have indeed advanced, but the evolution of features such as natural language processing (NLP) and process automation has been slow.
In fact, NLP is once again one of the trends industry experts are predicting will gain momentum, suggesting that as 2023 begins, it still hasn’t reached the level of functionality that was expected years ago.
Similarly, a cloud takeover has long been expected. But the reality is that cloud migration is complex and costly, and while organizations are indeed moving their data and analytics to the cloud, it’s not happening all at once.
Sometimes, trends in analytics are simply overrated, or at least their time is further off than anticipated. That might be the case with data mesh and automated machine learning, according to some experts.
Conversely, other trends in analytics get underrated. In particular, data governance is often overlooked. It’s a critical element of a successful analytics program, but it doesn’t have nearly the same flash as AI or machine learning, so it doesn’t get talked about as much.
Data mesh is a decentralized approach to analytics. It first gained attention in 2019; Zhamak Dehghani, director of emerging technologies at Thoughtworks, is generally credited with originating the concept.
Most organizations implement a centralized analytics architecture in which data is managed by a department of data experts who integrate the organization's data, build required and requested data products such as models and dashboards, and parse out data and data products to decision-makers as needed.
Data mesh removes the ownership of an organization’s data from a centralized team and puts it in the hands of domain experts.
Domains are essentially departments — finance, for example. Rather than load finance data into a centralized database to be managed by centralized data experts, under a data mesh approach, the finance department has its own data repository — connected to the data repositories of other departments with a data catalog to avoid data silos — and maintains its own dedicated team of overseers within finance. Those domain experts within finance then work with the rest of the finance department to build finance-specific data products and analyze finance data.
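The domain-ownership-plus-shared-catalog structure described above can be sketched in a few lines of code. This is a toy illustration of the concept, not any real data mesh framework; all class and product names are invented for the example.

```python
# Toy sketch of the data mesh idea: each domain owns its data products,
# but registers them in a shared catalog so other domains can discover
# them. Ownership stays local; discovery is global, avoiding silos.

class DataProduct:
    def __init__(self, name, owner_domain, description):
        self.name = name
        self.owner_domain = owner_domain  # the domain team accountable for it
        self.description = description

class DataCatalog:
    """Organization-wide catalog connecting domain-owned repositories."""
    def __init__(self):
        self._products = {}

    def register(self, product):
        self._products[product.name] = product

    def discover(self, keyword):
        return [p for p in self._products.values()
                if keyword.lower() in p.description.lower()]

catalog = DataCatalog()

# The finance domain builds and registers its own data products ...
catalog.register(DataProduct("quarterly_revenue", "finance",
                             "Revenue figures by quarter"))
# ... and other domains do the same with theirs.
catalog.register(DataProduct("churn_model", "marketing",
                             "Customer churn predictions"))

# Any domain can discover products across the organization.
matches = catalog.discover("revenue")
print([(p.name, p.owner_domain) for p in matches])
```

The key design point is that `register` never transfers ownership: the finance team remains accountable for `quarterly_revenue` even though the whole organization can find it.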
The supposed benefits are twofold.
First, the supposition is that data experts who also specialize in finance will be able to come to better insights about financial data than general data experts. In addition, because they work closely with business analysts within finance, the domain experts presumably will be able to better teach those business analysts to interpret data on their own than would an organization-wide data literacy program, also leading to better insights.
Second, by removing data oversight from a centralized team, the bottlenecks — and subsequent long lag times for building data products and analyzing data — that often result from a centralized approach will be drastically reduced.
As a result of its potential benefits, data mesh is seen as a rising trend in analytics.
But it’s a bit overrated, at least at the start of 2023, according to Dan Sommer, senior director and global market intelligence lead at Qlik. The problem some organizations curious about the approach are encountering is that data mesh doesn’t pay enough heed to the importance of data governance, which is more easily developed, implemented and overseen by a centralized team, he noted.
“I think there was a lot of kicking the tires and trying the data mesh approach in 2021, but speaking to executives, I think organizations are looking at — especially with everything that went on globally — a focus on governance,” Sommer said.
Many organizations that had not yet invested in analytics or launched analytics platforms before the COVID-19 pandemic did so once the pandemic started, he said. They realized the need for data-informed decision-making to deal with unprecedented uncertainty. That continued as other worldwide events, including global supply chain disruptions, the war in Ukraine and an economic downturn, added still more uncertainty.
In their rush, however, organizations largely ignored data governance. Only now are they adding more complete data governance frameworks.
“Data mesh … holds a lot of promise, but we haven’t gotten there yet with a universal [governance] artifact that can connect it all,” Sommer said.
In particular, data mesh needs a better approach to metadata management, according to Sommer.
“That’s why the hype has subsided,” he added.
Other overrated trends
But data mesh isn’t the only overrated trend in analytics.
Another one is automated machine learning (AutoML), according to David Menninger, an analyst at Ventana Research.
Machine learning is a critical part of data science.
Data science models are designed to help organizations predict the future. When developing machine learning models, data scientists program the models with ML capabilities so that the models can detect trends and predict future outcomes without humans needing to interpret all the data. In addition, machine learning enables models to get smarter as more data is collected and integrated into the models.
AutoML removes much of the programming burden from data scientists by automating aspects of the development and maintenance of machine learning models. Its primary goal is to save data scientists from the time-consuming and mundane tasks of training, retraining and constantly updating the models to ensure they’re using the most current data to inform results.
In addition, when no-code capabilities are added to AutoML, business users are able to build their own data models, removing even more of the burdens often placed on data scientists.
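What AutoML automates can be illustrated with a deliberately simple sketch: fit several candidate models and pick the one with the lowest validation error, with no human tuning each candidate by hand. Real AutoML tools search far larger spaces of models and hyperparameters; this pure-Python toy only shows the shape of the idea, and all function names are invented for the example.

```python
# Toy "AutoML" loop: fit each candidate model on a training split,
# score it on a validation split, and keep the best performer.

def mean_model(xs, ys):
    """Baseline candidate: always predict the training mean."""
    mean = sum(ys) / len(ys)
    return lambda x: mean

def linear_model(xs, ys):
    """Candidate: simple least-squares line fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    slope = cov / var if var else 0.0
    intercept = my - slope * mx
    return lambda x: slope * x + intercept

def auto_select(train, valid, candidates):
    """Return (name, model) with the lowest validation mean squared error."""
    def mse(predict, data):
        return sum((predict(x) - y) ** 2 for x, y in data) / len(data)
    xs, ys = [x for x, _ in train], [y for _, y in train]
    fitted = [(name, fit(xs, ys)) for name, fit in candidates]
    return min(fitted, key=lambda nf: mse(nf[1], valid))

train = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0)]
valid = [(5, 10.1), (6, 11.8)]
name, model = auto_select(train, valid,
                          [("mean", mean_model), ("linear", linear_model)])
print(name)  # the linear fit wins on this roughly linear data
```

Production AutoML adds, on top of this loop, feature engineering, cross-validation, hyperparameter search and scheduled retraining as new data arrives, which is exactly the mundane work the article says it lifts off data scientists.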
But AutoML has shortcomings, according to Menninger.
While useful, it hasn’t yet reached the point where it can enable the construction of sophisticated data models that can do deep data analysis on the level of models built and trained by data scientists.
“AutoML is valuable, but not yet at the point where it’s going to replace data scientists,” Menninger said.
Like other augmented intelligence capabilities once predicted to replace humans, the reality of AutoML is that it’s best viewed as a capability to aid humans and make them more efficient. It can indeed take on some simple analysis all on its own, according to Menninger. And data scientists can use it to do some of the work they otherwise would have to do manually.
“We need AutoML,” he said. “It’s helpful to enable relatively simple data science analyses, and it’s also helpful to make data scientists more productive. But I think perhaps the market expectations for AutoML are higher than that.”
Similarly, natural language search is a useful capability, but a somewhat overrated trend in analytics, according to Krishna Roy, an analyst at 451 Research.
When ThoughtSpot first emerged from stealth in 2014, its platform was centered on natural language search. Its tools enabled users of all skill levels to simply type a query into a search bar and interact with data much as they would with Google.
Since then, the vendor has advanced its search capabilities with more sophisticated background capabilities that expand the vocabulary of the platform, better understand different phrasings and comprehend more languages.
But while ThoughtSpot has developed a well-regarded platform centered around search, other vendors have not. They have some natural language query and generation capabilities, but it takes more than just a search bar to enable sophisticated analysis. Even ThoughtSpot has expanded its suite beyond search and now offers a more full-featured platform.
“I think search has been overhyped for years,” Roy said.
She noted that one vendor does search well — though declined to name the vendor — and added that for a variety of reasons, it remains difficult to make natural language search a primary means of interacting with data.
New chatbot technologies such as ChatGPT could speed the process of making natural language query and generation more robust, but despite all its hype, ChatGPT is only in its first iteration and has shortcomings.
“It’s very hard to provide a good, all-encompassing search capability within an analytics platform, even though search has been posited for years as a way to make business intelligence more pervasive by lowering the skills gap,” Roy said.
Not enough attention is paid to scenario planning, according to Menninger.
Analytics platforms are good at showing what already happened. And they’re good at predicting what will happen if conditions remain relatively stable. But they’re not good at running through a multitude of scenarios and demonstrating what will happen if one scenario plays out versus another.
Instead, analytics vendors have largely left scenario planning — the asking of the question, “What if … ?” — to vendors such as Anaplan that specialize in scenario planning.
Amid economic uncertainty, however, scenario planning is vital.
It’s what can enable organizations to prepare should a new COVID-19 variant arise that can evade vaccines and require severe responses like those implemented in 2020. It can enable organizations to understand how to react if different aspects of their supply chain are disrupted. It can enable organizations to know what to do if the economic downturn deepens or abates.
It can also enable organizations to understand the implications of small decisions such as when to add a new employee.
It is therefore a vastly underrated trend in analytics.
Some analytics vendors, including Oracle and IBM, provide scenario planning as part of their platforms, but not enough of them do, according to Menninger.
“You need a driver-based planning tool to evaluate the implications of different decisions such as whether to hire 10 people or whether to hire 20 people — what’s that going to cost, how quickly will they be productive, what it means in terms of a whole bunch of other factors,” he said.
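The kind of driver-based comparison Menninger describes can be sketched as a small model in which business drivers (head count, salary, ramp-up time) feed an outcome estimate for each scenario. The formula and all the numbers below are invented purely for illustration; a real planning tool would model many more drivers and their interactions.

```python
# Hedged sketch of driver-based scenario planning: "what if we hire
# 10 people versus 20?" Each driver feeds a simple first-year estimate.

def hiring_scenario(new_hires, annual_salary, ramp_months,
                    revenue_per_productive_hire):
    """Estimate first-year cost and revenue impact of a hiring decision.

    Assumes hires produce no revenue during ramp-up, then contribute a
    prorated share of revenue_per_productive_hire for the rest of the year.
    """
    cost = new_hires * annual_salary
    productive_fraction = max(0, 12 - ramp_months) / 12
    revenue = new_hires * revenue_per_productive_hire * productive_fraction
    return {"hires": new_hires, "cost": cost,
            "revenue": revenue, "net": revenue - cost}

# Compare the two scenarios side by side.
for scenario in (hiring_scenario(10, 90_000, 3, 200_000),
                 hiring_scenario(20, 90_000, 3, 200_000)):
    print(scenario)
```

Changing any single driver, such as stretching `ramp_months` from three to six, immediately reruns the comparison, which is the core of the "What if ... ?" workflow the article describes.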
The rise of decision intelligence could help spread the inclusion of scenario planning by vendors and its use by analytics consumers, Menninger continued.
Decision intelligence is the use of AI and ML to surface insights and augment human decision-making. And among those insights are the results of different scenarios. Now, vendors including Pyramid Analytics, Tellius and Sisu Data are making decision intelligence the focus of their platforms.
“One of the key elements of decision intelligence is planning,” Menninger said. “I think planning is still underrated, and it will feed into decision intelligence.”
Also an underrated trend in analytics, despite its critical importance, is data governance, according to Sommer.
Data governance isn’t splashy like no-code tools or AI. Instead, it’s essentially an organization’s rules and regulations related to its data.
Those rules and regulations, however, are what enable an organization's employees to work with data: they restrict users in certain ways to ensure regulatory compliance, while also empowering them to explore data with confidence and reach insights that can help the business.
And many enterprises are behind in implementing a data governance framework, Sommer noted.
Many of those that bought and implemented an analytics platform when the pandemic started — or in the time since — did so out of desperation. As a result, they prioritized getting up and running over security and governance.
Now, they need to catch up.
“There’s a little bit of a [governance] debt going on where organizations invested in a lot of technology and a lot of data, and they need to play catch-up on unsexy things like privacy and security and governance,” Sommer said. “They [need to] have inertia and not be afraid, but at the same time they need to not overstep any regulatory bounds, so that would be underrated.”