Last week, Big on Data bro Andrew Brust delivered an encyclopedic roundup of what data and analytics technology providers are predicting for the year ahead. Now it’s our turn to pick up where he left off, and we’re going to drill down behind the headlines and spotlight several sleeper themes that will be driving the agenda for data and analytics in the year ahead.
We’ve got a lot to talk about, and because of that, our annual look-ahead post will appear in two parts. Today, we’re looking at how the dialogue around cloud computing will start changing this year. Tomorrow, we’ll shift our attention to a couple of emerging issues that will shape and constrain data and analytics: the quest to make AI explainable, and an emerging debate in data management on the merits of multi-model vs. specialized databases.
Generational change in back office systems is the next phase in cloud adoption
As shown in the diagram, we’ve traced the stages of modern cloud adoption, from its embrace by developers in the early days, to the Cambrian explosion of mobile apps (and the need for a place to deploy them) after the opening of the iPhone App Store in 2008. Today, enterprises are opportunistically embracing specialized analytic, AI, and SaaS applications. So, what’s next? Organizations are looking ahead to the next major turning point: core enterprise applications.
Make no mistake about it, the cloud transition will prove just as disruptive to back-end enterprise systems as the run-up to Y2K (which put most of these systems in place) did. The trigger is that SAP and other enterprise systems vendors have started the countdown clocks to end of life for these 1990s-era applications. The news didn’t suddenly break last year: SAP announced back in 2014 that support for ECC, the generation of ERP system rooted in R/3, would reach end of life in 2025. But the turn of the new decade, with a hard deadline barely five years off, has a way of concentrating the mind.
Typically, these are the last applications that businesses want to touch because of the risk of disruption – akin to pain without the gain. It’s no wonder that over the past 20 years, apps targeting hot button issues like customer retention or cybersecurity have been higher up the priority list.
Not all of the EOL announcements are necessarily forced marches to the cloud. But the upcoming decision point is prompting many enterprises to confront the question of whether their core back-end transaction systems belong in the cloud.
Cloud deployment won’t be a binary decision
What’s the allure of the cloud? In our discussions with enterprises, we’ve found that there is clearly an appetite for the operational simplicity, flexibility, agility, and fast time-to-benefit that cloud-native deployment can bring.
We’ve characterized the issue as a chicken-and-egg scenario: enterprises expect their IT organizations to deliver services as readily and efficiently as cloud providers, while IT organizations struggling to keep the lights on are looking for the secret sauce that can help them become as efficient and responsive as cloud providers.
And that’s where the choice between continuing on-premises deployment, moving to a public cloud, or adopting some in-between private or hybrid cloud path becomes the issue. The choice depends on the role that the organization wants IT and/or the cloud provider to play in managing and running the systems, and where the data and apps should physically reside.
The explosive growth of the public cloud attests to its viability for many enterprise use cases. But for many organizations, there may be limits to whether they can or are willing to entrust their back-office heartbeat systems to the public cloud. Use of the public cloud might be suitable only under certain conditions. Or in many cases, the public cloud may not be practical. For instance, emerging data sovereignty laws are increasingly restricting where data can physically be stored. Given that the most expansive cloud network “only” spans 55 regions today, that means that only a minority of the nearly 200 independent countries in the world have public cloud data centers located within their territory. That will be a show-stopper for countries requiring data to be stored inside their borders.
As necessity is the mother of invention, there’s good reason why 2019 was the year that each major cloud provider announced their hybrid strategies. The necessity was the need for cloud convenience; the invention was the emergence of Kubernetes, which as Andrew pointed out, has made cloud mobility possible because it provides the scaffolding for making all those containers come alive.
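To make the Kubernetes point concrete, here is a minimal sketch of why it enables that mobility: the same declarative manifest describes a containerized workload identically whether the cluster runs in your data center or on any public cloud. (The workload name and container image below are hypothetical, for illustration only.)

```yaml
# Illustrative Kubernetes Deployment manifest (hypothetical names/image).
# The same spec can be applied unchanged to an on-premises cluster or to a
# managed Kubernetes service on any public cloud -- the Kubernetes control
# plane, not the underlying infrastructure, interprets it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service            # hypothetical workload name
spec:
  replicas: 3                     # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
      - name: orders-service
        image: registry.example.com/orders-service:1.4   # hypothetical image
        ports:
        - containerPort: 8080
```

Applying this with `kubectl apply -f deployment.yaml` behaves the same against any conformant cluster, which is exactly the scaffolding for portability that makes hybrid strategies viable.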
One size of hybrid cloud won’t fit all
What’s significant is that there is no single form of hybrid cloud. One good reason for the differentiation is the question of single-cloud vs. multi-cloud strategy. In most cases, enterprises are multi-cloud not by strategy, but by default. Just as most enterprises have one of everything scattered around their IT portfolios, the same scenario is already repeating itself with cloud. Most will use different clouds, with the decisions more often than not made at the business unit or department level. And the choice of whether to officially commit to a single cloud or hedge bets across multiple clouds is one major factor that will drive the choice of which hybrid cloud platform(s) to adopt.
Amazon’s and Oracle’s hybrid offerings typically involve the cloud provider dropping its own hardware in your data center and managing it. There are OEM approaches using third-party certified hardware, and then there are software-centric approaches from Google and Microsoft, where either your IT department or a third-party consulting firm handles the management. Then there’s IBM, which placed a $34 billion bet on Red Hat’s OpenShift, which is supposed to be cloud-agnostic, plus other approaches from hyperconverged infrastructure vendors like Dell, HPE, and Nutanix.
With this smorgasbord of public, private, and hybrid cloud options, what’s the common thread? It’s the operational simplicity of the cloud control plane, where resources are virtualized into generic building blocks and marshalled on demand to deliver compute. It’s the cloud control plane that enterprises are demanding and that’s going to change IT decision making.
Enter the Hybrid Default
Let’s cut to the chase. In the 2020s, we expect the process of making system deployment decisions to start reversing.
Excluding those “born in the cloud” apps such as mobile or IoT, until now the core assumption for deploying enterprise systems has been that they would run on premises. Then the question would arise as to whether cloud deployment was feasible, and if so, cloud deployment would have to be justified.
But with enterprises facing forks in the road regarding their core back-end systems, decision making will change. Instead, the starting assumption for new or upgraded apps will be the operational simplicity of cloud deployment. We don’t expect that 100% of all data and applications will move to the cloud, but, as noted, the starting point will change.
The first step will be choosing where and how to use cloud-native deployment based on whether the data must stay on premises or not. Thanks to the growing variety of hybrid cloud options, the choice of cloud deployment will not be binary. For the purpose of discussion, we’re referring to “hybrid” as being the superset of public and/or private cloud deployment.
Then the next decision point will be how to manage it: what role will IT take, and what role does it expect the cloud providers to play? Will the cloud technology provider take all the management reins or simply supply the technology? Or will there be some in-between option? Or will the enterprise still follow the traditional route of on-premises deployment, where it procures, deploys, and manages the entire works?
Here’s the punchline. In the 2020s, we believe that traditional on-premises deployment, not cloud, will require the justification. Welcome to the new world of the Hybrid Default.