There’s little question that when it comes to cloud, for most organizations, multi-cloud is already reality. According to Flexera’s 2020 State of the Cloud report, 93% of enterprise respondents reported having multi-cloud strategies. It’s been well covered in these pages and elsewhere. In his assessment of Google Cloud’s strategy, published just last week, ZDNet colleague Dion Hinchcliffe noted that multi-cloud is at the heart of the cloud challenger’s strategy. And in our 2020 assessment of hybrid cloud infrastructure platforms, we noted that Google is hardly alone among cloud providers in seeking to expand their footprints into foreign territory.
So, why are so many organizations formally or informally embracing multi-cloud strategies? There are many possible answers. The most commonly cited is fear of cloud vendor lock-in, but hold that thought, because from what we’ve seen, inertia tops the list.
It’s nothing new for enterprises to have one of everything in their technology portfolios. Corporate IT often sets a standard, but rarely are those standards followed religiously throughout the entire organization. There are years of shadow IT behind this, starting from the days when PCs came in through the back door via departmental purchase orders, often in spite of IT, or to bypass IT backlogs and the bottlenecks of centralized procurement. And of course, there are organizations that are the product of M&A; it sometimes takes years for acquired units to migrate off their old systems.
So it shouldn’t be surprising that cloud adoption is just the latest manifestation of organizations having varied technology portfolios. Why should cloud be any different? In many organizations, cloud adoption began with departmental AppDev teams tactically running dev/test workloads because that was far more expedient than buying dedicated hardware. Then came the Cambrian explosion of mobile apps following the rollout of the iPhone App Store, apps that were often implemented by product marketing rather than corporate IT. Since then, there has been growing adoption of SaaS and AutoML services that were only available in the cloud, not to mention operational applications and analytics running against data that only resided in the cloud.
In some cases, varying cloud choices might be attributable to application preferences, such as Azure’s alliances with SAP (with its next-generation S/4HANA business applications), SAS (with Viya analytics), or Databricks (via an OEM agreement); or the open source database providers that Google has made first-class citizens through its open source database partnership program. In other cases, it might be for performance or data sovereignty reasons, if a specific cloud provider has a region that is physically closer to, or inside the same country as, the data. However, this distinction is likely to be fleeting as each of the major cloud providers expands its global footprint to become ubiquitous across geographies.
In some heavily regulated sectors, such as financial services, there may actually be requirements to avoid dependence on any single cloud provider, specifying that a second cloud provider be used for disaster recovery purposes.
There are few certainties in today’s economy, but it is clear that the current pandemic is accelerating existing trends toward more cloud adoption. With the economy in upheaval, most enterprises are reassessing their core products and services given the shift toward digital business. That dictates more attention to the business minus the distractions of keeping the lights on, which is where the cloud comes in. That means a renewed look at whether those back-end systems resistant to cloud migration are finally going to move. It also means taking advantage of cloud services, such as machine learning, customer engagement, and analytics, that organizations can use to jumpstart new lines of business.
In most cases, enterprises will likely run specific systems in specific clouds. For instance, they might run some mobile apps in AWS, the CRM system in Azure, then look to Google Cloud for some of its AI capabilities. Or the delineation of clouds may be driven by business unit.
But what about another scenario? Are enterprises likely to run a single application or database across multiple cloud providers? In most cases, we’re pretty dubious. The challenges include egress costs (charges for extracting data out of a cloud); network latency; cloud vendor-specific APIs; and security and management silos that counteract one of the strongest draws of the cloud: the opportunity for a simplified, uniform control plane. Even if egress costs are factored out (we could imagine that providers of multi-cloud services could get creative here), you still have the security, management, and integration overhead of running across multiple clouds.
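To make the egress-cost objection concrete, here is a back-of-the-envelope sketch in Python. The per-GB rate and daily replication volume are illustrative placeholders we chose for the example, not any provider’s published pricing:

```python
# Back-of-the-envelope egress cost for replicating a database across clouds.
# Both constants below are hypothetical figures for illustration only.
EGRESS_PER_GB = 0.09          # assumed $/GB to move data out of a cloud
DAILY_REPLICATION_GB = 500    # assumed change volume shipped to a second cloud

def monthly_egress_cost(gb_per_day: float, rate_per_gb: float, days: int = 30) -> float:
    """Cost of continuously shipping data out of one cloud into another."""
    return gb_per_day * rate_per_gb * days

cost = monthly_egress_cost(DAILY_REPLICATION_GB, EGRESS_PER_GB)
print(f"Hypothetical monthly egress bill: ${cost:,.2f}")
```

Even at these modest assumed volumes, the meter runs continuously, which is why a single database federated across clouds tends to be uneconomical before latency and security overhead are even considered.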
In essence, running a single logical instance of a database across multiple cloud providers is akin to running a single logical instance of a transaction application across multiple databases. Maybe not impossible, but would it be advisable?
For most organizations that adhere to running systems in single clouds, there will still be variance in control planes, but at least different clouds can be treated individually, just as you would treat different database platforms across your enterprise. You won’t necessarily run them together, so you won’t have that management complexity.
But then, there’s Kubernetes (K8s). Could its emergence provide that elusive control plane uniformity across different cloud providers so that everything you run in the cloud will look and act the same, regardless of cloud provider?
The draw of K8s is clearly portability of applications rather than databases. Nonetheless, databases can be wrapped into containers, and they can provide operators that call ancillary processes, such as authentication or metrics gathering, that are containerized and run as microservices marshaled in a K8s environment. And with K8s, the APIs are harmonized.
While K8s can enable cloud databases to interoperate across clouds, it doesn’t address the management and security complexity of running across discrete environments, each with its own authorization and authentication; logging, monitoring, and metrics; and encryption regimes. Yes, security can be declarative with K8s, and K8s provides interoperability across clouds. But it doesn’t necessarily harmonize the underlying control planes unless you mix and match operators under the hood. If you want the management simplicity of the cloud across multiple clouds, the choice will inevitably be layering on a third-party tool or framework.
So, what to make of Google’s positioning itself as the most multi-cloud-friendly cloud provider? Google is promoting Anthos as the pillar of this strategy. Beyond Google Cloud, Anthos now runs in AWS, with Azure on the way. Anthos repackages Google Kubernetes Engine (GKE) and related components for cluster and multi-cluster management, configuration management, service mesh, logging, and other functions required for running a cloud-native environment. That should enable your applications and databases to run either in your own private cloud or as an instance in a rival cloud. Google has just announced extensions this week that accommodate existing identity and access management systems, making Anthos more Google Cloud-independent, along with a beta “Bare Metal” option that will allow Anthos customers to move off VMware.
Google is not the only one in this game: IBM is also aggressively pushing the portability of Red Hat OpenShift (on which its Cloud Paks are built), and while nothing has been announced, we could well expect Microsoft to make Azure Arc, if implemented with a Kubernetes control plane, similarly portable. And Confluent, with its 6.0 platform, is introducing cluster linking so you can connect Kafka clusters across multiple data centers and geographies, while its Confluent Cloud service will let you operate a virtual Kafka cloud spanning multiple clouds.
Doubling down on multi-cloud, Google has just released BigQuery Omni, so you can run Google’s data warehouse anywhere that Anthos runs, whether that be in your own private or hybrid cloud, in the data center or on the edge, or in another public cloud. The core notion of BigQuery Omni is that you can run it locally, where the data is, without having to move data back to Google Cloud. But it could also conceivably let you run a single federated implementation of the data warehouse across all clouds that Anthos supports. It’s Google’s approach to pushing analytics down to where the data resides, with the operating assumption that if any data is sent back to the home base on Google Cloud, it would only be the result sets.
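The pushdown idea can be sketched generically: compute the aggregate next to where the data lives, and ship only the small result set back to the home cloud. This is a conceptual illustration of the pattern, not BigQuery Omni’s actual API, and the data and field names are hypothetical:

```python
# Conceptual sketch of analytics pushdown: aggregate locally, ship only results.
# Illustrates the idea behind BigQuery Omni; this is not its actual API.
from collections import defaultdict

def local_aggregate(rows, group_key, value_key):
    """Runs next to the data (e.g., in a rival cloud's region) and returns
    only a small summary instead of shipping the raw rows back."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[group_key]] += row[value_key]
    return dict(totals)

# Hypothetical raw data residing in another cloud -- it never leaves that cloud.
rows_in_other_cloud = [
    {"region": "eu-west", "sales": 120.0},
    {"region": "eu-west", "sales": 80.0},
    {"region": "us-east", "sales": 200.0},
]

# Only this result set crosses the wire back to the "home" cloud.
result_set = local_aggregate(rows_in_other_cloud, "region", "sales")
print(result_set)  # {'eu-west': 200.0, 'us-east': 200.0}
```

The economics follow directly: the raw rows (and their egress charges) stay put, while only a few bytes of summary travel between clouds.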
On a recent analyst call, a BigQuery customer considering adding Omni said it views Omni as more of an edge analytics deployment that would run in a hybrid cloud on site, then feed results back to the central implementation running on GCP.
To us, the myth of multi-cloud is the expectation that multiple clouds can look like, and be run as, a single logical entity. The truth is that clouds are platforms, and even with standards, they will still have their differences, much like SQL databases. The truth about multi-cloud is that it will be a way for enterprises to spread their bets.
So, in the overwhelming majority of cases, multi-cloud won’t be about running federated databases or applications across two or more clouds. Instead, multi-cloud strategy will be about freedom of cloud choice: what will run, where. While there is always the unicorn that makes public pronouncements of its choice of a single strategic cloud provider (and these are usually reference customers who get celebrity treatment), our take is that they will be the exception.
Unless your IT organization conducts the technology equivalent of living off the grid, making everything homegrown above the operating system layer, you will have to make a critical path vendor choice at some level of the stack. Ultimately that’s going to encompass one or, more likely, multiple clouds.