Six months after AWS’s announcement of a rival service, it’s clear that MongoDB’s Atlas managed cloud service maintains real momentum. While it’s a bit early to judge the impact of AWS’s service, MongoDB’s latest quarterly results, announced a few weeks back, showed Atlas now accounting for 35% of overall revenues, and up 340% year over year. While a good chunk of this came through the mLab acquisition last fall that added a long tail of thousands of smaller-scale self-service users, organic growth accounted for the lion’s share of revenue.
Growth is not a stranger to Mongo. The company has shared a similar trajectory with players like MySQL and, before that, SQL Server, both of which began with developers. SQL Server was viewed as highly accessible to VB developers, while MySQL took that a step further with open source as part of the LAMP stack of the early 2000s. 2018 was a year of growing pains for MongoDB as it encountered challenges stemming from its very success: imitation, not as a form of flattery, but as a diversion from its growth path – and potentially, from its developer base. MongoDB is not the only one that has changed its open source license, but almost a year after making its own changes, Atlas is continuing to redefine the company.
Stephanie Condon provided a comprehensive blow-by-blow of the announcements that streamed from its annual MongoDB World conference last week. The laundry list encompassed full text search, direct query to cloud storage, autoscaling for the Atlas service, plans to integrate Realm with Stitch, field-level encryption, and multi-document ACID transactions, among others.
MongoDB, like most database providers, is pivoting to a cloud-first strategy in which new features are introduced to the managed service before they turn up in the on-premises edition. The company currently maintains roughly a three-week cadence for cloud updates, while keeping on-premises releases to annual major refreshes. Some features will be implemented differently in the cloud. For instance, as MongoDB integrates Kubernetes operators to make its service more portable across clouds, it plans to add support in its management products, Cloud Manager and Ops Manager, for cloud and on-premises deployment, respectively. But in the Atlas service, the experience will be more guided, with a higher-level, declarative approach.
The fallout from last year’s licensing change to the Server Side Public License (SSPL), which was designed to discourage third-party providers from commercializing the community edition of MongoDB, is that there will be, in effect, three tiers of MongoDB services in the cloud. It won’t simply be a matter of MongoDB vs. all third-party clouds.
First, there is the emulation approach, which layers a MongoDB 3.6-compatible API (predating the current 4.x generation) on different storage engines; Amazon DocumentDB and Microsoft Azure Cosmos DB take this route. Providers of these services point to the 80/20 rule, claiming that they offer the most common functions used by MongoDB developers. MongoDB counters that the technology base for these services will grow increasingly dated; the likelihood is that new MongoDB features that prove popular will get proprietary equivalents.
But there are also third-party cloud providers that will offer the latest MongoDB core platform features. These are second-tier cloud providers, such as IBM and SAP, that offer fully licensed commercial versions of MongoDB. They can offer MongoDB as a managed service, but it won’t be Atlas; Atlas itself is only supported by MongoDB on the AWS, Microsoft Azure, and Google Cloud platforms. Beyond the MongoDB brand, there will be additional services available only on Atlas, such as MongoDB Data Lake (which, despite its name, is not a managed data lake per se, but a direct query service against data sitting in cloud object storage, the de facto data lake).
But as we noted last year, for MongoDB to get taken seriously as an enterprise database, it had to pay attention to some of the less sexy stuff. Early on, it was the aggregation framework, which makes queries more versatile by chaining operations for filtering and grouping entities. More recently, it has been transaction processing, a journey that began with the phasing in of the WiredTiger storage engine. The most recent pieces, including multi-document transactions that debuted in the 4.0 release, followed by last week’s announcement of distributed transactions coming in 4.2, are parts of the puzzle for getting MongoDB considered for more mission-critical applications.
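To make the aggregation framework concrete: it expresses a query as a pipeline of stages, each transforming the documents passed to the next. A minimal sketch follows; the "orders" collection and its field names are invented for illustration, not drawn from any specific deployment.

```python
# Hypothetical aggregation pipeline: filter shipped orders, total the
# amounts per customer, then sort biggest spenders first. Each dict is
# one pipeline stage.
pipeline = [
    {"$match": {"status": "shipped"}},            # filter stage
    {"$group": {"_id": "$customer_id",            # group by customer
                "total": {"$sum": "$amount"}}},   # sum order amounts
    {"$sort": {"total": -1}},                     # descending by total
]

# Against a live database this would run as, e.g. with pymongo:
#   results = db.orders.aggregate(pipeline)
```

The pipeline itself is just data, which is part of the framework's appeal: stages can be composed, inspected, or generated programmatically before being sent to the server.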
And that led to the true sleeper for us. A large property and casualty insurer had been seeking to re-platform its mainframe-based policy management systems. Back at the turn of the millennium, conventional wisdom was that relational databases would be the end state for enterprise databases. But a number of mainframe data stores proved resistant, because fitting their complex, hierarchical, or networked data into tables would have made the relational transition a square-peg-in-round-hole problem. And so, in banking and finance, many of these legacy databases continue to survive, and IBM keeps putting out new mainframes. But this insurer viewed the JSON document model as a way to skip the relational generation, and re-platformed its core policy management system onto MongoDB.
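The fit between hierarchical mainframe records and JSON documents can be sketched briefly. In the hypothetical policy below (all field names are invented, not from the insurer's actual schema), the nested structure lives in one document, whereas a relational design would decompose it into several tables joined by foreign keys.

```python
import json

# Hypothetical insurance policy as a single JSON document. A relational
# model would split this into policies, holders, and coverages tables;
# the document model keeps the hierarchy intact.
policy = {
    "policy_id": "P-1001",
    "holder": {"name": "J. Smith", "state": "NY"},
    "coverages": [
        {"type": "collision", "limit": 50000, "deductible": 500},
        {"type": "liability", "limit": 300000},
    ],
}

doc = json.dumps(policy)  # serialized form, stored as one document
```

This one-record-per-policy shape is close to how hierarchical mainframe databases already organize data, which is what makes the document model a plausible landing zone for such migrations.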
We’ve associated MongoDB with operational systems for digital online applications. But for legacy modernization? Who knew?