Enterprises are evaluating private PaaS along a few dimensions. The obvious evaluation vectors focus on feature sets, extensibility, and credibility. Another that comes up is “lock-in.” One common sentiment, at least among OSS PaaS vendors, is that OSS somehow removes or reduces lock-in. Not only is that untrue, it distracts attention from the more important question of how to evaluate and measure lock-in. Unfortunately, most vendors, especially those masquerading as OSS, are selling customers a bunch of LIES – Locked-In Enterprise Stacks. I think it’s important that enterprises considering private PaaS ask the right questions.
No More LIES!
The reason I’m even writing this is that customers are asking a TON of questions about independence – not OSS, but genuine independence and freedom from infrastructure and other dependencies. I recently wrote a post about how traditional platform vendors are getting into the PaaS game but letting their old middleware businesses dictate how they approach modern product strategy. I also wrote about how PaaS won’t be folded into IaaS. Each of these posts is essentially a discussion of the loss of freedom in building a PaaS-driven datacenter environment.
This is the reason why, when I see things like Pivotal CF™ advertising “infrastructure independence” in their open source project while their commercial project has a hardwired VMware dependency, I cringe. It’s a “wave one hand, sucker punch with the other” strategy that is built to drive more sales of other components in their stack and, as discussed in the other posts I linked to, a natural consequence of letting legacy products drive future strategy.
Time For Some (Hopefully) Educated Discussion On The Topic
Let’s start by really understanding the concept of lock-in in the context of evaluating enterprise PaaS technologies, and what it means to an enterprise buyer’s PaaS strategy. A good framework for defining the dimensions of lock-in is to think of the IT supply chain and how lock-in risk impacts it.
Although the concepts of Supply Chain Risk Management (SCRM) typically apply to supply chains for physical goods, let’s not fool ourselves: IT departments manage very sophisticated supply chains to create IT service offerings for their customers (business end users, developers, etc.). Part of the IT supply chain is the set of standard application stacks (mainframe, Java, .NET, LAMP, etc.).
Each of these stacks is a sequence of components running from the bottom of the stack up to the application, with all supporting tiers in between. If a stack is rigid, meaning its components cannot be sourced from alternate suppliers, risk increases. The risk profile of a stack is partly dictated by the ability to alternatively source components within it. For example, if I have tons of WAR files, I can dump WebSphere and replace it with Tomcat while keeping my apps and hardware the same; that is a lower-risk stack than one where the app server and the hardware are tied together. If you are tied to a specific component in the stack and cannot replace it with another (e.g., swap WebSphere for Tomcat), you have been hurt by lateral risk. If you cannot replace part of the stack without replacing at least one other component, you have vertical risk, which is essentially a consequence of buying component A and finding that its downstream dependencies are now locked in as well.
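To make the lateral/vertical distinction concrete, here is a minimal sketch in Python. The component names, supplier lists, and forced dependencies are hypothetical examples, not a formal SCRM method: a component with only one possible supplier carries lateral risk, and a component whose purchase forces another component carries vertical risk.

```python
# Hypothetical model of an IT stack for reasoning about lock-in risk.
# Each component maps to the alternate suppliers that could fill it.
alternates = {
    "app_server": ["WebSphere", "Tomcat", "JBoss"],  # swappable: low lateral risk
    "hypervisor": ["VMware"],                        # sole-sourced: lateral lock-in
}

# Hard dependencies: choosing the key forces the value (vertical lock-in),
# e.g., a PaaS that only runs on one specific hypervisor.
forced_deps = {
    "paas": "hypervisor",
}

def lateral_risks(alternates):
    """Components that cannot be sourced from more than one supplier."""
    return [c for c, suppliers in alternates.items() if len(suppliers) < 2]

def vertical_risks(forced_deps):
    """Pairs where buying one tier locks in another tier."""
    return list(forced_deps.items())

print(lateral_risks(alternates))   # only the sole-sourced hypervisor
print(vertical_risks(forced_deps)) # the PaaS drags the hypervisor with it
```

The point of the sketch is that the two risks are measured differently: lateral risk is a property of a single tier, while vertical risk is a property of the edges between tiers.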
So, Where Does OSS Come Into This?
One of the primary reasons I see enterprise buyers care about OSS in practice is as a means to mitigate risk. First, open source projects are perceived to carry less risk than their proprietary counterparts because a non-commercial entity with “durability” sits at the core of the technology. Second, open source means that more than one vendor will likely adopt the project at the core of a commercial offering, ensuring some sort of lateral compatibility – or at least that was supposed to happen. This tends to be the goal of most enterprises looking at open source: reduce lateral risk by increasing lateral movement across vendors.
The idea is that if multiple vendors share the same project DNA, customers can swap between them thanks to some inherent compatibility. It’s the reason companies like Pivotal CF™ manage open source projects like Cloud Foundry and establish partnerships with other vendors – it becomes a selling point for them. Unfortunately, it doesn’t work that well in practice.
It’s the reason why enterprises get all bent out of shape about Red Hat binary compatibility. Even within its own family tree (wow!), binary compatibility is nowhere near guaranteed. The primary beneficiary here is Red Hat, not the customer: Red Hat leverages this to show how “pervasive” their community is. Don’t get me wrong, managing lateral risk is real and important, and proprietary vendors are at a deficit here, but the techniques used by supposed “open” vendors aren’t genuinely helping the customer address this risk. While lateral risk is real and is a consequence of lock-in at a single component of the supply chain, it can be remedied by adopting things like API standards and adhering to a standards surface area.
The real risk for lock-in is when a technology requires a chain of specific dependencies that can create full vertical lock-in and risk. Pivotal’s reliance on VMware spreads Locked-In Enterprise Stacks (LIES). OpenShift’s reliance on Red Hat OpenStack for even subpar high availability also spreads Locked-In Enterprise Stacks (LIES).
When it comes to managing risk for PaaS
I think enterprises should care more about full stack independence to manage vertical risk. Why? When a technology is vertically prescriptive (e.g., Pivotal CF™ One requiring VMware), it undermines an organization’s ability to manage supply chain risk by sourcing alternate vendors at other parts of the stack. The customer loses cost-control leverage, loses feature leverage, and can be held hostage as overall stack quality degrades. By optimizing for vertical independence, an enterprise ensures that vendor selection at any tier of the IT stack does not dictate which vendors it uses above and below that tier.
This is no different from managing risk in a physical supply chain. If a component of your supply chain locks you into a rigid supply model, you are beholden to a third party to shape your supply strategy. Imagine, for example, that buying something like WebSphere required IBM servers, a specialized IBM OS, and a specific HD array. That wouldn’t be so good, now would it? This is exactly the problem in the PaaS market when vendors like Pivotal CF™ (essentially an arm of market Goliath EMC) require VMware, and it’s even worse when they use their community project to market “independence.” That claim of independence is just more LIES.
A customer should be extremely cautious when it comes to on-premises PaaS selection
For vertical independence, enterprises should require that vendors demonstrate true architectural independence AND have no political or economic bias toward forcing the use of other vendors. Lateral lock-in should be mitigated by ensuring that basic PaaS value is transparent to the guest application AND that the PaaS, as standards emerge, adheres to standard PaaS and guest app APIs. A good tool for evaluating risk is to plot these two dimensions in a matrix and assess private PaaS software based on their interplay.
By evaluating a PaaS stack through this lens, enterprises can have a normalized definition and view of what “lock-in” means so they can avoid LIES and pursue an IT future that looks nothing like their locked-in IT past.