
Cloud and the ‘Existing X’ Dilemma


By Atos Apprenda Support


Anyone designing an IT strategy that incorporates cloud is faced with the question of what to do with existing assets: apps, infrastructure, IT systems, processes, skills, and so on. Some will tell you that the only way to handle the “existing X” parts of your strategy is to forget about them and let those investments languish. They’ll tell you those assets are incompatible with a cloud world and its operating model. Let that sink in. You have potentially billions of dollars invested in your current IT infrastructure, thousands of applications generating billions of dollars of bottom-line impact, and thousands of employees who’ve never touched anything cloud-related, let alone a public cloud.

When you ask proponents of the “all assets left behind” strategy, “How do I pull this off?”, they’ll hand-wave, say “just look at Netflix’s architecture” five times in a single meeting, and then try to sell you a platform and the consulting services to migrate your existing apps onto their stack, at ten times what you spent on software licensing for that platform.

Oh, and they’ll use the word “legacy” to describe applications written even last year, depositioning your investments in a friendly-sounding way, despite the fact that these “legacy” apps power the payroll system that cuts the checks that pay them to say it. That’s not an answer for companies trying to solve the “existing X” dilemma. That’s a services revenue stream for vendors and a way to drive unparalleled stickiness to their tech stack.

The actual answer is to embrace the existing parts of your IT stack and app portfolio in a productive and profitable way, as a bridge to a future “cloud-native world.” Let me be clear: I believe the future is powered by cloud-native app architectures (and Netflix has done some amazing work showing what that looks like), but you won’t get there if you can’t use what you have as a platform to drive cultural change.

Before putting this in an IT context, let me explain by analogy. Some of the first proposals for autonomous vehicles were designs in which the vehicles were outfitted with sensors and transponders that communicated with electronic counterparts embedded in the roadways. By communicating with these “active” roadways, the vehicles could guide themselves as safely as possible, given the clear delineation of boundaries such as roadways, lanes, and direction.

Additionally, in a closed system where all the vehicles were autonomous and homogeneous in design, the vehicles could communicate with each other to ensure proper coordination. It was safe, it made sense, and it was a “native” approach, since the roads and cars were designed to work together in harmony. It was also highly impractical.

In order to deploy this native approach, we’d be faced with two options:

  1. Refit all roads with transponders and sensors the vehicles could communicate with, but do so in a way that lets current vehicles (or, as we’d label them in IT, our “legacy” cars) that aren’t autonomous keep functioning without damaging the equipment required for autonomous operation.
  2. Build a set of roadways parallel to the existing ones. The existing roads would serve non-autonomous vehicles, while the new roads would serve autonomous ones.

Guess what option the world chose? Neither of them. Why? Because of their impracticality. The cost of either option would be astronomical. Today, the U.S. alone has roughly 4 million miles of public road! Even at a nominal, fully burdened cost of $1,000 per mile, retrofitting them would run $4 billion, and real road construction costs far more than that: a normal two-lane road comes in at around $100,000 per mile, and a mile of four-lane highway at roughly $1 million, which would push a realistic retrofit into the hundreds of billions, if not trillions, of dollars. Oh, and not to mention the herculean effort the migration itself would require.
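If you want to sanity-check that back-of-the-envelope math, here’s a quick Python sketch. The road mileage and per-mile costs are the rough public estimates quoted above, not figures tied to any particular vendor or program:

    # Back-of-envelope cost of retrofitting U.S. roads for "active" autonomy.
    # Assumptions: ~4 million miles of public road (a rough FHWA-style figure)
    # and per-mile costs ranging from a nominal $1,000 up to the ~$1 million it
    # takes to build a mile of four-lane highway.
    US_ROAD_MILES = 4_000_000

    for cost_per_mile in (1_000, 100_000, 1_000_000):
        total = US_ROAD_MILES * cost_per_mile
        print(f"${cost_per_mile:>9,}/mile -> ${total / 1e9:,.0f} billion total")

    # $    1,000/mile -> $4 billion total
    # $  100,000/mile -> $400 billion total
    # $1,000,000/mile -> $4,000 billion total

Even the friendliest assumptions land in the billions, and anything resembling real construction costs lands in the trillions.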

The world chose instead to adopt autonomous vehicles that use sophisticated technology to drive on existing infrastructure alongside non-autonomous vehicles. These vehicles rely on computer vision, machine learning, and other techniques to adapt to their surroundings. Modern autonomous vehicles embrace the existing everything and manage to succeed despite the heterogeneity. It works because it’s practical.

Let’s go back to IT. In my years at Apprenda, I haven’t met many corporate executives who believe they can either:

  1. Refit their entire IT portfolio – infrastructure and apps – to match a pure cloud end state.
  2. Isolate their existing investments and build a totally new stack that has no interaction with the technology or processes they already have.

It’s no surprise why: both options are impractical and unnecessary. At Apprenda, we’ve invested a tremendous amount of energy in ensuring that our platform can run on existing infrastructure (including those “legacy” OSes like Windows and RHEL) and that it can welcome as many of the existing apps in your portfolio as possible. We call this practicality. It means you can start the journey to cloud by operationalizing what you already have to generate ROI, while laying the foundation for the cloud-native architectures that will power the future of your business and IT strategy.

We’re not alone in this belief. Earlier this year, we made a strategic bet on Kubernetes as one of Apprenda’s technological underpinnings, and we then acquired a company to further that goal. We did so because we saw in Kubernetes an architecture consistent with the belief that “existing X” matters. The community behind Kubernetes is following through by building new capabilities like PetSets and real Windows support.

If you’re thinking about cloud and believe the only option is cloud native, you’ll find yourself stuck in a dilemma defined by impracticality. Let’s learn from autonomous vehicles: the burden of supporting “existing X” belongs with us, the technology vendors, and it shouldn’t be written off as a necessary compromise in your IT strategy.
