The Disingenuous Nature of VM-Based Pricing for Enterprise PaaS

By Atos Apprenda Support


I’ve always been fascinated by pricing models. These models can have a huge impact on the realized value of a good or service, and they define a big portion of the risk associated with consuming it. Pricing models can put the customer and vendor at odds with each other but, when done right, can be amazing vehicles for driving win-win scenarios for vendors and their customers.

Given that enterprises are actively pursuing Platform as a Service (PaaS) evaluations and purchases, it’s no surprise that Apprenda is getting more inquiries than ever regarding our pricing methodology. Apprenda, at the technical level, is a distributed fabric that stitches together an arbitrary number of Linux and Windows OS instances into a single resource pool for deploying, running, and managing multi-tier apps. The capacity of an Apprenda instance is entirely defined by how many OS nodes are contributing compute, memory, and storage to that Apprenda instance. Our pricing aligns with this.

The cost of an Apprenda license is based on the total memory, in gigabytes, contributed by the OS nodes in that fabric instance. For example, if a customer is running 10 nodes (some mix of Linux and Windows), each with 24GB of RAM, they will secure a 240GB Apprenda subscription. If that customer instead had 20 nodes, each with 12GB, it’s still a 240GB license.
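
The arithmetic is deliberately trivial. As a minimal sketch (illustrative only, not Apprenda's actual billing code), the subscription size depends only on aggregate memory, not on how that memory is split across nodes:

```python
# Illustrative sketch of memory-based licensing (not Apprenda's billing code).
# The license size is the sum of RAM contributed by every node in the fabric,
# regardless of how many nodes that RAM is spread across.

def license_gb(node_memories_gb):
    """Subscription size is the aggregate memory of all contributing nodes."""
    return sum(node_memories_gb)

# 10 nodes with 24GB each and 20 nodes with 12GB each
# both yield the same 240GB subscription:
print(license_gb([24] * 10))  # 240
print(license_gb([12] * 20))  # 240
```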

Why is our licensing this way? Years ago we licensed Apprenda based on the number of nodes. (Note that I’m not saying VMs; this will matter later in the post.) We learned that, in practice, counting nodes alone was an awful model. It didn’t align well with the value we generated, and it certainly didn’t align well with what customers want to do with their IT strategy.

Let’s start with the value discussion. There are five important principles:

  1. Applications are typically memory bound. As such, quantifying an application footprint tends to be most straightforward if we use memory as a lowest common denominator.
  2. Whether an organization has lots of small apps, a few very big apps, or a mix of the two, memory tends to properly quantify the “size” of an application portfolio.
  3. A well-built, modern PaaS should be application centric. Any cost associated with a PaaS should closely mirror the application footprint, and thereby the value generated by that application portfolio. It should not be shaped by infrastructure decisions.
  4. Memory is “fair” in that it’s hard for both the vendor and the customer to cheat. Memory is hard to oversubscribe without serious negative consequences, thereby providing the stability one would expect from any good unit of measure.
  5. Memory-based pricing doesn’t influence infrastructure architecture in a negative way. Whether someone decides to have lots of small apps or a few big ones, the only thing counted is aggregate memory.

These five principles provide a reasonable foundation for why memory-based pricing makes sense and more closely reflects application value. But now let’s dig into something more controversial: the assertion that VM-based pricing burdens PaaS customers with unnecessary inflexibility and, in many cases, exists to propagate old technology in new packaging.

For any PaaS with a VM-based pricing model, the unit of measurement is the VM-unit. That is, for any VMs that are activated in support of the PaaS or guest applications, a customer pays a license fee. For example, if an enterprise were charged $7,000 per VM per year, and ran 30 VMs in support of the PaaS and its guest applications, that would result in a $210,000 per year aggregate license fee.
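
The example's arithmetic can be sketched the same way (illustrative only; the $7,000 figure is the hypothetical fee from the text):

```python
# Illustrative VM-based pricing from the example in the text:
# a flat per-VM annual fee, independent of how large each VM is
# or how many applications it actually hosts.

PRICE_PER_VM_PER_YEAR = 7_000  # dollars, the hypothetical fee from the text

def annual_license_cost(vm_count):
    """Aggregate annual fee scales only with the number of VMs."""
    return vm_count * PRICE_PER_VM_PER_YEAR

print(annual_license_cost(30))  # 210000
```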

This model is problematic for a number of reasons:

  1. VM-based pricing is incongruent with the application-centric vision for PaaS. It levies a cost based not on value drivers (e.g. applications), but rather on raw infrastructure. This alone should give pause to anyone evaluating PaaS pricing.
  2. In this model, customers are not free to shape infrastructure-architecture decisions independently from PaaS licensing. For example, if an enterprise builds an initial private PaaS footprint with 20 VMs, each with 24GB of RAM, and then decides that it can achieve lower resource contention and higher system stability with 40 VMs of 12GB each, it has to pay the PaaS vendor for more VMs. Put another way, it’s a tax on making better design choices. This artificially and unfairly penalizes customers and handcuffs IT strategy through economic penalties imposed by the PaaS.
  3. Bare metal is not an option here. For many, the idea of a hypervisor sitting underneath a container-based PaaS is redundant. A well-built PaaS provides guest applications with portability across the infrastructure, high availability, etc. In this model, an enterprise is precluded from choosing an implementation where the PaaS sits atop Linux and Windows nodes running directly on the hardware.
  4. In this model, the complexity of planning for capacity and cost increases. Today’s PaaS offerings expose container-size options to developers. That is, as a developer, I’m presented options that map to the size of the OS processes (not exactly, but close enough) running my app components, not to VM sizes. A single VM may host multiple containers of different sizes. A PaaS packing strategy isn’t obvious, so as developers deploy more apps, how many VMs will I need, and how does that map to cost? This layer of complexity obfuscates the situation, whereas memory-based pricing maps directly to the domain model that developers use to assign resources to their apps.
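
The re-architecture tax in point 2 is easy to quantify. A minimal sketch, using the $7,000-per-VM figure from the example above and a made-up per-GB rate for the memory-based side:

```python
# Illustrative comparison: the same 480GB of total memory arranged as
# 20 x 24GB VMs or as 40 x 12GB VMs. The per-VM fee comes from the
# example in the text; the per-GB rate is a made-up figure.

PER_VM = 7_000  # $/VM/year, from the example above
PER_GB = 300    # $/GB/year, hypothetical, for illustration only

def vm_priced(vm_count):
    return vm_count * PER_VM

def memory_priced(total_gb):
    return total_gb * PER_GB

# Splitting into smaller VMs doubles the bill under VM-based pricing...
print(vm_priced(20), vm_priced(40))                    # 140000 280000
# ...but leaves it unchanged under memory-based pricing.
print(memory_priced(20 * 24), memory_priced(40 * 12))  # 144000 144000
```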

VM-based pricing puts the economics of PaaS at odds with the freedom and flexibility that customers need to execute a broad IT strategy. Simply put, vendors promoting a VM-based pricing model are either being illogical or being disingenuous, using PaaS pricing to drive revenue for virtualization-oriented businesses. By embracing VM-based pricing, these vendors aren’t focused on the best outcome for the customer; they’re focused on more effectively padding their pocketbooks.


Comments
  1. TechYogJosh, February 13, 2015

    It’s not uncommon to see vendors defending their pricing models, and I am sure each of you can give pros and cons about the others. This is one of the primary reasons why technology providers are infamous: they do not help the client, but live in a make-believe, self-pleasing world to prove the superiority of their solutions. An application will eventually use a VM (or Docker), so why VM-based pricing is bad is tough to understand. Moreover, a VM can be shut off and money can be saved. But yes, for Apprenda it does not matter, as you are not a public PaaS but a private PaaS and run on the client’s infrastructure. You should debate whether any of the private cloud principles (private cloud, private PaaS, private SaaS) are meaningful, or just a deceit to fool the client.

  2. Sinclair Schuller, February 13, 2015

    Well, I think you sort of helped make the point. If your pricing model is tied to the number of VMs, then you certainly won’t be pricing on something like the number of Docker containers (which would increase density). Second, an application rarely uses the full capacity of a VM, and most OSes are too “thick,” consuming significant resources just for the OS itself. If you price per VM but share that VM across multiple apps, it’s unclear how much of the VM’s cost is associated with each app.

    I think you’re confusing on-premises/off-premises with private/public. Private is a visibility construct, not a location. We have customers who install Apprenda atop Azure IaaS or AWS EC2, but it’s a PaaS instance dedicated to them, hence “private.” What you’re referring to is on/off-premises. In many, many industries, there is tremendous complexity in consuming off-premises resources. I wrote about it a while back here:

    An enterprise will, long term, have a hybrid model for cloud. Some applications, for various reasons, will stay on-premises. Given that this is likely true, having on-premises resources operate under a true cloud operating model will increase density (applications per infrastructure unit), reduce management burden (a typical enterprise has one app owner managing around 20 applications; a PaaS model changes that ratio to 1:thousands), reduce architecture and configuration drift, improve availability, etc.

  3. Kishorekumar, April 1, 2015

    What you have written is nothing new. You are talking about memory-based pricing as opposed to per-VM pricing. Isn’t that how you get charged by DigitalOcean and AWS (hourly, based on memory)?

    I am parking public vs. private for the moment and will look at it in an abstract way: how should a customer be charged?
    Heroku does per-VM pricing by calling the unit a dyno.

    Of course, in a PaaS model you can do VM-based pricing too, if you can take on the burden of the SLA (saying that by paying x, your app will withstand 10,000 requests/hour, with HA). GAE does it by saying you focus on your business and you can scale. That would be the ideal way to go; it just hasn’t matured in that area.

    On the private side, why can’t you still charge based on just usage, as TechYogJosh mentioned? We are not worried about how Apprenda gets installed, etc. But as a customer, I care how much I used that month. My 240GB of RAM can just sit there doing nothing; is it good to charge the customer for that month?
    This approach is nothing new; we are going back to the old days, when an IBM mainframe running on a customer’s premises was charged on the MIPS (millions of instructions per second) the customer consumed that month. A report is run and the customer gets charged only for the MIPS used each month.

    The pricing you mentioned is more like BEA WebLogic’s per-server-node pricing.

    Just my few cents. By the way, good article. We do PaaS (open source) and will launch a public cloud in India on April 30, 2015. Feel free to share your views.

  4. Sinclair Schuller, April 2, 2015

    There’s a lot to discuss here. First, yes and no to your question “Isn’t that how you get charged by DigitalOcean and AWS (hourly, based on memory)?” Most public IaaS vendors scale their VM instance-hour pricing linearly with memory, but that’s not the same as pricing by memory. The unit of granularity in AWS’s case, for example, is an instance-hour. If I have a small Windows web service that requires only 128MB of RAM, there is no option for me other than paying for the full overhead of a VM. So while memory is used to scale price, they do not price by memory.
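
    This instance-hour vs. memory-granularity distinction can be sketched with made-up numbers (the prices and the 1GB smallest-instance size below are hypothetical, for illustration only):

```python
import math

# Illustrative contrast (all prices and sizes are made up): instance-hour
# pricing vs. true memory pricing. Under instance-hour pricing you pay for
# the smallest VM that fits, even if your app needs far less memory.

SMALLEST_VM_MB = 1024        # hypothetical smallest instance size
CENTS_PER_INSTANCE_HOUR = 5  # hypothetical price for that instance
CENTS_PER_GB_HOUR = 5        # hypothetical price per GB under memory pricing

def instance_hour_cost_cents(app_memory_mb, hours):
    # You must rent whole VMs, so round up to the smallest count that fits.
    vms_needed = math.ceil(app_memory_mb / SMALLEST_VM_MB)
    return vms_needed * CENTS_PER_INSTANCE_HOUR * hours

def memory_cost_cents(app_memory_mb, hours):
    # Pay only for the memory the app actually reserves.
    return app_memory_mb / 1024 * CENTS_PER_GB_HOUR * hours

# A 128MB service over a 720-hour month pays for a full 1GB VM under
# instance-hour pricing, but only for 128MB under memory pricing.
print(instance_hour_cost_cents(128, 720))  # 3600 cents -> $36.00
print(memory_cost_cents(128, 720))         # 450.0 cents -> $4.50
```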

    I’m very familiar with the dyno. Apprenda has a very similar concept known as a Resource Policy. Given that we are a private PaaS, operators get to define Resource Policies and publish them to developers, who then associate them with their apps. In Apprenda’s case, we’re not a Heroku; we equip an IT department to offer a Heroku-like service to their developers. IT departments license Apprenda, so using the dyno as a conceptual model doesn’t make sense, because we don’t charge developers. The IT department we sell to might charge developers, and it has the ability to do so via Resource Policies.

    As for your question “Why can’t you still charge based on just usage?” when it comes to private PaaS, that doesn’t necessarily align with reality. Installing and running a large private PaaS service is non-trivial, and part of our cost to support a customer is their active PaaS environment. Even if no one is using the PaaS, we support the customer, and our technology also enables them to offer a new service to their developers. That alone has value; without Apprenda or some other PaaS vendor, the IT department has no service to offer its developers. Is there room for a blended model? Possibly. But it’s not obvious what the advantage is.

    I’d say that the model is relatively new, and it is nothing like charging per node. Per-node pricing assumes a fixed definition of a node and heavily influences the infrastructure configuration a customer would choose. An application server license should minimize its influence on the underlying hardware. Per-node and memory-based pricing are nothing alike.
