Blog co-authored by Cesar Obediente, Principal Systems Engineer, Cisco
In a world buzzing with microservices, containers and distributed cloud platforms, most organizations face a daunting challenge: entering the promised land of shiny cloud-native apps and rapid application deployments at scale without compromising security and compliance. The good news is that there is a growing number of technologies that can help in this regard; the bad news is that most of them apply only to new, cloud-native applications.
Objectively speaking, microservices remain the primary target of all major container solutions. This type of application, incidentally, is often a minority within the portfolios of established companies. The bulk of applications fueling business today were created several years ago following architectures that are not conducive to rapid cloud deployments. Many of these apps won’t even “fit” within any common container type.
With the exception of new start-ups, which have the inherent luxury of starting fresh, most technologists live in a divided world. It is a world with an increasing, yet small, footprint of modern and portable apps that may not yet bring in any revenue. At the same time, this world is dominated by the legacy of the past – mature production apps that are not as pretty and sexy but, nonetheless, remain revenue machines. Migrating business-critical legacy apps over to the new tech may take time or may prove not to be economically feasible due to the complexities of the business logic they contain. Some organizations may decide to maintain them and build a façade of new cloud-native apps around the legacy components. In these cases, the question becomes how to deploy, secure and monitor the two categories of applications in a consistent and efficient way.
Using the traditional methods of network security for continuous application deployments is not a trivial undertaking. It requires lengthy meetings between network admins, IT admins and developers – people who do not speak the same language. Solutions are typically manual, or based on custom and short-lived scripting, which renders them inherently inefficient and error-prone. This matter gets progressively worse when dealing with the mix of legacy and cloud-native components.
Application Centric Infrastructure (ACI) is Cisco’s next-generation data center architecture, designed to bring network automation to traditional networks while meeting the deployment demands of new applications. Cisco ACI introduces an application-centric policy model in which connectivity is defined by grouping endpoints (physical devices, virtual machines and containers) into endpoint groups (EPGs), with communication between EPGs permitted according to application requirements. ACI recently added native support for Kubernetes clusters, which extends this flexible policy model to containers. Kubernetes pods running containerized apps can now be associated with distinct EPGs, just like traditional virtual machines or bare-metal servers.
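To make the pod-to-EPG association concrete, here is a minimal sketch of how the Cisco ACI CNI plugin lets a pod be mapped to a specific EPG via a Kubernetes annotation. The tenant, application profile and EPG names below are hypothetical placeholders; consult the plugin documentation for the exact annotation contract in your release.

```python
import json

# Hypothetical names: the ACI tenant, application profile and EPG that
# should receive this pod's traffic policy.
epg_ref = {
    "tenant": "demo-tenant",
    "app-profile": "demo-app",
    "name": "frontend-epg",
}

# The ACI CNI plugin reads an annotation whose value is a JSON string
# identifying the target EPG; this dict mirrors a pod's metadata.annotations.
annotation = {
    "opflex.cisco.com/endpoint-group": json.dumps(epg_ref)
}
print(annotation)
```

Applied to a pod spec (for example via `kubectl annotate`), this places the pod's endpoint into the named EPG, so the same contracts that govern VMs and bare metal apply to the container.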
Apprenda is a maker of a versatile container platform for rapid deployments of both cloud-native and traditional application workloads. Apprenda can manage deployments of microservices in any technology, a large variety of Windows applications, Linux services and legacy Java components in a uniform way.
The Apprenda platform natively integrates with ACI and provides secure containerization and automated network isolation of applications at deployment time. This automation applies to different types of workloads: cloud-native, legacy and mixed.
In the initial iteration of the integration, Apprenda relied on Contiv, a Cisco CNI plugin for Kubernetes, as an intermediary when it needed to create EPGs and contracts in APIC. Since the release of ACI 3.0, wherein Kubernetes became a first-class citizen, Apprenda has been using the native CNI plugin for Kubernetes to dynamically create policies and contracts between the workloads at deployment time. The native integration is now the default and preferred option, as it provides better performance and reliability of the automated application isolation at the network level.
The Apprenda–ACI integration helps with two use cases. It simplifies the process of attaching the Apprenda platform and its Kubernetes clusters to the ACI fabric, and it reduces the complexity of securing individual applications and their components during continuous deployment.
The process of enabling Apprenda on the ACI fabric is fully automated. A platform operator, typically an IT admin, can use the ACI initialization tool, built into the Apprenda Operator Portal.
All the operator needs to do is provide connection information and credentials for the APIC. Once these credentials are saved, the operator pushes the Initialize button. From there, the automated process takes over, sparing everyone from lengthy meetings, delays and human error. Within seconds, the platform becomes part of the ACI fabric, including the Kubernetes clusters managed by Apprenda. The Apprenda configuration in ACI is based on at least two tenants.
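Under the hood, automation like this typically starts with an authenticated session against the APIC REST API. The sketch below builds the JSON body for APIC's `aaaLogin` endpoint; the APIC address and credentials are placeholders, and this is an illustration of the API shape rather than Apprenda's actual implementation.

```python
import json

APIC_URL = "https://apic.example.com"  # hypothetical APIC address


def login_payload(username: str, password: str) -> str:
    """Build the JSON body for APIC's /api/aaaLogin.json endpoint."""
    return json.dumps(
        {"aaaUser": {"attributes": {"name": username, "pwd": password}}}
    )


# POSTing this body to f"{APIC_URL}/api/aaaLogin.json" returns a session
# token (APIC-cookie) that authorizes subsequent configuration calls.
body = login_payload("admin", "secret")  # placeholder credentials
print(body)
```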
The first tenant contains an application network profile (ANP) and at least one EPG that protects the core Apprenda platform, which comprises several Windows servers where .NET workloads are deployed. The other ACI tenant is for the Kubernetes cluster managed by Apprenda.
The automated setup process also creates several contracts for shared services such as ICMP, DNS and L3-out connectivity, as well as contracts that govern communication between the two tenants.
One contract ensures that the platform can manage deployments of applications to the Kubernetes cluster. A separate contract allows guest applications deployed to the Kubernetes cluster to communicate with the platform’s core services, and vice versa.
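In APIC terms, an EPG expresses which contracts it provides and which it consumes. The sketch below assembles a managed-object payload for an EPG with provider and consumer contract relations. The class names (`fvAEPg`, `fvRsProv`, `fvRsCons`) are standard ACI object classes; all tenant, profile, EPG and contract names are illustrative, not Apprenda's actual naming.

```python
import json


def epg_with_contracts(tenant, ap, epg, provided, consumed):
    """Build an APIC managed-object payload for an EPG that provides
    and consumes the named contracts."""
    children = (
        [{"fvRsProv": {"attributes": {"tnVzBrCPName": c}}} for c in provided]
        + [{"fvRsCons": {"attributes": {"tnVzBrCPName": c}}} for c in consumed]
    )
    return {
        "fvAEPg": {
            "attributes": {
                "name": epg,
                "dn": f"uni/tn-{tenant}/ap-{ap}/epg-{epg}",
            },
            "children": children,
        }
    }


# Illustrative names: a guest-app EPG in the Kubernetes tenant that provides
# one contract to the platform and consumes platform and shared services.
payload = epg_with_contracts(
    "k8s-tenant", "apprenda-apps", "guest-app-epg",
    provided=["app-to-platform"],
    consumed=["platform-to-app", "dns", "icmp"],
)
print(json.dumps(payload, indent=2))
```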
The integrated solution also reduces the complexity of securing individual applications and their components during continuous deployment.
Apprenda provides an abstraction layer that lets developers and product management communicate business requirements and application SLAs to the infrastructure layers before the application goes live. The platform takes these requirements and translates them into specific commands that trigger network segmentation and isolation. During the configuration phase, the developer can tell Apprenda how the app must be protected at the network level by ACI. This can be accomplished with just one setting, Network Isolation Mode.
The selected value ultimately triggers the automation of ACI network policies while Apprenda containerizes the application.
In addition, developers can specify the external services their applications may need to rely on. By default, shared services such as DNS, ICMP and external connectivity are available for all apps to consume.
This is where developers can also tell Apprenda that their Kubernetes components need to talk to a legacy guest app deployed to the platform’s legacy network segment. In this example, all legacy apps are relegated to a single EPG; in reality, the segmentation can be much more granular. The ACI contracts between the deployed application and all the selected services are created automatically at deployment time.
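One way to picture this deploy-time translation: the services a developer selects become consumed-contract relations on the new app's EPG, and a dependency on a legacy app becomes consumption of the contract that legacy EPG provides. The mapping below is a hypothetical sketch of that idea; the service and contract names are invented for illustration and do not reflect Apprenda's internal naming.

```python
# Hypothetical mapping from shared-service selections to ACI contract names.
DEFAULT_SERVICES = {
    "dns": "shared-dns",
    "icmp": "shared-icmp",
    "l3out": "shared-l3out",
}


def consumed_contracts(selected_services, legacy_epgs=()):
    """Translate selected shared services and legacy-app dependencies
    into the list of contracts the new app's EPG should consume."""
    contracts = [DEFAULT_SERVICES[s] for s in selected_services if s in DEFAULT_SERVICES]
    # Depending on a legacy app means consuming the contract its EPG provides.
    contracts += [f"legacy-{epg}-access" for epg in legacy_epgs]
    return contracts


print(consumed_contracts(["dns", "icmp"], legacy_epgs=["billing"]))
# -> ['shared-dns', 'shared-icmp', 'legacy-billing-access']
```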
This way, both new and existing apps can be secured by a single tool in a consistent and frictionless way.
Follow this link for additional technical details and to view video demonstrations of the setup and usage of this integrated solution.