Over the last year, the role of hybrid cloud has grown in importance. An ever-increasing number of organisations are realising that there is no single, universal IT solution, and that to meet the needs of all their stakeholders, internal and external, they must embrace a variety of technologies.
Organisations are turning to managed services to assure application performance in hybrid clouds
Everyone wants a consistent application experience, regardless of what infrastructure sits underneath the workload. Companies are increasingly using hybrid cloud as a managed service platform to achieve this. As cloud adoption continues to rise, the industry is moving beyond simple self-service portals for provisioning infrastructure to managed services platforms that are completely software based.
These automated, software-driven managed services provide features that organisations find highly desirable. They offer consistent Service Level Agreements (SLAs) regardless of deployment model, whether workloads run in public, private, or vendor clouds, or on-premise, as well as seamless workload portability and automated migration. There is also the added benefit of governance, risk, and compliance assurance across the user base and deployment model.
Cloud workloads are being automated above the orchestration layer
Until now, cloud services have concentrated on automating the orchestration layer. However, business needs evolve, and organisations are increasingly looking at automation above the orchestration layer in order to automate the whole application deployment process across any cloud infrastructure.
Having an automated framework for deploying an application in the cloud speeds up both initial deployment and ongoing DevOps integration. This not only makes application management easier, but it also accelerates delivery of the application owner’s business objectives. However, finding an integration point that will support platform-independent strategies is a real concern. As a result, organisations have started looking for an integration point above the orchestration layer that allows for automated deployments across multi-vendor platforms.
Some are using abstracted DevOps toolsets which can be used to deploy in other environments without creating additional API libraries of their own. This enables the selection of a cloud environment based on individual business requirements, including geography and SLAs, and leaves the door open to leveraging other clouds for future deployments. It also reduces the risk of vendor lock-in.
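As a sketch of what such an abstracted toolset looks like underneath, the pattern is one deployment interface with each vendor's API hidden behind it. Everything here, the `CloudProvider` interface, the `ProviderA`/`ProviderB` classes, and the spec fields, is a hypothetical illustration; a real toolset would wrap actual vendor SDKs.

```python
from abc import ABC, abstractmethod


class CloudProvider(ABC):
    """Minimal provider abstraction; concrete classes wrap a vendor's API."""

    @abstractmethod
    def provision(self, spec: dict) -> str:
        """Create infrastructure from a spec and return an instance id."""


class ProviderA(CloudProvider):
    def provision(self, spec: dict) -> str:
        # In practice this would call vendor A's SDK.
        return f"a-{spec['name']}"


class ProviderB(CloudProvider):
    def provision(self, spec: dict) -> str:
        # In practice this would call vendor B's SDK.
        return f"b-{spec['name']}"


def deploy(app_spec: dict, provider: CloudProvider) -> str:
    """Deploy the same application spec to whichever provider is chosen."""
    return provider.provision(app_spec)


spec = {"name": "web", "cpus": 2, "region": "eu-west"}
print(deploy(spec, ProviderA()))  # a-web
print(deploy(spec, ProviderB()))  # b-web
```

Because the application spec never changes, the choice of cloud can be made per deployment, based on geography, SLA, or cost, without rewriting the deployment code.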
It’s important, though, to understand the mechanisms that differentiate these toolsets from one another, in order to see which best addresses the application’s real business requirements.
Self-service provisioning and automation to support public, private, and hybrid cloud-based development teams is becoming a requirement, rather than an option. Organisations that have been slow to embrace self-service will find their development teams falling behind.
Programmable networks are also powerful enablers of hybrid cloud, allowing firms to roll out new operational sites much more quickly. Previously, they’d buy equipment, configure it, take it to the new site, and would need a highly skilled engineer to install it.
Now, with generic programmable equipment and cloud applications, it is possible to templatise the process: take a fairly simple device, put the intelligence in the cloud, and use a less specialised engineer to install the equipment on site while the branch software is provisioned from the cloud.
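A minimal sketch of such templatisation, using Python's standard `string.Template` to render a per-site configuration; the template fields (`site`, `vlan`, `controller`) and the controller hostname are illustrative assumptions, not any real product's format.

```python
from string import Template

# Illustrative branch configuration template; per-site values are
# substituted in, everything else is standardised centrally.
BRANCH_TEMPLATE = Template(
    "hostname $site\n"
    "controller $controller\n"
    "vlan $vlan\n"
)


def render_branch_config(site: str, vlan: int,
                         controller: str = "cloud.example.net") -> str:
    """Fill in the per-site values; the rest comes from the template."""
    return BRANCH_TEMPLATE.substitute(site=site, vlan=vlan,
                                      controller=controller)


print(render_branch_config("branch-042", vlan=120))
```

The on-site engineer only needs to cable up the device; the rendered configuration is pushed from the cloud controller.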
Container tools are becoming the new platform-as-a-service
When Docker was introduced in 2013, virtualisation started moving from the level of the machine to the level of the application. An open-source project that automates the deployment of applications inside software containers, Docker makes applications much more portable across hybrid cloud and on-premise infrastructures. In combination with Kubernetes, the open-source container cluster manager originally designed by Google, it is quickly displacing platform-as-a-service offerings such as Heroku.
In 2017 we’ll see more widespread adoption of containers, but the transition to a fully containerised world will take a few more years. Initially, we’ll see traction in using Kubernetes as a deployment model for more complex workloads. Because support for Docker is variable across public cloud platforms, organisations are likely to resist jumping to Docker on multi-cloud. They’ll probably stick to using it on a single cloud platform, and achieve hybridisation in combination with their on-premise stack.
When adopting Docker and Kubernetes (or similar variants), organisations should make sure they have a clear strategy around image management, network access and security patching, service discovery, and container monitoring.
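As an illustration of one of those areas, image management, here is a minimal sketch of a policy check that rejects images from unknown registries or images pinned to a mutable `latest` tag. The registry name and the rules themselves are assumptions for the example; in practice such policies are usually enforced by the registry or a cluster admission controller.

```python
# Hypothetical allowlist of trusted internal registries.
ALLOWED_REGISTRIES = {"registry.example.com"}


def image_policy_ok(image: str) -> bool:
    """Reject images from unknown registries or with a mutable 'latest' tag."""
    registry, _, rest = image.partition("/")
    # An image reference without an explicit tag defaults to 'latest'.
    tag = rest.rsplit(":", 1)[1] if ":" in rest else "latest"
    return registry in ALLOWED_REGISTRIES and tag != "latest"


print(image_policy_ok("registry.example.com/app:v1.2"))  # True
print(image_policy_ok("docker.io/library/nginx"))        # False
```

Pinning to immutable version tags is what makes security patching auditable: you know exactly which image is running where.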
Network function virtualisation becomes the path to hybrid cloud nirvana
Nirvana in hybrid cloud is where one part of a service is run in a firm’s own data centre, a second part is on public cloud provider A, and the remaining part is in public cloud provider B. The firm is then free to determine where they want any element of the service to run based on performance, availability, privacy, or cost.
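The placement decision described above can be sketched as a simple constraint filter over candidate environments. The candidate fields and the cost-only tie-break are illustrative assumptions; a real placement engine would also weigh live performance and availability data.

```python
def choose_placement(candidates, require_private=False, max_cost=None):
    """Pick the cheapest candidate that satisfies privacy and cost limits."""
    eligible = [c for c in candidates
                if (c["private"] or not require_private)
                and (max_cost is None or c["cost"] <= max_cost)]
    # Cheapest eligible environment wins; None means nothing qualified.
    return min(eligible, key=lambda c: c["cost"])["name"] if eligible else None


# Hypothetical environments: the firm's own data centre plus two clouds.
candidates = [
    {"name": "own-dc",  "cost": 9, "private": True},
    {"name": "cloud-a", "cost": 4, "private": False},
    {"name": "cloud-b", "cost": 6, "private": False},
]
print(choose_placement(candidates))                        # cloud-a
print(choose_placement(candidates, require_private=True))  # own-dc
```

A privacy-sensitive component lands in the firm's own data centre; everything else goes wherever it runs cheapest.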
One of the major reasons why this ideal state has yet to be achieved is that the network elements in these hybrid domains have to be stitched together. At first, the answer was thought to lie in software-defined networking (SDN). Some enterprises attempted to use SDN to unite their hybrid cloud environments, but discovered that SDN is very complex. This remains a hurdle that most enterprises have yet to overcome.
By contrast, the related but subtly different network function virtualisation (NFV) promises to be a much easier way of networking together hybrid cloud and hybrid IT environments. NFV is the process of moving services such as firewalls, load balancing, and intrusion prevention systems away from dedicated hardware into a virtualised environment, for example as virtual appliances.
One of NFV’s advantages is that the virtual networking and security appliances it employs allow organisations to maintain control of IP addressing schemes, DNS, and routing choices as they stitch the network together. They also allow firms to treat the cloud as an extension of their own network, one that uses the networking technologies, tools, and vendors they’re familiar with.
This is why we’ll see much more interest in NFV when cloud-enabling existing networks, and why we expect new networks to be architected with hybrid cloud in mind.
NFV is also becoming the preferred enabler of containerisation
Container networking is different from traditional networking. Containers are very dynamic and short-lived, giving rise to a lot of unpredictable traffic flow. When a container is started, it needs to be registered in some directory; when it’s ‘killed’, everyone needs to know. This is done through a service discovery layer and processes that run alongside the containers, using tools such as CoreOS’s etcd or Apache ZooKeeper.
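A toy sketch of the register/deregister pattern behind such a directory, with a time-to-live so that entries for containers that die without deregistering eventually disappear. Real systems delegate this to etcd or ZooKeeper rather than an in-memory dictionary; all names here are illustrative.

```python
import time


class ServiceDirectory:
    """Toy in-memory service directory; real deployments use etcd/ZooKeeper."""

    def __init__(self, ttl_seconds: float = 10.0):
        self.ttl = ttl_seconds
        self._entries = {}  # container id -> (address, expiry time)

    def register(self, container_id: str, address: str) -> None:
        # Containers re-register periodically to keep their entry alive.
        self._entries[container_id] = (address, time.monotonic() + self.ttl)

    def deregister(self, container_id: str) -> None:
        # A cleanly killed container removes itself so everyone knows.
        self._entries.pop(container_id, None)

    def lookup(self, container_id: str):
        entry = self._entries.get(container_id)
        if entry is None or entry[1] < time.monotonic():
            return None  # never registered, killed, or heartbeat lapsed
        return entry[0]


directory = ServiceDirectory(ttl_seconds=5)
directory.register("web-1", "10.0.0.7:8080")
print(directory.lookup("web-1"))   # 10.0.0.7:8080
directory.deregister("web-1")
print(directory.lookup("web-1"))   # None
```

The TTL is what copes with the short-lived, unpredictable nature of containers: a crashed container simply ages out of the directory.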
The Kubernetes networking model requires that containers can communicate with network nodes and one another directly, and that a container sees itself as having the same IP address that others see it as having. In Kubernetes, the unit of IP addressing is the pod: all containers within a pod share a single IP address and communicate with one another over localhost.
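The shared-IP, localhost behaviour can be mimicked on a single host with two threads standing in for two containers in one pod. This is only an analogy for the pod's shared network namespace, not Kubernetes itself, and the port and payload are arbitrary.

```python
import socket
import threading

# The 'server container': bind on the loopback interface that all
# containers in the pod share (port 0 lets the OS pick a free port).
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]


def serve_once() -> None:
    conn, _ = srv.accept()
    with conn:
        conn.sendall(b"pong")


server = threading.Thread(target=serve_once)
server.start()

# The 'client container': a sibling in the same pod reaches the server
# over localhost, with no pod-to-pod routing involved.
with socket.socket() as cli:
    cli.connect(("127.0.0.1", port))
    reply = cli.recv(4).decode()

server.join()
srv.close()
print(reply)  # pong
```

This is also why two containers in the same pod cannot bind the same port: they share one network namespace, just as two processes on one host do.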
There are a number of ways of approaching these containerisation networking challenges: from Docker networking options, to container-centric options, to SDN, and NFV.
However, if we accept that a greenfield Docker deployment is less likely than a hybrid deployment, then it boils down to a simple proposition: if containers are to run alongside existing virtual machine implementations, the NFV approach is the most likely to successfully address these container networking challenges.