Container adoption and the road to multi-cloud – why observability matters

More companies are adopting multi-cloud as part of their IT strategies, both to stay ahead of competitors and to retain control over their infrastructure plans. The multi-cloud market has contenders of all sizes – from the biggest public cloud providers to smaller niche providers and co-location operators, through to internal data centre teams – all vying to lead on supporting major application deployments that can run across cloud and internal data centres.

With all the options involved, multi-cloud can be reached via many different routes. What makes the difference today? Containers.

Containers and application design 

Software containers provide a way to package application components as small, portable units. Unlike virtualisation – where each virtual machine carries a full operating system alongside the application – containers are much smaller. They run on top of the Linux operating system and tap directly into the kernel, sharing resources across all container images and making the application containers themselves more efficient.

More importantly, the most popular container orchestration system, Kubernetes, has made it possible to move these container images between different providers and carry on running. Rather than being tied to internal data centre deployments, to specific data centre locations or to any single service provider, enterprises can migrate as and when it suits them. For DevOps teams, container orchestration should make the move to container-based applications easier, even as the underlying infrastructure becomes more complex.
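
To make the portability point concrete, here is a minimal sketch using the official Kubernetes Python client: the same API call runs unchanged against clusters hosted on different providers. The context names ("eks-prod", "gke-prod") are assumptions about a local kubeconfig, not real cluster names.

```python
# A minimal sketch, assuming the official Kubernetes Python client
# (pip install kubernetes) and a kubeconfig with two hypothetical
# contexts pointing at clusters on different providers.
from kubernetes import client, config

for context in ["eks-prod", "gke-prod"]:   # hypothetical context names
    config.load_kube_config(context=context)
    apps = client.AppsV1Api()
    deployments = apps.list_namespaced_deployment(namespace="default")
    print(context, [d.metadata.name for d in deployments.items])
```

The application definition itself does not change between providers; only the context the client points at does.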

According to Gartner, more than 75 percent of global organisations will be running containerised applications in production by 2022, compared with under 30 percent today. Container technologies like Docker and the Kubernetes orchestrator have grown rapidly in adoption – according to our research, around 30 percent of companies were running Docker on AWS in 2019, up from 18 percent in 2016 and 24 percent in 2017. Similarly, native Kubernetes deployments have grown from 8 percent in 2017 to 20 percent in 2019.

This growth reflects the big change that containers represent in how applications are deployed. Using containers, you can scale up and down in response to demand much faster than traditional IT infrastructure or virtualised environments allow. Using microservices design and API connections, you can also replace or extend individual application components more easily than traditional application designs allow.
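
As an illustration of that scaling model, the hedged sketch below uses the Kubernetes Python client to resize a deployment based on a demand signal. The deployment name, namespace, scaling thresholds and the queue_depth() function are all illustrative assumptions.

```python
# A minimal sketch of demand-driven scaling, assuming the official
# Kubernetes Python client. queue_depth() stands in for whatever
# demand signal your service actually exposes.
from kubernetes import client, config

def queue_depth() -> int:
    return 120  # hypothetical: read pending work from your own metrics

def scale_to_demand(name: str = "checkout", namespace: str = "default") -> None:
    config.load_kube_config()  # use load_incluster_config() when run inside a pod
    apps = client.AppsV1Api()
    # One replica per 50 queued items, capped between 2 and 20.
    replicas = max(2, min(20, queue_depth() // 50 + 1))
    apps.patch_namespaced_deployment_scale(
        name, namespace, body={"spec": {"replicas": replicas}}
    )

if __name__ == "__main__":
    scale_to_demand()
```

In practice a Kubernetes HorizontalPodAutoscaler would normally run this loop for you; the sketch just makes the mechanism visible.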

Theory vs practice

In theory, you can lift a set of containers out of your internal data centre, run them in a public cloud or on a managed service, and bring that service back again. Kubernetes should therefore help companies adopt a hybrid cloud or multi-cloud strategy in which the organisation has more control over its own destiny. For the public cloud providers, the advent of Kubernetes should drive more adoption of their services too. In practice, multi-cloud is still in its early stages – according to our research, around nine percent of companies currently run multiple clouds.

While multi-cloud deployments are still in their initial stages, there is a strong correlation between multi-cloud and Kubernetes use: around 20 percent of AWS customers use Kubernetes, but this rises to 59 percent for those running AWS and Google Cloud Platform (GCP), and to 80 percent for those running AWS, GCP and Azure together.

The observability challenge

However, the move to containers can be more challenging over the longer term. The ability to run containers in the same way wherever the right infrastructure is located should provide more freedom to deal with availability and scalability concerns. What needs to be considered is how to wrap the right management approach around containers, so that the right decisions can be made consistently over time.

For DevOps teams, this means looking at observability – getting the right data at every level, from each individual container instance up to the whole application service. Observability comes from using application and infrastructure logs, metrics and tracing data together to build up the best possible picture of these new services.
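
A minimal sketch of what that looks like at the container level, using only the Python standard library: one record carries a log message, a metric and a trace identifier together. The field names and the "payments" service name are illustrative, not any particular vendor's schema.

```python
# A minimal sketch of the three observability signals emitted together.
import json, os, sys, time, uuid

def handle_request() -> None:
    trace_id = uuid.uuid4().hex  # tracing: ties this request's records together
    start = time.monotonic()
    # ... real request handling would happen here ...
    latency_ms = round((time.monotonic() - start) * 1000, 2)

    record = {
        "service": "payments",                               # illustrative name
        "container": os.environ.get("HOSTNAME", "unknown"),  # set by Docker/Kubernetes
        "trace_id": trace_id,                                # tracing
        "level": "info",                                     # logging
        "message": "request completed",
        "latency_ms": latency_ms,                            # metric
    }
    # Writing structured JSON to stdout lets any collector pick it up.
    sys.stdout.write(json.dumps(record) + "\n")

handle_request()
```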

Without this data, it is difficult for developers building containerised applications to see problems. Similarly, it is hard for infrastructure and operations teams to know whether they are running applications in the most efficient way, or whether they need to budget for additional capacity. From a practical perspective, gathering this data from each machine image or container is itself a challenge.

While it is possible to orchestrate and manage containers at massive scale, the observability side has not kept up. For companies with applications running across multiple data centre locations, or across combinations of internal and external cloud services, this observability is critical to controlling spend and budget allocation. However, it can easily be missed.

The specifics of containers

The first issue is that containers don’t produce data in the same way as virtual machines or more traditional physical servers. Each container can provide data on operations, but the information will stay within the container unless it is collected and centralised. For teams running applications across multiple locations, this process has to be considered from the start.
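
As a sketch of that collection step, the snippet below tails a container's output stream and forwards each record to a central endpoint. The collector URL is a placeholder assumption; in a real deployment a log shipper or agent would do this work.

```python
# A minimal sketch of forwarding container output to a central store,
# using only the standard library.
import sys
import urllib.request

COLLECTOR_URL = "http://logs.internal.example:8080/ingest"  # hypothetical endpoint

def forward(line: str) -> None:
    req = urllib.request.Request(
        COLLECTOR_URL,
        data=line.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req, timeout=5)

# Pipe the container's stdout into this process and ship each record.
for line in sys.stdin:
    if line.strip():
        forward(line)
```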

Containers also lose their data records when they shut down. For services that rely on flexibility and scaling up and down in response to demand levels, this can make it extremely difficult to track use patterns and application behaviour across different sites. Without this machine data, it becomes hard to run these multi-cloud applications over time.
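
One mitigation is to flush anything still buffered when the platform signals shutdown. Kubernetes sends SIGTERM and then waits through a grace period before killing a pod, which is the window the hedged sketch below uses; ship() stands in for the forwarding logic above.

```python
# A minimal sketch of flushing buffered records before a container stops.
import signal, sys

buffer: list[str] = []  # records not yet sent to the central collector

def ship(records: list[str]) -> None:
    print(f"shipped {len(records)} records", file=sys.stderr)  # stand-in

def on_terminate(signum, frame):
    ship(buffer)   # flush everything still held in memory
    buffer.clear()
    sys.exit(0)

signal.signal(signal.SIGTERM, on_terminate)
```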

The second issue is that each of these images can run anywhere. Rather than information automatically being gathered in one place, you either have to correlate data from multiple containers running across different instances, or look at data from the application itself. Neither approach on its own provides the right level of context for making decisions.
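
Correlation gets easier when every record carries placement metadata from the moment it is emitted. In the sketch below, the CLOUD_PROVIDER and REGION environment variables are assumptions you would inject at deploy time (for example via the downward API on Kubernetes).

```python
# A minimal sketch of enriching records so data from many containers,
# running on any provider, can be correlated centrally.
import json, os

def enrich(record: dict) -> dict:
    record.update({
        "container": os.environ.get("HOSTNAME", "unknown"),
        "provider": os.environ.get("CLOUD_PROVIDER", "unknown"),  # e.g. aws, gcp, azure
        "region": os.environ.get("REGION", "unknown"),
    })
    return record

print(json.dumps(enrich({"message": "request completed", "latency_ms": 42})))
```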

Getting a complete overview of these new applications – whether they are hosted internally, externally or as a mix of both – will be essential if you make the move to implement containers. Without this data, you run the risk of allocating resources in the wrong ways. By understanding the landscape developing around your applications, you can back the right approach for the future, rather than taking the wrong route. Regardless of who is viewed as winning the multi-cloud crown, it’s worth understanding how the race was won too.

Written by Mark Pidgeon, Vice President Technical Services at Sumo Logic.