Containerization Isn’t Going Anywhere.

Why it’s here to stay for good

Blake Sanie
codeburst



I remember the days when I could simply deploy my code onto a Linux virtual machine and call the job done. Those days are no more.

Today, containerization is all the hype, and for good reason: it brings a shipment of power and flexibility to software development, deployment, and runtime, and it can be applied to any operating system, any machine, any project.

The most commonly used containerization technology is Docker. Let’s explore why it’s so useful and how it differs from older approaches.

Virtual Machines vs. Containers

Traditionally, server infrastructure (hardware) was divided into multiple virtual machines (VMs) via a hypervisor like VMware. The hypervisor allocates the server’s resources (CPU, RAM, storage, etc.) among different guest operating systems, each representing a virtual machine.


Containerization introduces a more modern approach. Instead of partitioning a server into virtual machines, installing Docker on a single host operating system allows multiple isolated containers to run side by side, each scaled independently. These containers are fast and lightweight because they share the host’s kernel along with common packages and binaries.

In short, VMs virtualize the hardware, while containers virtualize the operating system.
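
To make the contrast concrete, here is a minimal sketch of that idea in practice: a hypothetical docker-compose.yml that starts two isolated containers (a web server and a cache) on a single host. The images and ports are illustrative; the point is that both containers share the host’s kernel rather than each booting its own guest operating system.

```yaml
# docker-compose.yml — illustrative sketch: two isolated containers on one host OS.
# Both share the host kernel, so they start in seconds instead of booting a guest OS.
version: "3.8"

services:
  web:
    image: nginx:alpine        # lightweight web server image
    ports:
      - "8080:80"              # publish the container's port 80 on the host
  cache:
    image: redis:alpine        # lightweight in-memory cache image
```

Running `docker compose up` brings both containers up on the same machine; a comparable VM-based setup would mean provisioning and booting two full guest operating systems.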

Networking

Distributed Applications

One major benefit of containerizing applications is the ability to create a distributed network, or web of connected nodes. As a result, applications become horizontally scalable: nodes can be added or removed at any time. The network also enables fault tolerance. If one container goes down, the service is not lost; the other nodes carry the extra load until the outage is resolved.
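
As a rough sketch of what this looks like in the Docker ecosystem, a Compose file deployed to a Docker Swarm cluster (with `docker stack deploy`) can declare a replica count, and the swarm keeps that many copies of the container running across its nodes. The service name, image, and replica count below are hypothetical.

```yaml
# stack.yml — hypothetical Swarm stack: three replicas of one service.
# If a container (or the node hosting it) fails, the swarm schedules a
# replacement so the declared replica count is maintained.
version: "3.8"

services:
  api:
    image: example/api:latest   # placeholder image name
    deploy:
      replicas: 3               # horizontal scale: adjust up or down at any time
      restart_policy:
        condition: on-failure   # restart replicas that crash
    ports:
      - "80:8080"
```

After `docker stack deploy -c stack.yml mystack`, scaling is a one-line change to `replicas` (or a `docker service scale` command), which is exactly the horizontal elasticity described above.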


Networking also involves organizing containers into microservices, each focused on a specific task or responsibility. These distributed modules talk to each other, triggering events and passing data, to keep the application as a whole operational. Furthermore, each microservice can be a distributed system in itself, allowing for even greater flexibility.
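
Here is a hedged sketch of that microservice wiring with Docker Compose: two hypothetical services, a frontend and an orders API, attached to the same network and reaching each other by service name through Docker’s built-in DNS. All names, images, and ports are illustrative.

```yaml
# docker-compose.yml — hypothetical two-microservice layout.
# Each service has one responsibility; they communicate over a shared network
# using the service name as the hostname (e.g. http://orders:5000).
version: "3.8"

services:
  frontend:
    image: example/frontend:latest     # placeholder image
    ports:
      - "80:3000"
    networks:
      - app-net
    environment:
      ORDERS_URL: "http://orders:5000" # resolved via Docker's internal DNS

  orders:
    image: example/orders:latest       # placeholder image
    networks:
      - app-net

networks:
  app-net:
```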

Container Orchestration

Tools like Kubernetes and Docker Swarm specialize in orchestrating a network of containers: automating processes, defining configurations, scheduling workloads, autoscaling clusters, balancing load, maintaining replicas, and more. With these tools, your orchestra of containers evolves into a symphony.
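
In Kubernetes, that declarative orchestration looks roughly like the sketch below: a Deployment that asks for three replicas of a (hypothetical) image, and a Service that load-balances traffic across them. Kubernetes continuously reconciles the cluster toward this declared state, replacing failed pods automatically.

```yaml
# Illustrative Kubernetes manifests; names and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3                     # desired number of identical pods
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: example/api:latest
          ports:
            - containerPort: 8080
---
# Service: load-balances requests across the replicas above.
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 8080
```

Applying these with `kubectl apply -f .` and later bumping `replicas` (or attaching a HorizontalPodAutoscaler) is all it takes to scale the service.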


Deployment

Managed Cloud Containerization

Your containerized application will eventually live in the cloud, likely hosted on AWS, Google Cloud, or Microsoft Azure. Sure, you can spin up a VM if you choose, but these providers make deploying your containers nearly effortless. For instance, with AWS Fargate, Google Cloud Run, and Azure App Service, containerization is fully managed: push your container images, and the rest is handled for you, hands free. That is a fraction of the effort required to redeploy a VM and manually restart your containers and microservices from the command line.
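
As an example of how little configuration these managed services need, here is a rough sketch of a Cloud Run service written as a Knative-style manifest (Cloud Run accepts this format via `gcloud run services replace`). The service name and image path are placeholders.

```yaml
# service.yaml — hypothetical Cloud Run service in Knative Service form.
# Deployable with: gcloud run services replace service.yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-api                                 # placeholder service name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/api:latest  # placeholder image path
          ports:
            - containerPort: 8080
```

Cloud Run then handles provisioning, autoscaling (including scaling to zero), and load balancing; there is no VM to patch or restart.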


Compatibility with CI/CD

Deploying containerized applications is even more seamless when leveraging Continuous Integration and Continuous Delivery (CI/CD): whenever code is pushed or merged in your repository, your containers are automatically rebuilt and redeployed to your web service. Setting up this essential automation takes minutes with a service like GitHub Actions and will save you hours down the road.
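
As a minimal sketch, assuming the image is pushed to Docker Hub with credentials stored as repository secrets (the action versions, secret names, and image tag below are illustrative), a GitHub Actions workflow that rebuilds and pushes an image on every push to main might look like this; a final step would redeploy the new image to your particular host.

```yaml
# .github/workflows/deploy.yml — hypothetical CI/CD workflow.
name: build-and-push

on:
  push:
    branches: [main]             # run on every push to the main branch

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}   # stored as repo secrets
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          push: true
          tags: example/app:latest                      # placeholder image tag
      # A final step would redeploy the freshly pushed image to your cloud service.
```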

Containerization is Future-Proof

Since technology is continuously evolving, containerization will eventually be overthrown by another architecture, right? Wrong. Though the state of containerization today is not perfect, the practice of containerizing applications will not fade. The introduction of containerization made virtualization more abstract: the operating system is virtualized instead of low-level hardware. In fact, this pattern allows containers themselves to be virtualized, abstracting application environments even further.

When considering speed, practicality, organization, scalability, fault tolerance, and deployment, containerization beats virtual machines in every category. The main change the future will bring is further simplification of how microservices interact with one another. The three major cloud providers already manage much of this for developers, and ease of use will only improve from here.

The benefits and features of containerization mentioned in this article are only the tip of the iceberg. I strongly encourage you to read further and more thoroughly understand the true power and potential of containerized applications.

Thanks for reading.



Inquisitive student. Aspiring engineer. Photography enthusiast. Curious stock trader. blakesanie.com