Why Kubernetes Matters: Learn Its Key Benefits and Real-World Use Cases

If you're involved in software development, you've probably heard of Kubernetes. This powerful container orchestration tool has rapidly gained popularity since its launch in 2014, and is now widely used across a range of industries. But what exactly is Kubernetes, and why does it matter? I wrote this article to answer these questions and to understand why Kubernetes is such an important tool for modern application development and deployment.

What is Kubernetes?

According to the Kubernetes documentation, Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

At first, that definition didn't help me understand what Kubernetes is, why it is useful, or when we should use it.

To understand why developers and organizations use Kubernetes, it is necessary to give more context and explain how developers and organizations deployed applications before containers existed.

I will briefly explain how things work when developers deploy traditionally, using virtual machines, and using containers.

Traditional Deployment

When we create an app, say a Django or Rails app, or any app written in Python, Java, Go, Rust, etc., and we run it, the application runs on top of the OS, and the OS runs on top of the hardware.

This approach has its issues: if many applications run on the same machine, one of them may use more resources than the others, making the other applications underperform. Sure, if you are running one app, there is probably no problem. But if you are part of an organization that runs many apps to serve a lot of clients, customers, or users, it becomes a real concern. And using a dedicated machine for each application is expensive.

Virtual Machines

To solve this issue, organizations started to use virtual machines (VMs). A VM sits on top of the host (via a hypervisor), and every VM runs its own OS. So we have an application running inside a VM, isolated from the other applications, which run in other VMs; this way we can control the resources each application consumes. We can also have multiple VMs running on the same physical server.

An example is the Windows Subsystem for Linux (WSL), which allows Windows users to run a Linux distribution inside Windows. (The Java Virtual Machine, despite its name, is a process-level virtual machine, which is a different kind of virtualization.)

Containers

Then, organizations started to use containers. Containers are similar to VMs but are considered lightweight: they share the host operating system (OS) kernel among the applications. Like a VM, a container has its own filesystem, share of CPU, memory, process space, and more.

They are portable across clouds and OS distributions. That way, we can develop an application on a macOS or Windows system and run or deploy it on a Linux server. Oversimplifying, this is the process when we want to deploy an application to a cloud service: we containerize the application and then deploy it to the cloud service.
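As a rough sketch of that containerization step, a minimal Dockerfile for a hypothetical Python app could look like the following. The base image, file names, port, and entrypoint are illustrative assumptions, not something from a real project:

```dockerfile
# Minimal Dockerfile for a hypothetical Python app.
# Base image, file names, and port are illustrative assumptions.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first, so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application source code.
COPY . .

# Port the app is assumed to listen on.
EXPOSE 8000

CMD ["python", "app.py"]
```

With an image built from a file like this (for example, `docker build -t my-app:1.0 .`), the same container can run on a developer's laptop or on a Linux server in the cloud.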

Why we need Kubernetes and what it can do

In a production environment, we have many applications running in containers. But what happens if a container goes down? We have to start another container. Instead of starting a container manually, it would be convenient to have a system that does it for us. That is what Kubernetes does: it takes care of scaling and failover for our applications.
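To make the scaling and failover idea concrete, here is a minimal sketch of a Kubernetes Deployment manifest; all the names and the image are hypothetical. With `replicas: 3`, Kubernetes keeps three copies of the container running and starts a replacement whenever one goes down:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical name
spec:
  replicas: 3                   # desired state: keep 3 pods running at all times
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0     # illustrative image name
          ports:
            - containerPort: 8000
```

Applying this with `kubectl apply -f deployment.yaml` declares the desired state; if a pod crashes, Kubernetes sees that the actual state (two pods) differs from the desired state (three) and starts a new one, with no manual intervention.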

According to the official documentation, Kubernetes provides us with:

  • Service discovery and load balancing. Kubernetes can expose a container using the DNS name or using their IP address. If traffic to a container is high, Kubernetes can load balance and distribute the network traffic so that the deployment is stable.

  • Storage orchestration. Kubernetes allows you to automatically mount a storage system of your choice, such as local storages, public cloud providers, and more.

  • Automated rollouts and rollbacks. You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers and adopt all their resources to the new container.

  • Automatic bin packing. You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. You tell Kubernetes how much CPU and memory (RAM) each container needs. Kubernetes can fit containers onto your nodes to make the best use of your resources.

  • Self-healing. Kubernetes restarts containers that fail, replaces containers, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.

  • Secret and configuration management. Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
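Several of these features map directly onto fields in a pod's container spec. The fragment below is a sketch with illustrative names and values: the resource requests feed automatic bin packing, the liveness probe drives self-healing, and a password is injected from a Secret instead of being baked into the container image:

```yaml
# Fragment of a pod's container spec; all names and values are illustrative.
containers:
  - name: my-app
    image: my-app:1.0
    resources:
      requests:               # used for automatic bin packing onto nodes
        cpu: "250m"
        memory: "128Mi"
      limits:
        cpu: "500m"
        memory: "256Mi"
    livenessProbe:            # self-healing: restart the container on failure
      httpGet:
        path: /healthz
        port: 8000
      initialDelaySeconds: 5
      periodSeconds: 10
    env:
      - name: DB_PASSWORD     # secret management: value comes from a Secret
        valueFrom:
          secretKeyRef:
            name: my-app-secrets
            key: db-password
```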

What Kubernetes is not

Kubernetes is not a traditional, all-in-one PaaS (Platform as a Service) solution. Since Kubernetes operates at the container level rather than at the hardware level, it provides some generally applicable features common to PaaS offerings, such as deployment, scaling, and load balancing. It also allows users to integrate their own logging, monitoring, and alerting solutions, for example Prometheus, Grafana, or OpenTelemetry.

However, as the documentation says, these default solutions are optional and pluggable, which gives users the flexibility to build their own platforms.

The documentation mentions the features Kubernetes does not include:

  • Does not limit the types of applications supported. Kubernetes aims to support an extremely diverse variety of workloads, including stateless, stateful, and data-processing workloads. If an application can run in a container, it should run great on Kubernetes.

  • Does not deploy source code and does not build your application. Continuous Integration, Delivery, and Deployment (CI/CD) workflows are determined by organization cultures and preferences as well as technical requirements.

  • Does not provide application-level services, such as middleware (for example, message buses), data-processing frameworks (for example, Spark), databases (for example, MySQL), caches, or cluster storage systems (for example, Ceph) as built-in services. Such components can run on Kubernetes, and/or can be accessed by applications running on Kubernetes through portable mechanisms, such as the Open Service Broker.

  • Does not dictate logging, monitoring, or alerting solutions. It provides some integrations as proof of concept and mechanisms to collect and export metrics.

  • Does not provide nor mandate a configuration language/system (for example, Jsonnet). It provides a declarative API that may be targeted by arbitrary forms of declarative specifications.

  • Does not provide nor adopt any comprehensive machine configuration, maintenance, management, or self-healing systems.

  • Additionally, Kubernetes is not a mere orchestration system. It eliminates the need for orchestration. The technical definition of orchestration is the execution of a defined workflow: first do A, then B, then C. In contrast, Kubernetes comprises a set of independent, composable control processes that continuously drive the current state towards the provided desired state. It shouldn't matter how you get from A to C. Centralized control is also not required. This results in a system that is easier to use and more powerful, robust, resilient, and extensible.

Open Source Projects to Integrate With Kubernetes

As mentioned before, there are features that Kubernetes doesn't include, but we can add them using other projects. Many projects that integrate with Kubernetes can be found on the website of the Cloud Native Computing Foundation (CNCF).

Here is a list of projects that we can integrate with Kubernetes (this list is not exhaustive):

  • Argo - Kubernetes-native tools to run workflows, manage clusters, and do GitOps right.

  • Prometheus - Monitoring system and time series database.

  • Jaeger - A distributed tracing platform.

  • Linkerd - Ultralight, security-first service mesh for Kubernetes.

  • Emissary-ingress - An open-source Kubernetes-native API Gateway + Layer 7 load balancer + Kubernetes Ingress built on Envoy Proxy.

  • Cert-Manager - Automatically provision and manage TLS certificates in Kubernetes.

  • Contour - Contour is a Kubernetes ingress controller using Envoy proxy.

  • KubeEdge - Kubernetes Native Edge Computing Framework.

  • Kyverno - Kubernetes Native Policy Management.

  • KubeVirt - Kubernetes Virtualization API and runtime to define and manage virtual machines.

  • KubeVela - a modern application delivery platform that makes deploying and operating applications across today's hybrid, multi-cloud environments easier, faster and more reliable.

  • Open Kruise - Automated management of large-scale applications on Kubernetes.

  • Litmus - An open-source Chaos Engineering platform that enables teams to identify weaknesses & potential outages in infrastructures by inducing chaos tests in a controlled way

  • Keda - KEDA is a Kubernetes-based Event Driven Autoscaling component. It provides an event-driven scale for any container running in Kubernetes.

  • Keptn - Keptn is an event-based control plane for continuous delivery and automated operations for cloud-native applications.

  • Flux - Open and extensible continuous delivery solution for Kubernetes. Powered by GitOps Toolkit.

  • CoreDNS - CoreDNS is a DNS server that chains plugins.

As I said before, this is not an exhaustive list. If you want to know about more projects, you can check this list from the CNCF. Not all projects compatible with Kubernetes are listed there, just the ones hosted by the CNCF.


Conclusion

Before writing this article, I didn't have any idea about Kubernetes, except that it is a tool to manage containers. For me, it was difficult to imagine its importance and usefulness without context. The people who wrote the Kubernetes documentation probably know that, which is why they mention how developers deploy their apps traditionally, with virtual machines, and with containers. Without that information, it is difficult to appreciate what Kubernetes does for us.

It is also amazing how containers and Kubernetes have developed their ecosystem. I find it curious that many projects are developed just for use on Kubernetes or are made compatible with it, which I think is thanks to its flexibility and the decision to let users choose which projects to integrate with it.