Core Concepts of Kubernetes and Containers

Part 1 of Kubernetes Series with CTO Leonardo Murillo

What do Star Trek, Containers and Kubernetes have in common?

Kubernetes is a container orchestration platform. What does that mean? Think about Google and the incredible scale at which they need to deploy and run their systems. Google's internal workload orchestration system was named Borg, after the Star Trek collective, and the project that became Kubernetes was originally code-named Seven of Nine, after the Borg character. Google built Borg to run everything in containers, and Kubernetes evolved from that work. What Kubernetes does very well is let a large number of isolated machines be treated as a single pool of compute capacity.

Before Kubernetes, There Were Containers

While the concept of containers has been around since the 1970s, it is only in recent years that Kubernetes, an extension of containerization, has made clustering practical at large scale.

Containers were revolutionary because they gave developers an easy way to package a deployment into an immutable image with a known state, using very simple tooling.

Containers Offer Immutable Delivery

Immutable delivery means that you have confidence that the deployment you are handing off to the next person is exactly and precisely what you expect. An immutable image is a package that contains a complete version of your application and all of its dependencies. The immutable image contains everything that your application needs to run in its own isolated space. To learn more, you can visit this link: https://www.docker.com/resources/what-container. 

A container image is built from a Dockerfile. The Dockerfile contains the steps that are executed to build the image, and each step produces its own layer. Each layer represents how the file system looked at that particular point in the build. Together, the layers are a frozen image of the file system. The tremendous benefit of container layers is that they give you an immutable snapshot of the file system.
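As a rough sketch, here is what a minimal Dockerfile might look like for a hypothetical Node.js web application; the base image, file names, and commands are illustrative assumptions, not part of this article. Each instruction produces one layer of the resulting image:

    # Base image: the first layer of the file system.
    FROM node:18-alpine

    # Install dependencies; each instruction below adds another layer.
    WORKDIR /app
    COPY package.json package-lock.json ./
    RUN npm ci

    # Copy the application source code into the image.
    COPY . .

    # How a container instance should start when the image is run.
    CMD ["node", "server.js"]

Running docker build -t my-web-app:1.0 . produces the immutable image, and docker run my-web-app:1.0 starts an identical instance of it every time.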

When you run a container image, you are loading all of these layers into a runtime environment, so you are actually running an instance of the container image. Because the image is immutable, every container instance you create from it is identical.

In the past, when people did configuration management, they tweaked files or made changes on the file system to make it match a desired state. With containers, the configuration is not managed after the fact; it is baked in as layers. A container image cannot be modified, so every time you run it, you have confidence that the instance is an identical copy.

Containers are Stateless

This means that the moment you stop a container instance, anything stored inside it is lost. That may sound like a limitation, but it is actually very valuable: wherever the container is deployed, you can expect a known, fixed state. You always know the exact state of your code and your environment, even at large scale.

Then Along Came Kubernetes, the Control Plane and the Kubelet 

Kubernetes is an extension of containerization technology. Kubernetes makes many different nodes work together. It has a control plane, which is a layer of abstraction used to deploy containers. Underneath this control plane, you can have any number of nodes. Nodes provide CPU and memory capacity. On each one of these nodes, Kubernetes installs an agent called a kubelet, and that kubelet communicates via API calls to the control plane. Kubernetes uses this information from the kubelet to determine where it is best to deploy the various images your solution requires. 

A simplified overview of the Kubernetes platform.

Each kubelet reports to the Kubernetes control plane on the capacity, health, and available memory of its node. Using this information, Kubernetes finds a node to run a given container, and that node fetches the container image. You can deploy any number of containers using the same image. The deployment is almost instantaneous, because everything needed exists within the container image. The health information reported to the control plane by the kubelet is also used by Kubernetes when deploying pods across any number of nodes and when directing traffic.
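To make this concrete, the information the kubelet reports is visible on the Node object itself, for example with kubectl get node <node-name> -o yaml. The trimmed sketch below shows the kind of fields involved; the numbers are made up for illustration:

    status:
      capacity:              # total resources on the machine
        cpu: "4"
        memory: 16252Mi
        pods: "110"
      allocatable:           # what remains available for pods after system overhead
        cpu: 3920m
        memory: 15000Mi
      conditions:
        - type: Ready        # health signal the control plane uses when scheduling
          status: "True"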

Kubernetes Deployment

Suppose your ReplicaSet specifies 10 replicas, and each pod runs a single web container. In the manifest file, you also specify that each container should get 200 MB of memory and a certain amount of CPU capacity.

During deployment, the Kubernetes control plane checks the manifest and determines which nodes have enough capacity for the desired solution. It then picks the best candidate nodes to run those 10 pods.
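As a sketch, a Deployment manifest for this scenario might look roughly like the following; the names, image, label values, and CPU figure are assumptions made for illustration:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 10                # the ReplicaSet created by this Deployment keeps 10 pods running
      selector:
        matchLabels:
          service: web
      template:
        metadata:
          labels:
            service: web          # label used later by the Service to find these pods
        spec:
          containers:
            - name: web
              image: my-registry/my-web-app:1.0   # hypothetical container image
              resources:
                requests:
                  memory: "200Mi" # roughly the 200 MB of memory mentioned above
                  cpu: "250m"     # an example CPU request
              ports:
                - containerPort: 8080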

Kubernetes deployment can be simplified using Helm, a package manager for Kubernetes applications.

Kubernetes and Traffic Direction

Kubernetes handles directing traffic to your service. Here is a basic scenario. You want to deploy a set of web containers. In your manifest, you declare a Kubernetes Deployment whose ReplicaSet specifies:

  1. Your custom web container image you want deployed in every pod.
  2. The number of pod replicas you want to deploy.
 

These are deployed and labeled. Now you have, let's say, 10 pods running a web server.

However, they are not yet reachable as a unit, and they are not reachable from the outside world. The solution is to deploy a Service and use labels. Labels are key-value pairs that you can define arbitrarily, for example, “service: web”. The Service matches the pods by a label and allows you to address them all as a unit.

For example, you can deploy a Service that matches any pod carrying a label with the key “service” and the value “web”. That label allows you to refer to all of those pods together through the Service. The Service has meaning only inside the cluster; it is not yet exposed to the outside world.
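A Service manifest for that example could be sketched roughly as follows; the names and ports are illustrative assumptions:

    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector:
        service: web       # matches every pod labeled service: web
      ports:
        - port: 80         # port the Service exposes inside the cluster
          targetPort: 8080 # port the web container listens on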

Next, you want to route traffic from the outside world to a Service. This is where an Ingress comes in. The Ingress manages traffic to the Service. The Service ends up routing traffic to any one of the pods that match the selecting label. With those three objects, the ReplicaSet, the Service, and the Ingress, you have effectively built a load balanced cluster of HTTP servers accessible to the outside world that is both distributed and self-healing. 
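An Ingress for this setup might be sketched roughly like this; the host name is a placeholder, and a real cluster also needs an ingress controller installed for the Ingress to take effect:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web
    spec:
      rules:
        - host: web.example.com        # placeholder host name
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: web          # the Service defined above
                    port:
                      number: 80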

If a container dies, your ReplicaSet will guarantee that Kubernetes will look for a node that has available capacity to maintain your desired replica count. So if one of the “web” labeled pods is lost, Kubernetes will know to deploy a pod somewhere else. If one of your nodes dies, Kubernetes will handle the work of getting all the pods that were deployed on that node to another healthy machine with enough capacity.

Kubernetes handles the routing of your traffic into your pods and the name resolution of your services within your cluster. Kubernetes has what is essentially a DNS service built in, so it handles all of these things for you. All you have to do is declare these resources in the manifest. The manifest is a set of YAML declarations describing what you want deployed. The manifest eliminates the complexity of having to do all the setup and configuration manually.

Key Kubernetes Components

Control Plane
On top of the nodes, there is a set of master nodes that acts as a control plane and presents an API for the various worker nodes. The control plane plays a role in deployments and in directing traffic. The control plane selects desired nodes and deploys pods.
Kubelet
A kubelet is an agent that runs on each node. Each kubelet makes sure that the containers assigned to its node are running. The kubelet can launch and kill pods, depending upon the health of the node and the manifest.
Node
A worker machine, which can be a virtual machine or bare metal, running the kubelet and the kube-proxy. The node is available to the Kubernetes master, which deploys resources onto it.
Pod
Inside a node, you have one or more pods. The pod is the smallest deployable unit in Kubernetes and the most basic building block of a deployment. Think of a pod as the unit of compute workload. The pod indicates which containers you are going to run as a unit (a minimal pod manifest appears after this list).
Container 
Inside of a pod, you have one or more containers running. Containers are the running instances of your application. The kubelet ensures the containers are running and healthy.
Manifest file
A Kubernetes file, written in YAML, that indicates which objects to use and their attributes. It declares which objects you want Kubernetes to deploy for you.
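As a minimal illustration of these building blocks, here is a sketch of a bare Pod manifest with a single container (the names and image are assumptions):

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-pod
      labels:
        service: web
    spec:
      containers:
        - name: web
          image: my-registry/my-web-app:1.0   # hypothetical image
          ports:
            - containerPort: 8080

In practice you rarely create bare pods like this; instead you declare a Deployment or ReplicaSet and let Kubernetes create and replace the pods for you.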
A Dockerfile versus a Kubernetes manifest file:

  • A Dockerfile is a text file with commands to assemble the container image, such as instructions about the file system and the files needed to run the application. The Dockerfile is part of containerization in general and is not unique to Kubernetes.
  • A manifest is a Kubernetes file, written in YAML, which declares which objects you want to deploy on your Kubernetes cluster based on what your solution requires. In your manifest file you will likely have Services, Deployments, Ingresses and other such resources. Kubernetes takes care of creating any lower level resources necessary to maintain your manifested, desired state. 

Kubernetes Objects

Below are different objects, declared in the manifest file, which tell Kubernetes which resources you need for your solution to be operational. This includes the creation of the underlying pods and the objects that will manage your pods.

ReplicaSet
Pods are typically created by a ReplicaSet. A ReplicaSet declares the desired number of pod replicas, and Kubernetes works to keep that many running. For example, a ReplicaSet that asks for 10 replicas of your container image causes the control plane to distribute those 10 pods across the available nodes. The ReplicaSet works at the pod level and delegates local container restarts to the kubelet.
Deployment 
Manages the lifecycle of a ReplicaSet.
StatefulSet
Manages stateful applications, giving each pod a stable identity and stable storage.
Service
A Service gives you one entry point into a variable number of pods. Services can be exposed outside the cluster through an external load balancer; managed Kubernetes offerings on the major public clouds automate the creation of their native load balancers for such Services.
Ingress
Manages access to Services from outside the cluster, typically routing HTTP traffic through a single load balancer.
CRD 
A CustomResourceDefinition (CRD) lets you define your own object types, which makes Kubernetes very extensible. Using CRDs, anyone can build tooling and extend the platform itself very easily; a custom load-balancing engine is one example (a sketch of a CRD appears after this list).
Labels 
Labels are key-value pairs attached to objects; selectors match on them to tie related objects together, for example a Service selecting its pods.
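As a purely hypothetical sketch, a CRD introducing a custom object type for a load-balancing policy could look something like the following; the group, names, and fields are invented for illustration:

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: loadbalancerpolicies.example.com   # must be <plural>.<group>
    spec:
      group: example.com
      scope: Namespaced
      names:
        plural: loadbalancerpolicies
        singular: loadbalancerpolicy
        kind: LoadBalancerPolicy
      versions:
        - name: v1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
              properties:
                spec:
                  type: object
                  properties:
                    algorithm:
                      type: string   # e.g. round-robin or least-connections

Once such a CRD is applied, LoadBalancerPolicy objects can be declared in manifests just like built-in resources, and a custom controller can watch them and act on them.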

For more information about Kubernetes, please contact Qwinix.
