The portability and isolation of containers simplify the deployment of applications by providing sealed, reproducible environments. But many applications are best broken into parts that need multiple containers, such as in a microservices architecture. In such an environment, a multitude of containers faces many of the same issues as a multitude of non-containerized applications. These include:

  • What if a container in the application goes down or fails a health check? Wouldn't it be great if that container could restart itself?
  • What if one microservice needs to be scaled up (i.e., multiple instances of the same container need to be run to meet the needs of a heavy load), but none of the other containers need to be scaled up?
  • What if some of the containers share pieces of configuration that all need to be updated if a change is made?
  • How are containers supposed to communicate with each other, and how is that communication configured?
  • How should logs from each container be managed?
  • How can we distribute containers across separate machines, geographically disparate data centers, or a cloud?

These are all problems for which Kubernetes provides the answer.

Kubernetes exists as a cluster made up of a number of machines (nodes, in Kubernetes terminology). When you tell Kubernetes to deploy one of your containers, it places that container on one of these nodes. To facilitate the management of these containers, each node also runs some Kubernetes management processes. Knowing about these processes isn't important when it comes to deploying applications to Kubernetes; in other documentation you may see them referred to as the kubelet and kube-proxy.
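
To get a feel for what a cluster looks like, here is a minimal sketch that lists its nodes using the official Kubernetes Python client (the kubernetes package). It assumes the package is installed and that a local kubeconfig file points at a running cluster:

    from kubernetes import client, config

    # Load credentials and the cluster address from the local kubeconfig
    # file (~/.kube/config by default).
    config.load_kube_config()

    v1 = client.CoreV1Api()

    # Print the name of every node (machine) registered in the cluster.
    for node in v1.list_node().items:
        print(node.metadata.name)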

The process of deploying your app will involve creating and modifying many pieces of configuration on the cluster. These pieces of configuration are called objects. Objects are designed to make Kubernetes very modular: they can be swapped in and out, or added and removed, with little or no downtime for your application.
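
As an illustration, here is a hedged sketch of creating one such object with the same Python client: a Pod that runs a single nginx container. The names "hello-pod" and "web" are arbitrary examples, and the structure mirrors the configuration you would otherwise write by hand:

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Describe the desired object: a Pod running one nginx container.
    # The names "hello-pod" and "web" are placeholders for this example.
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="hello-pod"),
        spec=client.V1PodSpec(
            containers=[client.V1Container(name="web", image="nginx")]
        ),
    )

    # Ask the cluster to create the object in the "default" namespace.
    v1.create_namespaced_pod(namespace="default", body=pod)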

The Kubernetes cluster has an API server that exposes a REST API. This is how it receives your updates to its objects.
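
If you want to see that REST API directly, a request like the following sketch lists the Pod objects in the default namespace. The server address, bearer token, and CA certificate path here are placeholders you would replace with your own cluster's details:

    import requests

    # Placeholder values for your own cluster.
    API_SERVER = "https://203.0.113.10:6443"
    TOKEN = "<service-account-bearer-token>"

    resp = requests.get(
        f"{API_SERVER}/api/v1/namespaces/default/pods",
        headers={"Authorization": f"Bearer {TOKEN}"},
        verify="/path/to/cluster-ca.crt",  # the cluster's CA certificate
    )

    # Each item in the response is one Pod object.
    for item in resp.json()["items"]:
        print(item["metadata"]["name"])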

Elsewhere you may see the word "resources" used interchangeably with "objects". This document uses the term "objects" to avoid ambiguity with the "resources" of Kubernetes' API. But be aware that what this document calls "objects" you will sometimes see and hear called "resources".

Note: You may often see Kubernetes abbreviated as k8s. This is because Kubernetes starts with a "k", ends with an "s", and has 8 letters in between.