Deploying and managing applications with kubectl is a useful skill to have, but you'll face some problems if you rely on it alone.
- If you use multiple environments (e.g. dev, stage, prod), each of these might have their own values for such things as namespaces and database connection properties. You'll have to either maintain separate versions of your manifest files for each environment, or edit all of them prior to deploying to an environment. This can become a maintenance nightmare.
- There'll likely be certain values that are referenced many times across your manifest files. For example, if you have a microservice that exposes a port, then in order to change this port you'd probably have to edit the manifests of a Deployment, Service, and possibly a ConfigMap. There are many other values like this, such as namespaces, labels, and ConfigMap names. It'd be preferable to have such values stored only one time in one place.
- Your individual images may have versions, but your application as a whole could have versions as well. You may want to designate a set of manifests as version 1.0.0, another as 1.1.0, etc., and then be able to deploy/upgrade/rollback based on version. But there's no way for kubectl to be aware of these versions. You'd have to keep a folder full of manifests for each version.
- What if someone else wants to deploy/upgrade/rollback your application? You could have that person clone a git repository containing your manifest files, but as it turns out there are tools for simplifying this.
The solution for all these problems is Helm. You can think of Helm as a package manager whose packages are bundles of parameterized Kubernetes manifests. In Helm terminology, these packages are called charts.
Installing Helm
If you're using Rancher Desktop, then it will have installed Helm as part of its own installation, just like kubectl. But Helm can be installed separately.
Mac
The recommended way to install Helm is via Homebrew with brew install helm.
Windows
Install Helm by following the instructions on this page. You may then also want to put helm on your PATH.
Creating a Helm chart
In this example, we'll create two Helm charts. The result will be the same microservice backed by Redis that we set up in the Secrets page.
- The first chart will be for Redis. This one chart will provide a template from which to create 3 different environments: dev, stage, and prod.
- The second chart will be the microservice. We'll give it different values depending on whether we want it to connect to the dev, stage, or prod Redis.
Since you'll be recreating the application added in the Secrets exercise, go ahead and delete any Kubernetes objects that remain on your cluster from that exercise.
Creating and installing the Redis chart
Create a new Helm chart from the command line by running helm create MyRedis. This will create the following directory structure:
MyRedis
|--charts
|--templates
| |--tests
| | |--test-connection.yaml
| |
| |--_helpers.tpl
| |--deployment.yaml
| |--hpa.yaml
| |--ingress.yaml
| |--NOTES.txt
| |--service.yaml
| |--serviceaccount.yaml
|
|--.helmignore
|--Chart.yaml
|--values.yaml
These are far more than the minimum files needed for a Helm chart. Helm initializes the chart with an example application; however, this example is perhaps more complicated than necessary for a beginner. Take a look at the files if you want, but then we'll delete some of them, since the app we'll replace them with demonstrates the capabilities of Helm more directly.
Delete everything inside the templates folder, including the tests folder.
You'll now have these files and folders in your chart's directory:
- templates: This folder is where your manifests will go. But it's called "templates" rather than "manifests" because double-curly-brace ({{ }}) notation is used in them to swap in values prior to being sent to the cluster. This is how a single file can be used as the template for the manifests of multiple environments.
- values.yaml: This is where you define the values that are injected into the templates. The templates will contain paths such as {{ .Values.namespace }}, meaning the namespace field specified in files like these will be swapped in at that spot.
- Chart.yaml: Your chart's metadata lives here. This is where you specify your chart's version.
- .helmignore: This file contains patterns that Helm tests other files and folders against; it will ignore any file that matches a pattern. If you're familiar with git's .gitignore, this works the same way.
- charts: Dependency charts can go in this folder. Our example will not use dependencies.
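To make the template/values relationship concrete, here's a minimal sketch (the field and value here are just examples):

```yaml
# templates/example.yml (fragment)
metadata:
  namespace: {{ .Values.namespace }}

# With this entry in values.yaml:
#   namespace: redis-dev
# the rendered manifest contains:
#   metadata:
#     namespace: redis-dev
```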
Create a file with the following contents in your templates folder (the filename doesn't matter; you can call it something like redis-all-in-one-template.yml):
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.namespace }}
---
apiVersion: v1
kind: Service
metadata:
  namespace: {{ .Values.namespace }}
  name: redis-service
spec:
  ports:
  - port: 6379
  selector:
    component: redis
---
apiVersion: v1
kind: Pod
metadata:
  namespace: {{ .Values.namespace }}
  name: redis-pod
  labels:
    component: redis
spec:
  containers:
  - name: redis
    image: {{ .Values.image }}
    ports:
    - containerPort: 6379
This template allows for the insertion of an image for its Pod as well as a Namespace common to all the objects. We'll now add these into values.yaml. Open that file and replace all its contents with the two lines below:
namespace: redis-dev
image: bmcase/loaded-redis:example-dev
Now you're ready to install your chart. The command for doing this is helm install redis-dev MyRedis. Run this command from the directory containing your chart's folder (the "MyRedis" part of the command refers to that folder). You should see output like the following:
$ helm install redis-dev MyRedis
NAME: redis-dev
LAST DEPLOYED: Sun Jun 2 15:25:33 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
The dev environment has been created. Take note that in our helm install we gave the installation the name redis-dev. In Helm terminology, redis-dev is the name we gave to the release. If you later want to modify or uninstall the release, you can refer to it by this name.
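If you want to see exactly what your templates render to, Helm can show you. helm template renders a chart locally without installing anything, and helm get manifest prints the manifests that an existing release actually applied:

```shell
# Render the chart's templates against values.yaml, without touching the cluster
helm template redis-dev MyRedis

# Print the manifests that the redis-dev release applied to the cluster
helm get manifest redis-dev
```

Both are handy for debugging a template before (or after) an install.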
Now for stage and prod. Change values.yaml to have the below contents and run helm install redis-stage MyRedis:
namespace: redis-stage
image: bmcase/loaded-redis:example-stage
Change the values again to the below and then run helm install redis-prod MyRedis:
namespace: redis-prod
image: bmcase/loaded-redis:example-prod
Adding the secrets
Use the method described in the page on Secrets to create Secrets in the default Namespace containing passwords for each Redis environment. The passwords are as follows:
- dev password is password123
- stage password is y3dHr34T4
- prod password is uJB4pYOto
Make sure to give each secret a different name.
(These are not being added via Helm, and you wouldn't want to add Secrets via Helm, since that would involve storing the values of your secrets in the Helm chart.)
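As a sketch, the kubectl commands might look like the following, assuming file-based Secrets as on the Secrets page (the Secret names and the passwords.properties filename are examples; use whatever you chose there):

```shell
# Before each command, recreate passwords.properties so it contains
# that environment's password in the format from the Secrets page
kubectl create secret generic redis-pw-dev-secret --from-file=passwords.properties
kubectl create secret generic redis-pw-stage-secret --from-file=passwords.properties
kubectl create secret generic redis-pw-prod-secret --from-file=passwords.properties
```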
Creating and installing the microservice chart
Create a new chart by running helm create ShoppingCart. As before, delete the entire contents of the templates folder. Then add the following template:
apiVersion: v1
kind: ConfigMap
metadata:
  name: shopping-cart-configmap-{{ .Values.shoppingCartConfigMapVersion }}
data:
  my-conf: |
    server.port={{ .Values.serverPort }}
    redisprops.host={{ .Values.redisHost }}
    redisprops.port={{ .Values.redisPort }}
---
apiVersion: v1
kind: Service
metadata:
  name: shopping-cart-service
spec:
  ports:
  - port: {{ .Values.serverPort }}
  selector:
    component: shopping-cart
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: storefront-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: shopping-cart-service
            port:
              number: {{ .Values.serverPort }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shopping-cart-dep
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      depLabel: shopping-cart
  template:
    metadata:
      name: shopping-cart-pod
      labels:
        depLabel: shopping-cart
        component: shopping-cart
    spec:
      containers:
      - name: shopping-cart-ctr
        image: {{ .Values.imageName }}:{{ .Values.imageVersion }}
        ports:
        - containerPort: {{ .Values.serverPort }}
        volumeMounts:
        - name: properties-volume
          mountPath: /usr/local/lib/override.properties
          subPath: override.properties
        - name: passwords-volume
          mountPath: /usr/local/lib/passwords.properties
          subPath: passwords.properties
      volumes:
      - name: properties-volume
        configMap:
          name: shopping-cart-configmap-{{ .Values.shoppingCartConfigMapVersion }}
          items:
          - key: my-conf
            path: override.properties
      - name: passwords-volume
        secret:
          secretName: {{ .Values.redisPwSecretName }}
          items:
          - key: {{ .Values.redisPwSecretPath }}
            path: passwords.properties
Now go to the root directory of the chart, open values.yaml, and replace its contents with these:
imageName: bmcase/shopping-cart
imageVersion: 0.0.1
replicaCount: 1
These are properties that will be used regardless of which Redis environment and Secret are used. Create another file, values-dev.yaml, which will have the following dev-specific properties:
shoppingCartConfigMapVersion: dev-0.0.1
serverPort: 54321
redisHost: redis-service.redis-dev
redisPort: 6379
redisPwSecretName: redis-pw-dev-secret
redisPwSecretPath: passwords.properties
(Replace redis-pw-dev-secret with whatever you named the Secret containing the dev password, and passwords.properties with whatever you named the file containing the password that you passed to kubectl create secret.)
These values tell the microservice what it needs to know to connect to our Redis dev environment. Note that the Redis host makes use of the redis-dev namespace. We'll end up creating two more of these values files, for stage and prod.
One last time, if you're using Rancher Desktop you should remove any previous Ingress. Then install your chart with helm install shopping-cart ShoppingCart -f ShoppingCart/values.yaml,ShoppingCart/values-dev.yaml. (The -f option here lets us specify both values files.)
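Note that the comma-separated form is equivalent to passing -f multiple times; when the same key appears in more than one file, the rightmost file wins:

```shell
# Equivalent to the comma-separated form; values-dev.yaml overrides
# any keys it shares with values.yaml
helm install shopping-cart ShoppingCart \
  -f ShoppingCart/values.yaml \
  -f ShoppingCart/values-dev.yaml
```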
Keep in mind that this Spring Boot API has some startup time before the endpoint will be available. Once it is, you can test your installation by navigating to http://localhost/api/v1/shopping-cart. If successful, you'll see some dummy data from the dev environment.
Switching an installation to use a different environment
Create another values file, values-stage.yaml, and give it the following contents (changing the values of redisPwSecretName and redisPwSecretPath as appropriate for the Secret you created):
shoppingCartConfigMapVersion: stage-0.0.1
serverPort: 54321
redisHost: redis-service.redis-stage
redisPort: 6379
redisPwSecretName: redis-pw-stage-secret
redisPwSecretPath: passwords.properties
You can modify an existing installation with helm upgrade. Run helm upgrade shopping-cart ShoppingCart -f ShoppingCart/values.yaml,ShoppingCart/values-stage.yaml to switch it to using the stage Redis. Once it's complete, test again with http://localhost/api/v1/shopping-cart.
Finally, create a values-prod.yaml file with the following contents (changing the Secret values as appropriate):
shoppingCartConfigMapVersion: prod-0.0.1
serverPort: 54321
redisHost: redis-service.redis-prod
redisPort: 6379
redisPwSecretName: redis-pw-prod-secret
redisPwSecretPath: passwords.properties
Upgrade it with helm upgrade shopping-cart ShoppingCart -f ShoppingCart/values.yaml,ShoppingCart/values-prod.yaml and test again with http://localhost/api/v1/shopping-cart. This should contain all the data that was present in stage, plus a bit more.
Note: Like kubectl, helm commands can receive kubeconfigs through the KUBECONFIG environment variable or via the --kubeconfig= flag.
In these examples, we've had a single environment for our shopping cart API, and used Helm to point it at one of the three Redis environments. In most projects, you'd probably have multiple environments of the API as well. In that case, you'd still use a common values file plus values files specific to each environment; the difference is that you'd always use values-dev.yaml for dev, values-stage.yaml for stage, and so on.
Upgrading an installation when there's a new version of an image
Let's say you add a new feature to your app. In order to upgrade your installation, you'll:
- Build a new image
- Tag the image
- Push the image to a repository
- Change your chart's values.yaml to reference the new image's tag
- Use helm upgrade to upgrade your installation
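Sketched as commands, the steps above look something like the following (the 0.0.2 tag stands in for whatever version you're releasing):

```shell
# 1-3. Build, tag, and push the new image
docker build -t bmcase/shopping-cart:0.0.2 .
docker push bmcase/shopping-cart:0.0.2

# 4. Edit values.yaml so that imageVersion references the new tag

# 5. Upgrade the installation
helm upgrade shopping-cart ShoppingCart \
  -f ShoppingCart/values.yaml,ShoppingCart/values-prod.yaml
```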
We've been using v0.0.1 of our shopping cart app's image, but there is a v0.0.2 that adds a new endpoint. This endpoint lets you get the shopping cart for a certain session, instead of always having to get the carts for all sessions. This image is already built and available in its Docker Hub repository, so start your upgrade by changing the imageVersion in values.yaml from 0.0.1 to 0.0.2. Then use helm upgrade shopping-cart ShoppingCart -f ShoppingCart/values.yaml,ShoppingCart/values-prod.yaml.
Once it's complete, you can test the new endpoint at http://localhost/api/v1/shopping-cart/s/100000000.
On defining Services and Ingresses in separate Helm charts
In the examples on this page, the Service, Ingress, Deployment, and ConfigMap have all been crammed together in the same Helm chart. It's introduced this way for simplicity's sake. But you can compose your application from multiple Helm charts, and whenever possible, you should. My recommended way to do this is:
- One Helm chart manages your Services
- Another manages your Ingress
- Your secrets are added not via Helm, but manually with kubectl
- Each other logical unit of your application (i.e. each microservice, API gateway, or web server) has its own Helm chart that manages both its Deployments and ConfigMaps
One reason people have for putting all of these in one chart is that they want to push their chart to a Helm repository and then have others be able to pull and deploy the whole application with a single command. This is compelling from the point of view of simplifying the experience, but it can lead to problems if your application is under continuous development.
For example, let's say you have a single chart for your whole application, and you want to make an update to one of its several microservices. You edit some values that only apply to that microservice and use helm upgrade. But, whether due to some error or some interaction you hadn't thought of, this also causes an erroneous change to a Service's template. Now your Service might become inaccessible, and what you thought would be a small change instead becomes downtime for your application.
Sure, if you always perfectly execute everything, you'll avoid mishaps like that. But it's better to just structure your deployment process so as to eliminate the possibility.
- You're rarely going to ever make changes to your Services and Ingresses, so they'll remain untouched and always available.
- Your development will much more often result in changes to your Deployments and ConfigMaps. Separate helm charts will ensure you change only these, and only the ones that actually need to be changed.
Other useful Helm capabilities
There are some additional capabilities Helm has which would be good to know. In order to illustrate these, it will be helpful to fabricate a scenario in which something goes wrong in one of your deployments, causing you to want to roll it back. What we'll do in the below examples is apply an update that purposely configures the deployment to use the wrong host for Redis. Then we'll see how helm commands can be used to quickly revert back to the release that was working.
Options for helm upgrade
- In the values-prod.yaml for the ShoppingCart chart, change redisHost to redis-service.redis-prodA, or any other host that doesn't actually exist. This creates the error that we'll correct in the following exercise.
- Then increment the value of shoppingCartConfigMapVersion to prod-0.0.2. This will result in the name of the ConfigMap being changed in both the ConfigMap and Deployment portions of the template. As mentioned in the page on ConfigMaps, it's important to change the ConfigMap name whenever anything in the ConfigMap itself changes, and the page on Probes walks you through an example of what can go wrong if this is not done.
Deploy it with the below:
helm upgrade --install --atomic --timeout 3m shopping-cart ShoppingCart -f ShoppingCart/values.yaml,ShoppingCart/values-prod.yaml
Once the API has started, you'll notice that requests to the API endpoints result in a nasty 500 error. We'll get to fixing that in a moment, but first consider the new options that were added to helm upgrade above:
--install
helm upgrade will normally fail if the given release doesn't already exist. In such cases, the --install option will have helm upgrade execute as if it were helm install instead. So, it'll work regardless of whether the release is already there.
--atomic
In case a deployment fails, Helm's default behavior is to leave on the cluster any part that had been successfully deployed prior to the failure. This is usually undesirable, since it can mean that part of your application is running the new version you were trying to deploy, while the rest is still on the old version.
The --atomic option will cause Helm, on failure, to roll back all objects to the versions they were at prior to the upgrade, and to remove any new objects that were added. It's recommended that you always use the --atomic option unless you have a specific reason not to.
--timeout 3m
The --timeout <duration> option lets you tell helm upgrade how long it has to try to successfully apply all parts of the chart before it fails the deployment. If this option isn't used, Helm's default timeout is five minutes. During this duration, if a Pod fails to become Ready, it will be brought down and created again, until it succeeds or the duration is up.
Other options
- --kubeconfig=<path/to/kubeconfig>: like all other Helm commands, helm upgrade can use this in the same way kubectl does.
- --set <key>=<value>: this can be used to override any of the values from the various values files. For example, with the ShoppingCart chart you could use --set replicaCount=3 to override the corresponding values.yaml setting.
- --history-max <number>: this changes the maximum number of revisions Helm remembers for a release. Specify 0 to have it remember all revisions. (See below for more on revisions.)
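For example, the --set option could be combined with the upgrade command used earlier to temporarily run extra replicas without editing any values file:

```shell
helm upgrade --install --atomic --timeout 3m shopping-cart ShoppingCart \
  -f ShoppingCart/values.yaml,ShoppingCart/values-prod.yaml \
  --set replicaCount=3
```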
Despite using both --atomic and --timeout, our upgrade was "successfully" deployed and the API is now in an unusable state. This is because the Kubernetes cluster doesn't know anything about the actual behavior of our application, and so it uses the most minimal standard to determine whether a Pod is Ready. The next page, on Probes, will go over how to prevent the cluster and Helm from mistaking a failure for a success.
But first, let's see how to fix the API with just Helm commands.
helm list
helm list shows all the releases in the scoped namespace:
$ helm list
NAME           NAMESPACE  REVISION  UPDATED                               STATUS    CHART               APP VERSION
redis-dev      default    1         2024-06-02 15:25:33.041073 -0500 CDT  deployed  MyRedis-0.1.0       1.16.0
redis-prod     default    1         2024-06-02 15:29:32.948581 -0500 CDT  deployed  MyRedis-0.1.0       1.16.0
redis-stage    default    1         2024-06-02 15:28:56.884545 -0500 CDT  deployed  MyRedis-0.1.0       1.16.0
shopping-cart  default    3         2024-06-02 16:48:13.74729 -0500 CDT   deployed  ShoppingCart-0.1.0  1.16.0
helm history <release-name>
Above we can see that the shopping-cart release is on revision 3. More info about past revisions of a release can be found via helm history <release-name>. Running helm history shopping-cart gives output like the below.
$ helm history shopping-cart
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
1 Sun Jun 2 16:39:13 2024 superseded ShoppingCart-0.1.0 1.16.0 Install complete
2 Sun Jun 2 16:46:28 2024 superseded ShoppingCart-0.1.0 1.16.0 Upgrade complete
3 Sun Jun 2 16:48:13 2024 deployed ShoppingCart-0.1.0 1.16.0 Upgrade complete
- The first revision was the initial deployment.
- The second updated it to use the 0.0.2 image version.
- The third is the one in which the Redis host was changed.
By default, Helm will only remember the 10 most recent revisions of a release. This can be changed by passing --history-max <number> with helm upgrade. Using --history-max 0 results in Helm remembering all revisions.
Warning: If you use the --history-max option and then subsequently run helm upgrade again without it, Helm will revert to remembering only the most recent 10 revisions. If it had been remembering more than 10 revisions, the ones before the most recent 10 will be lost forever.
helm rollback <release-name> <revision>
From the above list of revisions, we know the error was introduced in revision 3, and we'd like to go back to revision 2, the most recent working revision. This can be done with helm rollback <release-name> <revision>. Pass in the parameters according to what you saw in helm history and run it.
$ helm rollback shopping-cart 2
Rollback was a success! Happy Helming!
$ helm history shopping-cart
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
1 Sun Jun 2 16:39:13 2024 superseded ShoppingCart-0.1.0 1.16.0 Install complete
2 Sun Jun 2 16:46:28 2024 superseded ShoppingCart-0.1.0 1.16.0 Upgrade complete
3 Sun Jun 2 16:48:13 2024 superseded ShoppingCart-0.1.0 1.16.0 Upgrade complete
4 Sun Jun 2 16:55:49 2024 deployed ShoppingCart-0.1.0 1.16.0 Upgrade complete
After rolling back, we can see that Helm created a revision 4, which is functionally identical to the revision 2 we wanted to roll back to.
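If you want to confirm what a given revision deployed before rolling back to it, helm get accepts a --revision flag:

```shell
# Show the values and rendered manifests of revision 2
helm get values shopping-cart --revision 2
helm get manifest shopping-cart --revision 2
```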
Another warning: helm rollback also accepts the --history-max option, and if it's not specified, rollback will likewise forget all but the most recent 10 revisions.
helm uninstall <release-name>
Releases can be uninstalled with helm uninstall <release-name>, which will remove every object that was put on the cluster as part of the release. You can think of this as the Helm counterpart to kubectl delete -f.
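By default, uninstalling a release also deletes its revision history. If you'd like to keep the history around for later inspection, helm uninstall accepts a --keep-history flag:

```shell
# Remove the release's objects but keep its revision history,
# so helm history still works afterward
helm uninstall <release-name> --keep-history
```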
Go ahead and run helm uninstall shopping-cart to give yourself a clean slate for the exercise on the next page.