Helm and Kustomize both address the management of variants, but with different philosophies. Helm’s ambition is to be the de-facto package manager for Kubernetes, with the whole ecosystem needed for that: it not only bundles all resources defining an application in its own package format but also provides the means to share these packages with others and to manage versioning. Like a Linux package manager, it lets you search for specific versions of packages, install them and list what is already installed.

Kustomize, on the other hand, has a much simpler (some might say more limited) approach. It focuses on the simplification of yaml file handling, essentially the generation of customized resource definitions.

What is Kustomize?

Kustomize helps with managing variants of Kubernetes resources without the need for templates. This is perhaps its greatest strength and its greatest weakness at the same time: some features that Helm offers, such as control structures like loops or conditional blocks, cannot be provided by Kustomize due to this limitation. Nevertheless, Kustomize keeps customizing simple by using fully valid yaml structures.

Its main features and limitations will become apparent in the examples below.

Basic example

Kustomize uses an entry file called kustomization.yaml to manage a collection of resource files (normally grouped within a directory). You can manually create this file in the same directory as your resource yaml files or use the following command:

> kustomize create

It will generate a basic file with the following content:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
kustomization.yaml

With this basic file we can’t do anything useful, so let’s add some resources to our example:

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx:1.14.2
    ports:
    - containerPort: 80
pod.yaml
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: app
  ports:
  - port: 80
    targetPort: 80
service.yaml

These resources can then be referenced in the kustomization.yaml.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- pod.yaml
- service.yaml
kustomization.yaml

If we now run kustomize build in the same folder as the kustomization.yaml we get the following result:

> kustomize build

apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: app
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - image: nginx:1.14.2
    name: app
    ports:
    - containerPort: 80

The output is a combined rendered yaml document with all the referenced resources. If we want to apply all resources managed by Kustomize directly to a Kubernetes cluster, we can run:

> kustomize build | kubectl apply -f -

service/app created
pod/app created

This is not very spectacular yet and does not look helpful in comparison to the pure yaml files, but it already brings some advantages: all resources are declared in one place, and the whole set can be built and applied with a single command.
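As an aside, Kustomize is also built into kubectl (since v1.14), so the same result can be achieved without piping the output of the standalone binary:

```shell
# Render and apply in one step (equivalent to kustomize build | kubectl apply -f -)
kubectl apply -k .

# Or only render, without applying:
kubectl kustomize .
```

Note that the Kustomize version bundled with kubectl may lag behind the standalone release.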

As a side note, the resource references in the kustomization.yaml are not limited to local files; they can also point to files available via http(s), e.g. if they are located in a different repository or are provided by a third party.

For example, if we want to include the nginx ingress controller resources to Kustomize, we can do so by adding the reference in our kustomization.yaml like this:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml
kustomization.yaml

When running the build again, we see all the resources defined in the deploy.yaml. This means all the features that we will see later on can be used in the same way for external resources as they are for internal resources.

So let’s see what else we can do with Kustomize.

ConfigMaps and Secrets

ConfigMaps and Secrets can be added just like any other resource but creating a ConfigMap or Secret resource as yaml is quite cumbersome. Kustomize provides some generators to create a ConfigMap or Secret resource for us based on some input key/values.

In the simplest form we can use a generator and just inline the key/values in the kustomization.yaml like this:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
- name: app-cm
  literals:
  - MY_CONFIG_1=config one
  - MY_CONFIG_2=config two
kustomization.yaml

With this configuration, Kustomize will generate a ConfigMap resource when running the build command and we get this result:

> kustomize build

apiVersion: v1
data:
  MY_CONFIG_1: config one
  MY_CONFIG_2: config two
kind: ConfigMap
metadata:
  name: app-cm-624tfbcc9t

You can see that the generated resource has the name defined in the kustomization.yaml, but with an added suffix. This suffix is a hash of the ConfigMap’s content and is activated by default in Kustomize. It helps to avoid accidentally overriding an existing ConfigMap, and it also causes pods linked to the ConfigMap to be restarted on change, because the reference itself changes. If we reference the ConfigMap in other resources, Kustomize takes care that the suffix is added correctly in all these places. Let’s extend the pod.yaml from the first example like this:

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    envFrom:
    - configMapRef:
        name: app-cm
    image: nginx:1.14.2
    ports:
    - containerPort: 80
pod.yaml

and add it as a resource in the kustomization.yaml. If we run the build again, we see that the reference to the ConfigMap is automatically updated in the Pod definition:

> kustomize build

apiVersion: v1
data:
  MY_CONFIG_1: config one
  MY_CONFIG_2: config two
kind: ConfigMap
metadata:
  name: app-cm-624tfbcc9t
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - envFrom:
    - configMapRef:
        name: app-cm-624tfbcc9t
    image: nginx:1.14.2
    name: app
    ports:
    - containerPort: 80

Similarly, we can also create a Secret by using the secretGenerator.

Let’s create a Secret, but instead of inline key/values, we let Kustomize read them from a properties file. The properties file looks like this:

MY_SECRET=very secret
secrets.properties

We can then configure the generator in kustomization.yaml to read the properties file:

secretGenerator:
- name: app-secret
  envs:
  - secrets.properties
kustomization.yaml

When running the build we get the following output

> kustomize build

apiVersion: v1
data:
  MY_CONFIG_1: config one
  MY_CONFIG_2: config two
kind: ConfigMap
metadata:
  name: app-cm-624tfbcc9t
---
apiVersion: v1
data:
  MY_SECRET: dmVyeSBzZWNyZXQ=
kind: Secret
metadata:
  name: app-secret-78585fhggh
type: Opaque
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - envFrom:
    - configMapRef:
        name: app-cm-624tfbcc9t
    - secretRef:
        name: app-secret-78585fhggh
    image: nginx:1.14.2
    name: app
    ports:
    - containerPort: 80

Additionally, there are configuration parameters that we can set to manipulate the generation of ConfigMaps and Secrets. A block can be added to the kustomization.yaml like this:

...
generatorOptions:
  labels: # adds labels to the generated resources
    my.label: mylabel
  annotations: # adds annotations to the generated resources
    my.annotation: myannotation
  disableNameSuffixHash: true # disables the suffix generation
  immutable: true # marks the resources as immutable
kustomization.yaml

With this added configuration, the result then looks like this:

> kustomize build

apiVersion: v1
data:
  MY_CONFIG_1: config one
  MY_CONFIG_2: config two
immutable: true
kind: ConfigMap
metadata:
  annotations:
    my.annotation: myannotation
  labels:
    my.label: mylabel
  name: app-cm
---
apiVersion: v1
data:
  MY_SECRET: dmVyeSBzZWNyZXQ=
immutable: true
kind: Secret
metadata:
  annotations:
    my.annotation: myannotation
  labels:
    my.label: mylabel
  name: app-secret
type: Opaque
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - envFrom:
    - configMapRef:
        name: app-cm
    - secretRef:
        name: app-secret
    image: nginx:1.14.2
    name: app
    ports:
    - containerPort: 80

These generators can be quite handy in daily work as we can share the same properties files between resource generation and other places like the pipeline or scripts.
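For instance, the same properties file that feeds a generator can be sourced by a deployment script, so both stay in sync. A minimal sketch (file name and keys are hypothetical; note that values containing spaces, like the one in secrets.properties above, would need extra shell quoting):

```shell
#!/bin/sh
# Hypothetical properties file, also referenced by a configMapGenerator.
cat > app.properties <<'EOF'
APP_PORT=8080
APP_MODE=staging
EOF

# Export every key/value pair so that pipeline steps and scripts see
# the same values that Kustomize renders into the ConfigMap.
set -a
. ./app.properties
set +a

echo "port=$APP_PORT mode=$APP_MODE"
```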

Override image tags and replica count

Let’s say we use a specific image tag not only in one resource such as our pod.yaml but in several other places and we want to update them all at the same time with the same version. We could do that by manually altering the version in every file. Kustomize provides a simpler solution for that.

Let’s extend our example by adding a deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: another-app
spec:
  selector:
    matchLabels:
      app: another-app
  template:
    metadata:
      labels:
        app: another-app
    spec:
      containers:
      - name: app
        image: nginx
        ports:
        - containerPort: 80
deployment.yaml

and we also update the pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    envFrom:
    - configMapRef:
        name: app-cm
    - secretRef:
        name: app-secret
    image: nginx
    ports:
    - containerPort: 80
pod.yaml

You can see that we are no longer referencing a specific version of the nginx image. If we now want to update the nginx tag to a specific version we can do so by adding the following to the kustomization.yaml

...
images:
- name: nginx
  newTag: 1.21.6
kustomization.yaml
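As an aside, the images transformer is not limited to tags: according to the Kustomize documentation, an entry can also rewrite the image name via newName, or pin a digest via a digest field. A sketch with a hypothetical mirror registry:

```yaml
images:
- name: nginx
  newName: registry.example.com/mirror/nginx  # hypothetical mirror registry
  newTag: 1.21.6
```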

When running the build, Kustomize will then update every usage of the nginx image with the new tag

> kustomize build

apiVersion: apps/v1
kind: Deployment
metadata:
  name: another-app
spec:
  selector:
    matchLabels:
      app: another-app
  template:
    metadata:
      labels:
        app: another-app
    spec:
      containers:
      - image: nginx:1.21.6
        name: app
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - envFrom:
    - configMapRef:
        name: app-cm
    - secretRef:
        name: app-secret
    image: nginx:1.21.6
    name: app
    ports:
    - containerPort: 80

There is a similar solution if we want to update the replica count of a deployment without updating the resource file directly.

...
replicas:
- name: another-app
  count: 3
kustomization.yaml

The output will then be:

> kustomize build

apiVersion: apps/v1
kind: Deployment
metadata:
  name: another-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: another-app
  template:
    metadata:
      labels:
        app: another-app
    spec:
      containers:
      - image: nginx:1.21.6
        name: app
        ports:
        - containerPort: 80

This can help in case we do not have direct access to the resource e.g. if it is loaded from another git repository.

Modify Prefix, Suffix and Namespace

This is a quick one. In a lot of cases, an operations team requires that resource names follow a specific naming convention, with a specific prefix or suffix, and that our resources live in a specific namespace.

Instead of defining this in all resource files, we can use the following configuration in our kustomization.yaml

...
namePrefix: myprefix-
nameSuffix: -mysuffix
namespace: my-namespace
kustomization.yaml

and the output will be:

> kustomize build

apiVersion: v1
kind: Pod
metadata:
  name: myprefix-app-mysuffix
  namespace: my-namespace
spec:
  containers:
  - envFrom:
    - configMapRef:
        name: app-cm
    - secretRef:
        name: app-secret
    image: nginx:1.21.6
    name: app
    ports:
    - containerPort: 80

Add Labels and Annotations

Oftentimes, we also want to add labels or annotations to all of our resources, e.g. to mark them as part of a specific application or team, or to mark them with tags for cost allocation. We have already seen how this can be achieved for generated resources like ConfigMaps or Secrets with the generatorOptions. This can also be done for all other resources directly linked in the kustomization.yaml by adding the following:

...
commonLabels:
  my.label: mylabel
commonAnnotations:
  my.annotation: myannotation
kustomization.yaml

The output is:

> kustomize build

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    my.annotation: myannotation
  labels:
    my.label: mylabel
  name: another-app
spec:
  selector:
    matchLabels:
      app: another-app
      my.label: mylabel
  template:
    metadata:
      annotations:
        my.annotation: myannotation
      labels:
        app: another-app
        my.label: mylabel
    spec:
      containers:
      - image: nginx:1.21.6
        name: app
        ports:
        - containerPort: 80

You can see that the new labels and annotations are added to the metadata block as well as to the selector block. This is the default behaviour of Kustomize, and you should keep in mind that it changes how pods and deployments are matched. Once you apply this Deployment to the cluster, the labels cannot be extended or changed, because the selector block is immutable.
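If selector propagation is not what you want, newer Kustomize versions (v4.1+) offer a labels transformer where it can be switched off explicitly; a sketch:

```yaml
labels:
- pairs:
    my.label: mylabel
  includeSelectors: false # labels go to metadata only, not into selectors
```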

Modify resources for each environment

Until now, we have only managed and modified resources for a single environment but with Kustomize we can use the same set of resources and update them as required for each environment individually.

To do this, we split our resource definitions into shared definitions (base) used by all environments and patches (overlays) used for specific environments. A common directory structure looks like this:

> tree
.
├── base
│   ├── kustomization.yaml
│   └── pod.yaml
└── overlays
    ├── integration
    │   ├── kustomization.yaml
    │   └── patch-pod.yaml
    └── sandbox
        ├── kustomization.yaml
        └── patch-pod.yaml

The base directory contains our normal application. Let’s have a look at the kustomization.yaml in one of the overlays:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patchesStrategicMerge:
- patch-pod.yaml
overlays/integration/kustomization.yaml

It references the base directory as a resource which means all base resources are included. The second part is a patch file which will be merged with a resource defined in base. The patch files look like this in the different overlays:

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    resources:
      requests:
        memory: "64Mi"
        cpu: "500m"
overlays/integration/patch-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    resources:
      requests:
        memory: "32Mi"
        cpu: "250m"
overlays/sandbox/patch-pod.yaml

We can see that these are valid yaml files but not complete pod definitions. Both define a resources block which will be merged with the pod definition in the base directory.

To allow Kustomize to recognize which pod we want to patch, we must provide some information such as apiVersion, kind, metadata.name and additionally in our example spec.containers.name. That’s why the name is the same as the pod in base but we do not have to duplicate any other information from the base resource definition.

So, if we run the build in the overlay directories, we get

> cd overlays/integration
> kustomize build

apiVersion: v1
kind: Pod
metadata:
  name: myprefix-app-mysuffix
  namespace: my-namespace
spec:
  containers:
  - image: nginx:1.21.6
    name: app
    ports:
    - containerPort: 80
    resources:
      requests:
        cpu: 500m
        memory: 64Mi

respectively

> cd overlays/sandbox
> kustomize build

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - image: nginx:1.21.6
    name: app
    ports:
    - containerPort: 80
    resources:
      requests:
        cpu: 250m
        memory: 32Mi

So, we have a simple approach to manage different variants without having to copy the whole structure repeatedly. Its strength is simplicity: the merging process is easy to follow, but it lacks complex modification strategies.
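For modifications that a strategic merge cannot express, such as editing a list entry by index or removing a field, Kustomize also supports JSON 6902 patches. A sketch of what the integration overlay could alternatively contain (the target and path are illustrative):

```yaml
patchesJson6902:
- target:
    version: v1
    kind: Pod
    name: app
  patch: |-
    - op: replace
      path: /spec/containers/0/image
      value: nginx:1.21.6
```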

In an overlay, we can use all the features we have seen before, such as generators, image and replica overrides, name prefixes and suffixes, and common labels.

Additionally, as with the external resources described in the first example, we can also link an external base to our overlays. This allows several overlays in different repositories to share the same base resources, which may be helpful in some scenarios.

Additional CLI commands

The last topic I want to address is how we can set some values dynamically. Until now, we have set all values we want to override statically in the files but in real life there are some values that we only know when a build is running in a pipeline.

A typical case is the image tag that we want to use for the deployment. Let’s say we use the current git commit SHA to tag the docker image of our application. This information is only available when we commit and push the change. So we cannot set the image tag upfront statically in our kustomization.yaml.

For these use cases, Kustomize provides some specific CLI commands that we can use to make some modifications on-the-fly in a pipeline. For the image problem above, we can do the following:

kustomize edit set image app=app:$(git rev-parse --short HEAD)

This dynamically sets the image tag in the kustomization.yaml. Let’s say the commit hash is 68b4c528; the new kustomization.yaml would then look like this

...
images:
- name: app
  newName: app
  newTag: 68b4c528
kustomization.yaml

When we then run the build command, the image tag will be set to the commit hash as needed.

The edit command is quite powerful and is not limited to images.
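For instance, the following subcommands exist as well (invocations sketched from the CLI help; each modifies the kustomization.yaml in the current directory):

```shell
kustomize edit set namespace my-namespace    # set the namespace field
kustomize edit set nameprefix myprefix-      # set the namePrefix field
kustomize edit set replicas another-app=5    # set a replicas entry
kustomize edit add resource deployment.yaml  # append to the resources list
kustomize edit add configmap app-cm --from-literal=MY_CONFIG_1='config one'
```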

So, whilst we do not have the full flexibility and control of a template language, we have quite a powerful toolset to adapt the results as we need them, which covers the vast majority of typical use cases.

Conclusion

Kustomize can be an alternative to Helm when we do not need the surrounding ecosystem that Helm offers. It is typically easier to handle and to understand because we do not have to learn a template language and do not end up with a mixture of yaml structures and template control blocks. It can also help to structure our files if we already work with plain yaml, and its reproducible output makes it a good base for a GitOps approach.