Getting Started Authoring Helm Charts

21 Oct 2016

Helm is the package manager for Kubernetes. In my last post, I showed you how to install a Helm chart. Check that out if you're new to Helm or you just want a refresher.

In this post, I'm going to dig into the Helm chart authoring process to illustrate just how easy it is to get started.

I will build on our previous example: Apache Spark.

Since publishing my last post, the Spark chart is now available in the official Kubernetes chart repo. So I'll use this chart as a working example, and walk you through the bits and pieces that make it up.

Using Helm to Create Charts

Helm provides several tools to streamline the chart authoring experience.

To get things started, you can create a chart scaffold, like so:

$ helm create mychart

Created mychart/

This will create a number of required files and directories for you. Charts may also include optional files like a LICENSE file or a README.md file. Check the docs for more info on what you can include.
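For reference, the scaffold looks something like this (the exact contents vary between Helm versions):

mychart/
├── Chart.yaml
├── charts/
├── templates/
└── values.yaml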

As you author charts, Helm provides a linting feature that can help you find issues with your chart's formatting or templates:

$ helm lint mychart

No issues found

Once you have edited a chart, Helm can package it into a chart archive for you:

$ helm package mychart

Archived mychart-0.1.0.tgz

Digging Through the Spark Chart

Let's grab a copy of the Spark chart and dig around inside it.

Clone the Kubernetes chart repo:

$ git clone https://github.com/kubernetes/charts.git

Now, change into the spark directory:

$ cd charts/incubator/spark

Here's what you will find:

$ tree .
.
├── Chart.yaml
├── README.md
├── templates
│   ├── spark-master-deployment.yaml
│   ├── spark-worker-deployment.yaml
│   └── spark-zeppelin-deployment.yaml
└── values.yaml

1 directory, 6 files

Let's look at these files and directories one by one and see what they do.

Chart.yaml

The Chart.yaml file contains metadata about the chart.

Here's what the Spark chart defines:

name: spark
home: http://spark.apache.org/
version: 0.1.1
description: A Apache Spark Helm chart for Kubernetes. Apache Spark is a fast and general-purpose cluster computing system
sources:
  - https://github.com/kubernetes/kubernetes/tree/master/examples/spark
  - https://github.com/apache/spark
maintainers:
  - name: Lachlan Evenson
    email: lachlan.evenson@gmail.com

The fields should be fairly self-explanatory. Check the docs for a complete reference. At the very minimum, a chart must have a name and version number.

templates

The templates directory is for template files.

These template files are processed according to standard Go and Sprig template conventions. See the Go text/template, Sprig functions, and Helm docs for more info.

The Spark chart has three YAML files:


$ tree templates/
templates/
├── spark-master-deployment.yaml
├── spark-worker-deployment.yaml
└── spark-zeppelin-deployment.yaml

0 directories, 3 files

Each one of these files is a templated Kubernetes manifest.

Let's take a peek at the top of spark-master-deployment.yaml to see what's going on here (the file opens with the Service that exposes the Spark master):

apiVersion: v1
kind: Service
metadata:
  name: "{{ printf "%s-%s" .Release.Name .Values.Master.Name | trunc 24 }}"
  labels:
    heritage: {{.Release.Service | quote }}
    release: {{.Release.Name | quote }}
    chart: "{{.Chart.Name}}-{{.Chart.Version}}"
    component: "{{.Release.Name}}-{{.Values.Master.Component}}"
  annotations:
    "helm.sh/created": {{.Release.Time.Seconds | quote }}
spec:
  ports:
    - port: {{.Values.Master.ServicePort}}
      targetPort: {{.Values.Master.ContainerPort}}
  selector:
    component: "{{.Release.Name}}-{{.Values.Master.Component}}"
[...]

Templated values are surrounded by curly braces. These values are rendered from values predefined by Helm or values supplied in the values.yaml file.

You can also use functions, for example trunc. These functions are added to the template engine as part of the sprig template library.

values.yaml

Helm's templating engine reads the values.yaml file and substitutes that data into your templates. For example: replica count, Docker image tags, CPU limits, and so on.

The Spark chart makes use of YAML scoping to set data for each of the different components that comprise the chart: Master, WebUi, Worker, and Zeppelin.

# Default values for spark.
# This is a YAML-formatted file.
# Declare name/value pairs to be passed into your templates.
# name: value

Master:
  Name: spark-master
  Image: "gcr.io/google_containers/spark"
  ImageTag: "1.5.1_v3"
  Replicas: 1
  Component: "spark-master"
  Cpu: "100m"
  Memory: "512Mi"
  ServicePort: 7077
  ContainerPort: 7077

WebUi:
  Name: spark-webui
  ServicePort: 8080
  ContainerPort: 8080

[...]

If you look back at spark-master-deployment.yaml, you can see, for example, the Master Name value referenced as {{.Values.Master.Name}}.

For more about how values are scoped, and how they work with chart dependencies, check out the docs.

Other Features

Dependencies

If your chart depends on other charts, those dependencies must be placed in a charts directory.

You can do this either by including an unpackaged chart directory structure (like the one we just looked at) or by including a packaged chart archive (created from a chart directory with the helm package command).

The Spark chart doesn't have any dependencies so there is no charts directory.
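For a chart that does have dependencies, the layout might look something like this (the dependency names here are purely illustrative):

mychart/
├── Chart.yaml
├── charts/
│   ├── somedep/              # unpackaged chart directory
│   └── otherdep-1.2.3.tgz    # packaged chart archive
├── templates/
└── values.yaml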

Hooks

Helm also provides a hook mechanism to allow a chart to take specific actions at certain points during its lifecycle.

A hook is just a template that has been attached to a pre- or post-hook for an install, delete, upgrade, or rollback action.

You can attach a template to a hook with an annotation:

apiVersion: batch/v1
kind: Job
metadata:
  name: "{{.Release.Name}}"
  labels:
    heritage: {{.Release.Service | quote }}
    release: {{.Release.Name | quote }}
    chart: "{{.Chart.Name}}-{{.Chart.Version}}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": post-install
[...]

Here, the "helm.sh/hook": post-install line attaches this template to the post-install hook, and the template will be run after the chart has been installed. Without this line, this is just a regular template.

For more information, check out the docs.

Wrap Up

In this post, we walked through a simple chart and learnt what all the pieces do. This should be enough to get you started on a path to writing your first chart.

If you're looking to kickstart your authoring journey, the Bitnami chart repo provides a wealth of information and examples.

And whether you're writing your very first chart or you're a Helm veteran, we always welcome contributions to the kubernetes/charts repo!

The chart authoring community is still young and is developing best practices; we have collected a few tips and tricks so far. We would love to hear about your experiences authoring charts. Pull requests are encouraged!

In my next post, I'll show you how to create your very own charts repo.

Stay tuned!

Posted in Helm
