On October 15th, 2015, the project now known as Helm was born. Only one year in, Kubernetes Helm is part of the CNCF, and is marching toward the v2.0.0 release. And in every sense of the word, it is now a community-driven project. But the circumstances behind the creation of Helm read like a script for a Silicon Valley tech comedy.
This is the final Helm alpha release for the 2.0.0 development cycle.
WARNING: This release is not backward compatible. We try hard to avoid compatibility breaks because we know it is an inconvenience to you. But we made a tough call: It is better to correct some of our design flaws now than force everyone to "live with them" for the entirety of the 2.0.0 lifecycle. We apologize for the inconvenience, and we don't plan on any other compatibility breaks between now and 2.0.0.
As always, we want to give a big ❤️ to the community, which has continued to find bugs, submit issues, fix things, and participate in conversations. If you'd like to be a part of the community, we invite you to join the Kubernetes Helm Slack channel, drop in on a Thursday Developer Call, and jump in on the GitHub issue queues.
Please let us know if you find any issues with those guides. We want Helm installation to be an easy and reliable process.
When upgrading to a new version of Tiller, we recommend using kubectl to delete the old one: kubectl delete deployment --namespace kube-system tiller-deploy. This will not delete your releases. From there, running helm init will update you to the latest.
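The full upgrade sequence looks like this, assuming Tiller was installed with the default deployment name in kube-system (a sketch; run against your own cluster context):

```shell
# Remove the old Tiller deployment. Release data stored in
# ConfigMaps is left untouched, so your releases survive this.
kubectl delete deployment --namespace kube-system tiller-deploy

# Re-run init to install the latest Tiller into the cluster
# and refresh the local Helm environment.
helm init
```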
For macOS
We consider Alpha.5 to be feature complete. The developers are now working on fixing bugs, improving stability, and building some great docs. This is a great time to jump in.
Our next release will be v2.0.0-beta.1. And we will be rolling out betas until we feel like the tool is ready for production use.
Last year at KubeCon in San Francisco, I first learnt about Helm—a sort of Homebrew for Kubernetes. It seemed too good to be true, so I dug deeper. Fast forward to today, and I find myself packaging applications with Helm.
In this post, I'll talk briefly about why Helm is so exciting, and then show you how to install and use one of the packages I wrote.
Why Use a Package Manager?
A team I worked with was deploying various components to Kubernetes, including ZooKeeper, etcd, Consul, Cassandra, Kafka, and Elasticsearch. Each of these components was deployed using a manifest file that someone on the team had written by hand, and these manifest files had been improved over time. Each change and each improvement reflected some knowledge or experience the team had gained.
But there are many teams across the world deploying these same components. And let's face it, most deployment situations are similar enough. So each of these teams is, for the most part, duplicating the others' work.
But what if there was a way to avoid that? What if we could organise that collective knowledge and bring people together to collaborate on it?
Helm 2.0.0-alpha.4 is the penultimate Helm Alpha release. This new version introduces four major changes:
ConfigMap storage is now the default backend. When you create a release with Helm, the release data will be stored in config maps in the kube-system namespace. This means releases are now persistent.
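Because release records now live in ConfigMaps in the kube-system namespace, you can peek at them with plain kubectl (a sketch; the exact naming of the ConfigMaps is an implementation detail and may change between alphas):

```shell
# List the ConfigMaps in kube-system; Tiller's release records
# appear here alongside other system configuration.
kubectl get configmaps --namespace kube-system
```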
helm status got a much-needed overhaul, and now provides lots of useful information about the details of a release.
The Tiller server now checks the apiVersion field of manifests before loading a chart into Kubernetes. Now, for example, a chart that uses PetSets will stop early if it detects that the Kubernetes installation does not support PetSets.
Helm can now cryptographically verify the integrity of a packaged chart using a provenance file. To this end, helm package now has a --sign flag, and several commands now have a --verify flag.
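In practice that workflow looks something like this (the chart name and signing key are placeholders; signing assumes a local GnuPG keyring):

```shell
# Package and sign a chart, producing mychart-0.1.0.tgz plus a
# mychart-0.1.0.tgz.prov provenance file.
helm package --sign --key 'John Doe' mychart/

# Verify the packaged chart against its provenance file
# before trusting or installing it.
helm verify mychart-0.1.0.tgz
```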
In addition to these, the Helm community has added numerous improvements and bug fixes, including:
Fixing a bug that prevented some installations of Alpha.3 from executing helm list
Limiting the length of a release name
Adding an icon: field to Chart.yaml
Improving helm lint and helm upgrade
During this cycle, the Kubernetes Helm community surpassed 50 code contributors, many of whom have contributed multiple PRs! We cannot thank you enough. ❤️
This is the second release of Helm that includes pre-built client binaries.
To get started, download the appropriate client from the release, unpack it, and then initialize Helm:
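On Linux, for example, the steps look roughly like this (the download URL is illustrative; grab the actual link for your platform from the release page):

```shell
# Download and unpack the pre-built client binary.
curl -LO https://example.com/helm-v2.0.0-alpha.4-linux-amd64.tar.gz
tar -xzf helm-v2.0.0-alpha.4-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm

# Configure local Helm and install Tiller into the cluster
# pointed at by your current kubectl context.
helm init
```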
This will configure your local Helm, and also install and configure the in-cluster Tiller component.
The next release, Alpha.5, marks the last major feature release before we focus on stability. You can expect to see helm rollback implemented, along with better version support, and the addition of a dependencies: section in Chart.yaml.
After Alpha.5, the Helm team will focus on closing bugs and improving stability as we sail toward a Helm 2.0.0 final release.
Helm v2.0.0-Alpha.3 has many new features and improvements. It marks our biggest release yet. The Helm team owes a tremendous debt of gratitude to our outstanding community, which has been a source of ideas, issues, fixes, features, and encouragement. Thank you!
Alpha.3 also includes the first set of released binaries, which means you no longer have to compile the project to start kicking the tires. Check out the "Getting Involved" section for details.
The headliner features are:
A new helm upgrade command can upgrade releases in place. We suggest using Kubernetes Deployments to get the most out of in-place upgrades.
A vastly improved helm status command shows you information about the current state of your releases.
Helm now has commands for getting information about a release using helm get, helm get values, helm get hooks, and helm get manifest.
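For a release named my-release (a placeholder), the new family of commands looks like:

```shell
helm get my-release           # everything Tiller knows about the release
helm get values my-release    # the values used to render the chart
helm get hooks my-release     # the hooks attached to the release
helm get manifest my-release  # the generated Kubernetes manifest
```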
By default, releases are still stored in memory. But they may now optionally be stored in Kubernetes ConfigMaps instead. In subsequent releases, ConfigMaps will become the default.
The new helm inspect command allows users to preview chart information before installing a chart: helm inspect kube-charts/alpine-0.1.0
Tiller now installs into the kube-system namespace, but can install charts into any namespace it has write access to.
Helm supports hooks for pre-install, post-install, pre-upgrade, post-upgrade, pre-delete, and post-delete. With these, you can now attach Kubernetes jobs to release events.
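A minimal sketch of a pre-install hook: a Kubernetes Job annotated so Tiller runs it before the release's other resources are created. The chart name, Job details, and image are placeholders; the "helm.sh/hook" annotation key follows Helm's hook convention.

```shell
# Write a chart template for a Job that runs as a pre-install hook.
# The "helm.sh/hook" annotation tells Tiller when to execute it.
mkdir -p mychart/templates
cat > mychart/templates/pre-install-job.yaml <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-install-setup
  annotations:
    "helm.sh/hook": pre-install
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: setup
          image: alpine:3.4
          command: ["echo", "preparing release"]
EOF
```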
This release marks the second of four planned Alpha releases. We have made a lot of progress (and a lot of changes) since Alpha.1. Here are the highlights:
helm lint has gotten a major overhaul. The core architecture is now considered stable, and the linter team is transitioning focus to (a) adding rules, and (b) integrating linting into the chart development workflow.
Helm's server-side Tiller component can now be installed into any namespace. Alpha.1 restricted Tiller to the helm namespace. Now Tiller is installed into the user's configured namespace (usually default) by default, but can be installed into any namespace.
Values files are now in YAML format (bye-bye TOML). We're experimenting with support for globally scoped variables.
Templates now support more functions (Sprig 2.3). We still have a few big changes coming to the template system, but the new docs/examples/nginx template provides a better example of how we envision template support.
helm install can now install directly from a chart repository.
Helm charts now support .helmignore files, which are similar to .gitignore files, providing a convenient way to tell Helm about files that should not be packaged into the chart.
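A chart's .helmignore uses the same glob-style patterns you would expect from .gitignore. The chart name and patterns below are examples:

```shell
# Create a .helmignore so development clutter stays out of the
# packaged chart archive.
mkdir -p mychart
cat > mychart/.helmignore <<'EOF'
# Patterns to exclude when running helm package
.git/
*.swp
*.bak
*~
EOF
```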
Tiller has liveness and readiness probes for Kubernetes.
This release marks the first in the Helm 2 line. It is an unstable Alpha-quality release that supports the core functionality for the Helm 2 platform.
Helm 2 has two major components:
The Helm client, whose responsibility is to provide tooling for working with charts and uploading them to the server.
The Tiller server, whose responsibility is to manage releases into the Kubernetes cluster.
Additionally, Helm can fetch charts from remote repositories. A Helm 2 chart repository is simply an HTTP server capable of serving YAML and TGZ files.
As a developer preview, the Alpha 1 release does not have a binary build of its components. The quickest route to get started is to fetch the source, and then run make bootstrap build. To start using Helm, use helm init.
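Concretely, the from-source path looks like this (assuming Go and the usual build tooling are installed; the binary output path may differ depending on your setup):

```shell
# Fetch the source and build the client and Tiller.
git clone https://github.com/kubernetes/helm.git
cd helm
make bootstrap build

# Initialize Helm against your current kubectl context.
bin/helm init
```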
Stay in touch
To keep up with news on Helm, join the #Helm channel on the Kubernetes Slack, or join our weekly developer call every Thursday at 9:30-10:00 Pacific.
Helm 0.3.0 was released last week, and 0.3.1 was released this week with a few minor bug fixes.
The 0.3 release line of Helm introduces several improvements to linting. It also introduces two new Helm commands: helm generate and helm template. These pave the way for generic template support in Helm, and provide a plugin architecture for implementing arbitrary template engines. Also, Helm charts now have a source: field for specifying a URL to the source used to create the chart's resources.
In addition to these new features, many bugs in the 0.2 release line have been found and fixed. Several parts of the codebase have been refactored for easier maintainability and better testing.
Earlier this week, Deis released Helm—the package manager for Kubernetes. Conceptually similar to Homebrew, Helm makes it easy to install standard workloads on Kubernetes.
But... Kubernetes is a container platform, so why does it need a package manager?
Perhaps looking at OS-level package managers (like Homebrew, apt, yum/rpm, ports, and so on) will help explain the situation.
Why use apt, yum, or Homebrew?
Let's take a typical scenario.
I was sitting at the terminal in the chilly server room. I tried the command again: ./configure && make. Page after page of information scrolled across the screen. Apache httpd was building. I flipped open my book to read a few pages while I waited. Several minutes later, I saw the make command fail. I just wanted a stock Apache httpd server, but I couldn't figure out the right combination of build flags, nor could I find and install all of the correct dependencies.
In frustration, I gave up and tried the radical approach: I switched operating systems.
Then when the time came to install Apache httpd, I simply typed apt-get install apache. And hey presto! It worked. If I needed to make changes, I could head to the /etc/httpd directory and configure away. But even prior to that I had a functioning web server. Apache httpd was working right out of the box.
We at Deis are really excited about Kubernetes. In fact, we're hard at work building Deis v2 on top of a Kubernetes base. During this integration, we developed a tool that we think seasoned Kubernetes users will enjoy, and newcomers can use as an onramp for running containerized applications. And today, we're thrilled to announce this new tool.
We call it Helm, and it's the first package manager for Kubernetes.
Inspired by Homebrew, apt, and npm, Helm is a tool for working with Kubernetes-powered applications. It works like this:
A Helm package is bundled up as a chart.
The charts are collected together in a repository that you can search. Helm uses git under the hood for storing and organizing chart data.
Using the helm tool, you can find, customize, manage and install these charts.
Helm makes it easy to run apps and services inside Kubernetes.
We built Helm to help with two things.
First, we want to make it simple to share information about running common apps and services inside of Kubernetes. When we all share our charts, the Kubernetes community at large learns how best to work with Kubernetes. We share information and discover great ways of doing things. And we also make it easier for newcomers to get going. Helm is about growing Kubernetes.
Second, we want to make it easier for teams to manage their Kubernetes manifest files. So we created a tool that eases the process of collaborating on and keeping track of your team's charts. Start with widely available charts, customize them to your team's needs, and then store them in your own version control. Helm is about helping teams.
Kubernetes is a powerful platform. We want to make it easy to manage the apps and services you deploy.