7 Sep 2016 in Helm, Kubernetes

Trusting Who's at the Helm

Last year at KubeCon in San Francisco, I first learnt about Helm—a sort of Homebrew for Kubernetes. It seemed too good to be true, so I dug deeper. Fast forward to today, and I find myself packaging applications with Helm.

In this post, I'll talk briefly about why Helm is so exciting, and then show you how to install and use one of the packages I wrote.

Why Use a Package Manager?

A team I worked with was deploying various components to Kubernetes, including Zookeeper, etcd, Consul, Cassandra, Kafka, and Elasticsearch. Each of these components was deployed from a manifest file that someone on the team had written by hand and then improved over time. Each change, each improvement, reflected some knowledge or experience the team had gained.

But there are many teams across the world deploying these same components. And let's face it, most deployment situations are similar enough. So each of these teams is, for the most part, duplicating the others' work.

But what if there was a way to avoid that? What if we could organise that collective knowledge and bring people together to collaborate on it?

Read More
6 Sep 2016 in Helm, Announcement

Helm Alpha.4: Persistent Storage, Improved Status, Provenance Files, and API Version Checking

Helm 2.0.0-alpha.4 is the penultimate Helm Alpha release. This new version introduces four major changes:

  • ConfigMap storage is now the default backend. When you create a release with Helm, the release data will be stored in config maps in the kube-system namespace. This means releases are now persistent.
  • helm status got a much-needed overhaul, and now provides lots of useful information about the details of a release.
  • The Tiller server now checks the apiVersion field of manifests before loading a chart into Kubernetes. For example, installing a chart that uses PetSets will now fail early if Tiller detects that the Kubernetes installation does not support PetSets.
  • Helm can now cryptographically verify the integrity of a packaged chart using a provenance file. To this end, helm package now has a --sign flag, and several commands now have a --verify flag.
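
As a sketch, signing a chart at packaging time and verifying it on install looks something like this (the chart and key names here are hypothetical, and key-selection details will vary with your GnuPG setup):

$ helm package --sign --key mykey mychart/
$ helm install --verify mychart-0.1.0.tgz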

In addition to these, the Helm community has added numerous improvements and bug fixes, including:

  • Fixing a bug that prevented some installations of Alpha.3 from executing helm list
  • Limiting the length of a release name
  • Adding an icon: field to Chart.yaml
  • Improving helm lint and helm upgrade
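
With the new icon: field, a minimal Chart.yaml might look like this (the name, version, and URL below are made up for illustration):

name: mychart
version: 0.1.0
description: An example chart
icon: https://example.com/icon.svg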

During this cycle, the Kubernetes Helm community surpassed 50 code contributors, many of whom have contributed multiple PRs! We cannot thank you enough. ❤️

Getting Started

This is the second release of Helm that includes pre-built client binaries.

To get started, download the appropriate client from the release, unpack it, and then initialize Helm:
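For example, on Linux the download-and-unpack step might look like this (the archive name is an assumption; use whatever the release page provides for your platform):

$ tar -zxvf helm-v2.0.0-alpha.4-linux-amd64.tar.gz
$ mv helm /usr/local/bin/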

$ helm init

This will configure your local Helm, and also install and configure the in-cluster Tiller component.

What's Next

The next release, Alpha.5, marks the last major feature release before we focus on stability. You can expect to see helm rollback implemented, along with better version support, and the addition of a dependencies: section in Chart.yaml.

After Alpha.5, the Helm team will focus on closing bugs and improving stability as we sail toward a Helm 2.0.0 final release.

30 Aug 2016 in Kubernetes, logging, Elasticsearch, Kibana

Kubernetes Logging With Elasticsearch and Kibana

Developers, system administrators, and other stakeholders need to access system logs on a daily (sometimes even hourly) basis.

Logs from a couple of servers are easy to generate and handle. But imagine a Kubernetes cluster with many pods running: handling and storing the logs becomes a huge task in itself.

How does the system administrator collect, manage, and query the logs of the system pods? How does a user query the logs of their application, which is composed of many pods, all of which may be restarted or automatically created by Kubernetes?
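
Out of the box, kubectl fetches logs one pod at a time, which is where the pain starts (the pod name below is hypothetical):

$ kubectl logs my-app-3161088099-x7d1q
$ kubectl logs --previous my-app-3161088099-x7d1q

The --previous flag retrieves the logs of the last terminated container in the pod, but once a pod is deleted and rescheduled, those logs are gone unless they were shipped somewhere central.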

Thankfully, Kubernetes supports cluster-level logging.

Cluster-level logging collects the logs of all the pods in the cluster at a centralized location, with options to search and view them.

There are two ways to use cluster-level logging: via Google Cloud Logging, or via Elasticsearch and Kibana.

It is important to note here that the default logging components may vary based on the Kubernetes distribution you are using. For example, if you are running Kubernetes on a Google Container Engine (GKE) cluster, you'll have Google Cloud Logging. But if you're running your Kubernetes cluster on AWS EC2, you'll have Elasticsearch and Kibana available by default.

In this post, we'll focus on Kubernetes on AWS EC2. We'll learn how to set up a Kubernetes cluster on an EC2 instance, ingest logs into Elasticsearch, and view them using Kibana.

But first, let's make some introductions.

Read More
29 Aug 2016 in Workflow, Stackpoint Cloud

Stackpoint Cloud and Deis Workflow

Stackpoint Cloud provides a simple, easy-to-use interface to provision Kubernetes on a variety of clouds. Whether you are on Amazon, Azure, DigitalOcean, or Packet, Stackpoint Cloud is a great way to get started with Kubernetes.

We announced a new collaboration at the beginning of August. Not slowing down, Stackpoint has added support for Kubernetes 1.3.5 and Deis Workflow 2.4.

This guide shows just how easy it is to bring up a Stackpoint Kubernetes cluster with Deis Workflow automatically installed and configured.

Stackpoint and Workflow Overview

Read More
24 Aug 2016 in Workflow

Private Applications on Workflow

Last week, we released Workflow v2.4.

In Workflow v2.4, we added something called deis routing. This feature allows you to add or remove an application from the routing layer. If you remove an application from the routing layer, it continues to run within the cluster while being unreachable from the outside world.

Even when removed from the routing layer, the application is still reachable internally thanks to Kubernetes services. This allows for some pretty neat setups where users can run internal service APIs or backing services like Postgres without exposing them to the outside world.
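
As a sketch, toggling an app's routability from the CLI looks something like this (the app name is hypothetical; check deis help routing for the exact subcommands):

$ deis routing:disable -a internal-api
$ deis routing:enable -a internal-api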

In this post, I'll take a closer look at this new feature and show you how and why you'd want to use it with your application.

Read More