3 Jun 2016 in Deis Workflow, Series: Deis Workflow Basics

Deis Workflow Basics, Part Two

This is the second post in a series on Deis Workflow, the second major release of the Deis PaaS. Workflow builds on Kubernetes and Docker to provide a lightweight PaaS with a Heroku-inspired workflow.

In part one, we gave a brief conceptual overview, including Twelve-Factor apps, Docker, Kubernetes, Workflow applications, the "build, release, run" cycle, and backing services. We also explained the benefits of Workflow. In summary:

Workflow is fast and easy to use. You can deploy anything you like. Releases are versioned and rollbacks are simple. You can scale up and down effortlessly. And it is 100% open source, built on the latest distributed systems technology.

In this post, we'll take a look at the architecture of Workflow and how Workflow is composed from multiple, independent components.

Read More
31 May 2016 in Deis Workflow, Series: Deis Workflow Basics

Deis Workflow Basics, Part One

Deis Workflow is an open source Platform as a Service (PaaS) that makes it easy to deploy and manage applications on your own servers. Workflow builds on Kubernetes and Docker to provide a lightweight PaaS with a Heroku-inspired workflow.

Deis Workflow is the second major release of the Deis PaaS.

In this miniseries we'll go over the basics of Deis Workflow. That includes: why you'd want to use Workflow, a conceptual overview, a look at architecture and components, and finally, how to install Workflow on a Kubernetes cluster.

Why Use Workflow?

There are five main reasons you might want to use Workflow:

  1. Fast and easy. Supercharge your team with a platform that deploys applications as fast as you can create them. No Kubernetes knowledge is needed.

  2. You can deploy anything. Deploy any language, framework, or Dockerfile with a simple git push. Use deis pull to move existing Docker images through your team's development CI/CD pipeline.

  3. Scale effortlessly. Your application can be scaled up or down with a single command that handles everything for you: deis scale.

  4. Open source. Avoid vendor lock-in with an open source platform that runs on public cloud, private cloud, or bare metal. Contribute to the project if you want to add features and help us set direction.

  5. The latest technology. Benefit from the latest distributed systems technology thanks to a platform that is constantly evolving.
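Taken together, these points boil down to a short command-line workflow. As a rough sketch (the app name, image, and scale values here are hypothetical), deploying and scaling with Workflow looks like:

```shell
# Create an application on the Workflow controller (app name is hypothetical)
deis create myapp

# Deploy: push your code and Workflow builds and releases it
git push deis master

# Or move an existing Docker image through your pipeline instead
deis pull myorg/myapp:v1.2.3 -a myapp

# Scale web processes with a single command
deis scale web=3 -a myapp
```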

Read More
27 May 2016 in Kubernetes, Intro

Getting Started With Kubernetes

Kubernetes is a very popular open source container management system.

The goal of the Kubernetes project is to make managing containers across multiple nodes as simple as managing containers on a single system. To accomplish this, it offers features such as traffic load balancing, self-healing (automatic restarts), scheduling, scaling, and rolling updates.

In this post, we'll learn about Kubernetes by deploying a simple web application across a multi-node Kubernetes cluster. Before we can start deploying containers, however, we first need to set up a cluster.
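As a preview of the kind of commands involved once a cluster is up (the names and image here are illustrative, not from the post), deploying a simple web app looks roughly like:

```shell
# Launch a simple web app with two replicas (image and name are illustrative)
kubectl run hello-web --image=nginx --replicas=2

# Expose it through a load-balanced service on port 80
kubectl expose deployment hello-web --port=80 --type=LoadBalancer

# Check that the pods are running across the cluster's nodes
kubectl get pods -o wide
```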

Read More
26 May 2016 in Helm, Kubernetes, Announcement

Helm 2 Reaches Alpha 1

This release marks the first in the Helm 2 line. It is an unstable Alpha-quality release that supports the core functionality for the Helm 2 platform.

Helm 2 has two major components:

  • The Helm client, whose responsibility is to provide tooling for working with charts and uploading them to the server.
  • The Tiller server, whose responsibility is to manage releases into the Kubernetes cluster.

Additionally, Helm can fetch charts from remote repositories. A Helm 2 chart repository is simply an HTTP server capable of serving YAML and TGZ files.

As a developer preview, the Alpha 1 release does not have a binary build of its components. The quickest route to get started is to fetch the source, and then run make bootstrap build. To start using Helm, use helm init.
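Concretely, building from source might look like the following sketch. The clone URL and GOPATH layout are assumptions based on Go project conventions of the time, not taken from the post itself:

```shell
# Fetch the Helm source into the GOPATH (repo location assumed)
git clone https://github.com/kubernetes/helm.git $GOPATH/src/k8s.io/helm
cd $GOPATH/src/k8s.io/helm

# Fetch build dependencies, then build the client and Tiller server
make bootstrap build

# Install Tiller into the Kubernetes cluster and set up the local client
helm init
```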

Stay in touch

To keep up with news on Helm, join the #helm channel on the Kubernetes Slack, or join our weekly developer call every Thursday at 9:30-10:00 Pacific.

You are welcome to join! https://engineyard.zoom.us/j/366425549

Click Play

During the May Deis Community meeting I took a few moments to talk about the general direction and core values for the Helm project. Click play for my presentation:

20 May 2016 in Demo, Fault-Tolerance

Cheapest Fault-Tolerant Cluster For Deis V1 PaaS

This guide targets the Deis v1.13 LTS branch. With over 6 million downloads, Deis V1 PaaS has never been more popular. These instructions will not work with Deis Workflow, the Kubernetes-native PaaS, which is currently in beta.

I am going to demo a cheap, quick, and dirty way to bring up a cluster for Deis V1 PaaS on DigitalOcean. This is the cheapest fault-tolerant configuration that I could manage. That is, a cluster on which some nodes can go down—but the cluster and its platform services remain available!

Before we go on, let's distinguish fault-tolerant from highly available.

Fault-Tolerant or Highly Available?

A fault-tolerant Deis cluster continues operating, even in the event of some partial or complete failure of some of its component nodes. This guide will show you how even a cheap Deis cluster can be fault tolerant—and to what extent.

A highly available (HA) Deis cluster is one that is continuously operational, or never failing. HA is the harder of the two to achieve, and this tutorial won't show you how to do that.

High availability is something that isn't provided out-of-the-box in most cases. If HA is one of your goals, your sysadmin or dev team will be better qualified to say what obstacles you will face in maintaining a standard of high availability for your apps.

Read More