28 Apr 2016 in Deis Workflow, Announcement, Kubernetes

Deis Workflow, Beta 3

Time keeps on slippin', slippin', slippin', into the future. But not Deis Workflow Beta releases.

The team just cut Beta 3 of Deis Workflow. We've been happy with the two-week release cadence. Keep your eyes out for Beta 4 due May 11th and our Release Candidate May 25th.

Now, for beta highlights!

Read More
27 Apr 2016 in Kubernetes, Overview, Series: Kubernetes Overview

Kubernetes Overview, Part One

Kubernetes is an open-source system for managing containerized applications across multiple hosts in a cluster.

Kubernetes provides mechanisms for application deployment, scheduling, updating, maintenance, and scaling. A key feature of Kubernetes is that it actively manages the containers to ensure the state of the cluster continually matches the user's intentions.

Kubernetes enables you to respond quickly to customer demand by scaling or rolling out new features. It also allows you to make maximal use of your hardware.

Kubernetes is:

  • Lean: lightweight, simple, accessible
  • Portable: public, private, hybrid, multi-cloud
  • Extensible: modular, pluggable, hookable, composable, toolable
  • Self-healing: auto-placement, auto-restart, auto-replication

Kubernetes builds on a decade and a half of experience at Google running production workloads at scale, combined with best-of-breed ideas and practices from the community.

Kubernetes supports Docker and rkt containers, with more container types to be supported in the future.

In this miniseries, we’ll cover Kubernetes from the ground up.

Let’s start with the basic components.

Read More
21 Apr 2016 in Deis Workflow, Helm, Wercker, Continuous Deployment

Continuous Deployment With Helm Classic, Deis Workflow, and Wercker

Deis Workflow has been in GA for a while now. But what is it like to work with? To find out, I created an example repository on GitHub that demos some of its functionality.

Using this example, we'll build a simple, multi-tier web application using Helm Classic, Deis Workflow, and Wercker for continuous deployment.

When we finish, we'll have:

  • A backend Redis cluster (for storage)
  • A web frontend (installed as a Deis Workflow app) that interacts with Redis via JavaScript
  • Wercker for continuous deployment of your Docker image to Deis Workflow
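Wercker drives its builds from a wercker.yml file at the root of the repository. The box, step parameters, and image name below are illustrative assumptions, not taken from the example repo; this is only a minimal sketch of what a build-and-push pipeline can look like:

```yaml
# wercker.yml — hypothetical pipeline sketch
box: node:4                        # base box for the build (assumed)
build:
  steps:
    - npm-install                  # Wercker's built-in npm install step
    - npm-test                     # run the test suite before deploying
deploy:
  steps:
    - internal/docker-push:        # push the built container image
        username: $DOCKER_USER     # credentials configured in the Wercker UI
        password: $DOCKER_PASS
        repository: myuser/frontend   # hypothetical image name
```

With an image pushed to a registry, the frontend can then be deployed to Deis Workflow as an app.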
Read More
14 Apr 2016 in Series: Schedulers, Schedulers, Kubernetes

Schedulers, Part 2: Kubernetes

In my previous post I introduced the concept of scheduling and took a look at two basic monolithic schedulers: fleet and swarm. In summary: schedulers are responsible for distributing jobs across a cluster of nodes. However, basic monolithic schedulers, by design, have limits on performance and throughput.

In this post we take a look at how Kubernetes improves on the basic monolithic design.

Intro to Kubernetes

Kubernetes is a tool for managing Linux containers across a cluster.

Originally developed by Google, Kubernetes is lightweight, portable, and massively scalable. The design is highly decoupled and splits into two main components: a control plane and worker node services. The control plane takes care of assigning containers to nodes and manages cluster configuration. Worker node services, which run on the individual machines in your cluster, manage the local containers.

Within Kubernetes, we have the concept of a pod. This is a group of colocated containers, like a pod of whales or a pod of peas. Containers in the same pod share the same namespace. Namespaces are used for service discovery and segregation.
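To make the pod concept concrete, here is a minimal sketch of a pod manifest declaring two colocated containers. The names and images are hypothetical; the point is that both containers are scheduled together and share the pod's network namespace, so they can talk to each other over localhost:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod               # hypothetical pod name
  labels:
    app: web
spec:
  containers:
    - name: frontend          # main application container
      image: nginx:1.9        # any container image works here
      ports:
        - containerPort: 80
    - name: log-sidecar       # colocated helper in the same pod
      image: busybox
      command: ["sh", "-c", "tail -f /dev/null"]
```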

Kubernetes requires a dedicated subnet for each node—or an overlay network. This is so that each pod can get a unique IP address in the cluster. This can be achieved by deploying Kubernetes on a cluster with Flannel, OpenVSwitch, Calico, OpenContrail, or Weave. For more details about Kubernetes networking, see the docs.

Let’s take a more detailed look at all this.

Read More
12 Apr 2016 in Series: Fleet on CoreOS, Fleet, CoreOS

Fleet on CoreOS, Part Two

In my previous post, we learned how fleet keeps your app available by automatically reshuffling services across your cluster whenever a node fails. To be more specific, the code running on the failed node is automatically moved to one of the other healthy nodes in the cluster, so from the outside, your app continues to run smoothly.

If you’re interested in understanding more about how fleet fits into CoreOS, we go into that in a previous post about self-sufficient containers.

In this post, I explain the commands you can use to interact with fleet. This will lay the foundation for more advanced uses of fleet in subsequent posts. But before diving into commands, let's revisit unit files. This is important because most fleet commands are about handling unit files.
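As a refresher, a fleet unit file is an ordinary systemd unit, optionally extended with an [X-Fleet] section that gives fleet scheduling hints. The service name and container below are made up for illustration; a minimal sketch looks like this:

```ini
[Unit]
Description=Hello World web service
After=docker.service
Requires=docker.service

[Service]
# Clean up any stale container, then run the service in Docker
ExecStartPre=-/usr/bin/docker rm -f hello
ExecStart=/usr/bin/docker run --name hello busybox /bin/sh -c "while true; do echo Hello; sleep 1; done"
ExecStop=/usr/bin/docker stop hello

[X-Fleet]
# Scheduling hint: never place two instances of this unit on the same node
Conflicts=hello@*.service
```

Unit files like this are what you hand to fleet with commands such as fleetctl submit and fleetctl start.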

Read More