14 Apr 2016 in Series: Schedulers, Schedulers, Kubernetes

Schedulers, Part 2: Kubernetes

In my previous post I introduced the concept of scheduling and took a look at two basic monolithic schedulers: fleet and swarm. In summary: schedulers are responsible for distributing jobs across a cluster of nodes. However, basic monolithic schedulers, by design, have limits on performance and throughput.

In this post we take a look at how Kubernetes improves on the basic monolithic design.

Intro to Kubernetes

Kubernetes is a tool for managing Linux containers across a cluster.

Originally developed by Google, Kubernetes is lightweight, portable, and massively scalable. The design is highly decoupled and splits into two main components: a control plane and worker node services. The control plane takes care of assigning containers to nodes and manages cluster configuration. Worker node services, which run on the individual machines in your cluster, manage the local containers.
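
The split is easiest to see on a running cluster. Here's a rough sketch (the component names come from upstream Kubernetes rather than this post, and the exact layout depends on how the cluster was installed; some installs run these processes inside containers):

$ ps aux | grep kube
# on a control plane node you'd expect kube-apiserver, kube-scheduler, and kube-controller-manager (backed by etcd)
# on every worker node you'd expect kubelet (manages local containers) and kube-proxy (service networking)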

Within Kubernetes, we have the concept of a pod. This is a group of colocated containers, like a pod of whales or a pod of peas. Containers in the same pod share the same namespace. Namespaces are used for service discovery and segregation.
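
For example, because containers in a pod share the pod's network namespace, they can reach one another over localhost. A minimal sketch (the pod name "web", the container name "sidecar", and port 8080 are hypothetical):

$ kubectl exec web -c sidecar -- wget -qO- http://localhost:8080
# "localhost" here is the pod itself, so this hits whichever container in the pod is listening on port 8080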

Kubernetes requires a dedicated subnet for each node—or an overlay network. This is so that each pod can get a unique IP address in the cluster. This can be achieved by deploying Kubernetes on a cluster with Flannel, OpenVSwitch, Calico, OpenContrail, or Weave. For more details about Kubernetes networking, see the docs.
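
You can see the per-pod addresses directly (assuming a working cluster and a configured kubectl):

$ kubectl get pods -o wide
# the IP column shows each pod's cluster-wide address, allocated out of its node's subnet;
# the NODE column shows which machine the pod was scheduled onto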

Let’s take a more detailed look at all this.

Read More
12 Apr 2016 in Series: Fleet on CoreOS, Fleet, CoreOS

Fleet on CoreOS, Part Two

In my previous post, we learnt how fleet automatically reshuffles services across your cluster whenever a node fails, keeping your app available. To be more specific, the code running on the failed node is automatically moved to one of the other healthy nodes in the cluster, and from the outside, your app continues to run smoothly.

If you’re interested in understanding more about how fleet fits into CoreOS, we go into that in a previous post about self-sufficient containers.

In this post, I explain the commands you can use to interact with fleet. This will lay the foundation for more advanced uses of fleet in subsequent posts. But before diving into commands, let's revisit unit files. This is important because most of the fleet commands are about handling unit files.
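
As a preview, here's a minimal sketch of the basic lifecycle ("myapp.service" is a placeholder unit file):

$ fleetctl submit myapp.service   # register the unit file with the cluster
$ fleetctl start myapp.service    # schedule the unit onto a machine and start it
$ fleetctl list-units             # show running units and where they landed
$ fleetctl destroy myapp.service  # stop the unit and remove its file from the cluster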

Read More
7 Apr 2016 in Community Meeting, Deis Workflow, Deis LTS

April 2016 Community Meeting

We wrapped up another month of work and held our April 2016 community meeting. With the beta release for Workflow out the door and LTS support hot on its heels, March was busy!

We always like to see the smiling faces of our community members, but if you couldn't make it in person, we've embedded the recording below.

Read More
6 Apr 2016 in Deis v1 PaaS

Deis v1.13.0 LTS - Long Term Support

Deis v1.13.0 is the final feature release for Deis. It is a Long Term Support (LTS) release, which means we will continue to patch bugs and accept pull requests; however, any new features should be directed at Deis Workflow, the successor to Deis.

Deis v1.13.0 bumps CoreOS to 899.13.0, updates the system containers to Alpine 3.3, removes the scheduler technology previews, bumps the Heroku buildpacks to the latest stable versions, adds a health check URL (/healthz) to the controller, and allows logspout to re-discover the logger when it jumps to another control plane node.
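
For example, once upgraded you can probe the controller's new health check endpoint with a plain HTTP request (replace the hostname with your own controller address):

$ curl -i http://deis.example.com/healthz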

Read More
29 Mar 2016 in Series: Docker Overview, Docker

Docker Overview, Part Two

In part one of this miniseries looking at Docker, we looked at what makes Docker special, the difference between Virtual Machines and containers, and the primary components that make up Docker.

In this post, we'll work directly with some containers. Specifically, we'll show you how to launch a container, how to build an image with a Dockerfile, how to work with registries, and the basics of data volumes.

Launching Containers

Before launching a container, you might pull its image from the registry:

$ docker pull alpine

Launching a container is as simple as running:

$ docker run <image name> <command>

The command here is the command you want to run inside the container.

If the image doesn't exist locally, Docker attempts to fetch it from the public image registry. This happens automatically, but you should expect a time delay.

It's important to note that containers are designed to stop after the command executed within them has exited. For example, if you run /bin/echo hello world as your command, the container starts, prints "hello world" and then stops.
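
Running that example with the alpine image pulled earlier looks like this:

$ docker run alpine /bin/echo "hello world"
hello world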

Read More