Tag: Overview
3 May 2016 in Kubernetes, Overview, Series: Kubernetes Overview

Kubernetes Overview, Part Two

In my previous post we looked at kubectl, clusters, the control plane, namespaces, pods, services, replication controllers, and labels.

In this post we take a look at volumes, secrets, rolling updates, and Helm.


A volume is a directory, possibly with some data in it, accessible to a container as part of its filesystem. Volumes are used, for example, to store stateful app data.

Kubernetes supports several types of volume (a quick example follows the list):

  • emptyDir
  • hostPath
  • gcePersistentDisk
  • awsElasticBlockStore
  • nfs
  • iscsi
  • glusterfs
  • rbd
  • gitRepo
  • secret
  • persistentVolumeClaim
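
To make this concrete, here is a minimal sketch of a pod that mounts an emptyDir volume; the pod, container, and volume names are hypothetical:

    cat <<'EOF' | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: volume-demo
    spec:
      containers:
        - name: app
          image: busybox
          command: ["sh", "-c", "echo hello > /scratch/greeting && sleep 3600"]
          volumeMounts:
            - name: scratch
              mountPath: /scratch
      volumes:
        - name: scratch
          emptyDir: {}
    EOF

An emptyDir volume starts out empty and lives exactly as long as the pod does, which makes it handy for scratch space shared between containers in the same pod.
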
Read More
27 Apr 2016 in Kubernetes, Overview, Series: Kubernetes Overview

Kubernetes Overview, Part One

Kubernetes is an open-source system for managing containerized applications across multiple hosts in a cluster.

Kubernetes provides mechanisms for application deployment, scheduling, updating, maintenance, and scaling. A key feature of Kubernetes is that it actively manages the containers to ensure the state of the cluster continually matches the user's intentions.

Kubernetes enables you to respond quickly to customer demand by scaling your applications or rolling out new features. It also allows you to make maximal use of your hardware.

Kubernetes is:

  • Lean: lightweight, simple, accessible
  • Portable: public, private, hybrid, multi cloud
  • Extensible: modular, pluggable, hookable, composable, toolable
  • Self-healing: auto-placement, auto-restart, auto-replication

Kubernetes builds on a decade and a half of experience at Google running production workloads at scale, combined with best-of-breed ideas and practices from the community.

Kubernetes supports Docker and rkt containers, with more container types to be supported in the future.

In this miniseries, we’ll cover Kubernetes from the ground up.

Let’s start with the basic components.

Read More
4 Mar 2016 in Series: Connecting Containers, Docker, Overview

Connecting Docker Containers, Part Two

This post is part two of a miniseries looking at how to connect Docker containers.

In part one, we looked at the bridge network driver that allows us to connect containers that all live on the same Docker host. Specifically, we looked at three basic, older uses of this network driver: port exposure, port binding, and linking.

In this post, we’ll look at a more advanced, and up-to-date use of the bridge network driver.

We’ll also look at using the overlay network driver for connecting Docker containers across multiple hosts.

User-Defined Networks

Docker 1.9.0 was released in early November 2015 and shipped with some exciting new networking features. With these changes, all that is required for two containers to communicate is to place them in the same network or sub-network.

Let’s demonstrate that.
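
As a preview, here is a minimal sketch using hypothetical names (backend, db): we create a user-defined bridge network and let two containers reach each other by name.

    # Create a user-defined bridge network
    docker network create backend
    # Start two containers on that network; they can reach each other by name
    docker run -d --name db --net=backend redis
    docker run --rm --net=backend busybox ping -c 1 db
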

Read More
26 Feb 2016 in Series: Connecting Containers, Docker, Overview

Connecting Docker Containers, Part One

Docker containers are self-contained, isolated environments. However, they’re often only useful if they can talk to each other.

There are many ways to connect containers, and we won’t attempt to cover them all. But in this miniseries, we will look at some common ways.

This topic seems elementary, but grasping these techniques and the underlying design concepts is important for working with Docker.

Understanding this topic will:

  • Help developers and ops people explore the broad spectrum of container deployment choices
  • Let developers and ops people embark more confidently on a microservice architecture
  • Empower developers and ops people to better orchestrate more complex distributed applications

Fortunately, the large number of connection options for containers enables a broad range of approaches, giving us the flexibility to choose an architecture that suits the needs of any application.

In this post, we'll look at three of the older, more basic ways of connecting Docker containers. Using this knowledge and experience as a foundation, we'll then move on to two newer, easier, more powerful ways in the next post.
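
As a taste of what’s ahead, here is a minimal sketch of two of those techniques, port binding and linking, using hypothetical container names:

    # Port binding: map container port 80 to host port 8080
    docker run -d --name web -p 8080:80 nginx
    # Linking: the alias "web" resolves inside the linked container
    docker run --rm --link web:web busybox wget -qO- http://web
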

Read More
12 Feb 2016 in Series: CoreOS Overview, CoreOS, Overview

CoreOS Overview, Part Three

This post is available in: Chinese

This is the third and final post in a series looking at CoreOS.

In my last post, we looked at the cloud-config file, running etcd in proxy mode, and some common etcd cluster setups.

In this post, we take a closer look at systemd, unit files, Fleet, and fleetctl.

systemd Overview

systemd is an init system used by CoreOS that provides many powerful features for starting, stopping, monitoring, and restarting processes. On CoreOS, systemd is used to manage the lifecycle of your Docker containers and also for various system bootstrap tasks.

Learning systemd in depth would take a blog series of its own. Here we cover systemd only to the extent needed to run systemd units for Docker containers on CoreOS.

For more information about systemd, see the documentation.
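
To give a feel for it, here is a minimal sketch of a unit file that runs a Docker container; the unit and container names are hypothetical, and the kill/rm steps simply clear out any stale container before a restart:

    cat > myapp.service <<'EOF'
    [Unit]
    Description=MyApp
    After=docker.service
    Requires=docker.service

    [Service]
    TimeoutStartSec=0
    ExecStartPre=-/usr/bin/docker kill myapp
    ExecStartPre=-/usr/bin/docker rm myapp
    ExecStart=/usr/bin/docker run --name myapp busybox /bin/sh -c "while true; do echo Hello; sleep 1; done"
    ExecStop=/usr/bin/docker stop myapp

    [Install]
    WantedBy=multi-user.target
    EOF
    sudo cp myapp.service /etc/systemd/system/
    sudo systemctl start myapp.service
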

Read More
5 Feb 2016 in Series: CoreOS Overview, CoreOS, Overview

CoreOS Overview, Part Two

This post is available in: Chinese

This is the second post in a series looking at CoreOS.

In my last post, we looked at how CoreOS is different from other Linux systems, atomic upgrades and release channels, and the basics of cluster discovery.

In this post, we take a closer look at cloud-config and etcd. We'll also look at a few common cluster architectures.


Cloud-config allows you to declaratively customize various OS-level items, such as network configuration, user accounts, and systemd units (which we'll cover in the next post). The format originated with Ubuntu's cloud-init and was modified a bit to fit the needs of CoreOS.

At the core of every CoreOS cluster machine is the bootstrap mechanism, coreos-cloudinit. The coreos-cloudinit program processes the cloud-config file when it configures the OS after startup or during runtime.
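
For a flavor of the format, here is a minimal sketch of a cloud-config file; the hostname and user are hypothetical:

    cat > user-data <<'EOF'
    #cloud-config
    hostname: core-01
    coreos:
      units:
        - name: etcd2.service
          command: start
    users:
      - name: demo
        groups:
          - sudo
        ssh-authorized-keys:
          - ssh-rsa AAAA... # replace with a real public key
    EOF
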

Read More
3 Feb 2016 in etcd, CoreOS, Overview

etcd on CoreOS

In my last post, we learnt about CoreOS installation on AWS EC2 instances. Specifically, we saw how to create a CoreOS cluster with three nodes and connect to the nodes from a terminal window.

One of the main building blocks of CoreOS is etcd, a distributed key-value store. When applications run on a cluster, accessing application config data in a consistent manner is a problem, since using the underlying system’s file system is not feasible.

If you want to build a failsafe cluster, application config needs to be available across the cluster. etcd solves this problem. Think of it as a file system available across all the cluster nodes. See my previous post that discusses etcd in more detail.

In this post, we’re going to look at how to use etcd in real-life scenarios.
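
As a quick preview, here is a minimal sketch of setting and reading a key with etcdctl; the key and value are hypothetical:

    etcdctl set /myapp/database/url "postgres://db.example.com:5432"
    etcdctl get /myapp/database/url
    etcdctl ls /myapp --recursive
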

Read More
29 Jan 2016 in Series: CoreOS Overview, CoreOS, Overview

CoreOS Overview, Part One

This post is available in: Chinese

CoreOS is an important part of many container stacks. In this series of posts, we’re going to take a look at CoreOS, why it’s important, and how it works. If you don’t know anything about CoreOS already, don’t worry. We start at the beginning.

The Basics and How CoreOS Is Different From Other Linux Systems

CoreOS is designed for security, consistency, and reliability.

  • Automatic CoreOS updates are done using an active/passive dual-partition scheme to update CoreOS as a single unit, instead of using a package-by-package method. We go over this in detail later.

  • Instead of installing packages via yum or APT, CoreOS uses Linux containers to manage your services at a higher level of abstraction. A single service's code and all dependencies are packaged within a container that can be run on a single CoreOS machine or many CoreOS machines in a cluster.

  • Linux containers provide similar benefits to complete virtual machines, but are focused on applications instead of entire virtualized hosts. Because containers don’t run their own Linux kernel or require a hypervisor, they have almost no performance overhead. The lack of overhead allows you to gain density, which means fewer machines to operate and a lower compute spend.

CoreOS runs on almost any platform, including Vagrant, Amazon EC2, QEMU/KVM, VMware, OpenStack, and your own bare-metal hardware.

Read More
1 Dec 2015 in Series: Schedulers, Docker, Fleet, Swarm, Overview

Schedulers, Part 1: Basic Monolithic

Scheduling is a method of assigning workloads to resources that can handle those workloads. In distributed environments, there is a particularly important need for schedulers, especially ones that are scalable, resource aware, and cost effective.

A monolithic scheduler is a single process entity that makes scheduling decisions and deploys the jobs to be scheduled. These jobs could be a long-running server, a short-lived batch command, a MapReduce query, and so on.

For a monolithic scheduler to schedule a job, it must observe the resources available in the cluster (e.g. CPU, memory, and so on), lock the resources, schedule the job, and update the available resources.

It’s hard for monolithic schedulers to deal with more than one job at a time because there is a single resource manager entity and a single scheduling entity. Avoiding concurrency makes the system easier to design and easier to understand.

Some examples of monolithic schedulers:

  • fleet: a native scheduler to CoreOS (not resource aware)
  • Swarm: a scheduling backend for Docker containers
  • Kubernetes: an advanced type of monolithic scheduler for Pods (collections of co-located containers that share the same namespaces)

In this post, we'll cover two basic monolithic schedulers.

First we'll look at fleet, a scheduler for systemd units. systemd is an advanced init system that configures and boots Linux userspace. A unit is an individual systemd configuration file that describes a process you’d like to run. We’ll also look at Swarm, a scheduling backend for Docker containers.
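
To give a feel for fleet, here is a minimal sketch of scheduling a unit onto a cluster with fleetctl, assuming a unit file named myapp.service already exists:

    fleetctl start myapp.service     # submit and schedule the unit onto a machine
    fleetctl list-units              # see which machine is running what
    fleetctl journal myapp.service   # read the unit's logs from wherever it landed
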

Read More
24 Sep 2015 in Overview, Perspective

A Developer’s Journey into Linux Containers

I’ll let you in on a secret: all that DevOps cloud stuff that goes into getting my applications into the world is still a bit of a mystery to me. But, over time I’ve come to realize that understanding the ins and outs of large scale machine provisioning and application deployment is important knowledge for a developer to have. It’s akin to being a professional musician. Of course you need to know how to play your instrument. But, if you don’t understand how a recording studio works or how you fit into a symphony orchestra, you’re going to have a hard time working in such environments.

In the world of software development, getting your code out into our very big world is just as important as making it. DevOps counts, and it counts a lot.

So, in the spirit of bridging the gap between Dev and Ops I am going to present container technology to you from the ground up. Why containers? Because there is strong evidence to suggest that containers are the next step in machine abstraction: making a computer a place and no longer a thing. Understanding containers is a journey that we’ll take together.

In this article I am going to cover the concepts behind containerization. I am going to cover how a container differs from a virtual machine. I am going to go into the logic behind container construction as well as how containers fit into application architecture. I’ll discuss how lightweight versions of the Linux operating system fit into the container ecosystem. I’ll discuss using images to create reusable containers. Lastly, I’ll cover how clusters of containers allow your applications to scale quickly.

In later articles I’ll show you the step-by-step process for containerizing a sample application and how to create a host cluster for your application’s containers. Also, I’ll show you how to use Deis to deploy the sample application to a VM on your local system as well as to a variety of cloud providers.

So let’s get started.

Read More
17 Sep 2015 in Docker, Overview

Dockerfile Instructions and Syntax

In my previous post we learnt about Docker basics and its installation process on a non-Linux system. We also learnt about Docker architecture and the terminology used while dealing with Docker. But what next?

All this knowledge is no good if it can’t solve a real-life problem.

We all know that Docker simplifies application deployment via containerization. But how does that happen, and how can you use Docker to deploy your own application quickly?

To understand this, we need to understand the image creation process and the steps to spawn a container from an image.

So, let’s start with the process of creating Docker images.
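
To set the stage, here is a minimal sketch of a Dockerfile and the commands to build an image and spawn a container from it; the image name is hypothetical:

    cat > Dockerfile <<'EOF'
    FROM ubuntu:14.04
    RUN apt-get update && apt-get install -y nginx
    EXPOSE 80
    CMD ["nginx", "-g", "daemon off;"]
    EOF
    docker build -t my-nginx .
    docker run -d -p 8080:80 my-nginx
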

Read More
10 Jun 2015 in Containers, Overview

Isolation with Linux Containers

Note: This is part two of a two part series that starts here.

In part one of this series, we built a simple echo server and took steps to isolate its privileges, filesystem, allocated resources, and process space. These steps isolated the echo server process from all the other processes on the host.

In this post, we’ll look at how Linux Containers provide an easier, more powerful alternative. Instead of isolating at the process level, we’ll isolate at the OS level.

Introducing Linux Containers

Docker is the hot new thing, but Linux containers (LXC) have been around since before Docker launched in March of 2013.

The Docker FAQ cites various differences between LXC and Docker. While Docker now utilizes libcontainer, it originally wrapped the LXC user tools. In summary, LXC provided a wrapper around Linux kernel technologies, while Docker essentially provided a wrapper around LXC.

This post looks at the following technologies in the context of LXC (with a quick taste of the user tools after the list):

  • Kernel namespaces
  • Chroots (using pivot_root)
  • uidmap and gidmap
  • cgroups
  • Virtual Ethernet
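
As that quick taste, here is a minimal sketch that creates and starts a container with the LXC user tools; the container name is hypothetical:

    lxc-create -n echo-server -t download -- -d ubuntu -r trusty -a amd64
    lxc-start -n echo-server -d
    lxc-attach -n echo-server    # get a shell inside the container
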
Read More
12 May 2015 in Containers, Overview

Linux Isolation Basics

Note: This is part one of a two part series. Read part two.

In the complex world of modern app deployment solutions, containers have been gaining traction as a popular distribution method. But what are they, and why are people so excited about them? This two part series will look into some of the benefits they offer.

First, we’ll look at how isolation is generally used to solve a whole class of problems. Next, we’ll look at how containers, specifically, make isolation more manageable. An intermediate familiarity with UNIX-like systems is assumed throughout.
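
As a tiny preview of the kind of isolation we’ll build up to, here is a minimal sketch that gives a shell its own PID namespace, using unshare from util-linux:

    sudo unshare --pid --fork --mount-proc /bin/sh
    # inside the new namespace, ps shows only this shell and its children
    ps aux
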

Read More