19 Feb 2016 in Kubernetes, Google Cloud Platform, Google Compute Engine

Internal Load Balancing on Google Container Engine

Internal load balancing is important for many infrastructures. But if you've tried to set it up on Google Container Engine, you'll know there's no prepackaged solution. Well, fear not. I've written a tool to help you out. So keep reading.

To quote the Google Compute Engine (GCE) docs:

An internal load balancer distributes network traffic on a private network that is not exposed to the Internet. Internal load balancing is useful not only for intranet applications where all traffic remains on a private network, but also for complex web applications where a frontend sends requests to backend servers via a private network.

Read More
12 Feb 2016 in Series: CoreOS Overview, CoreOS, Overview

CoreOS Overview, Part Three

This post is available in: Chinese

This is the third and final post in a series looking at CoreOS.

In my last post, we looked at the cloud-config file, running etcd in proxy mode, and some common etcd cluster setups.

In this post, we take a closer look at systemd, unit files, Fleet, and fleetctl.

systemd Overview

systemd is an init system used by CoreOS that provides many powerful features for starting, stopping, monitoring, and restarting processes. On CoreOS, systemd is used to manage the lifecycle of your Docker containers and for various system bootstrap tasks.

Learning systemd in depth would take a series of blog posts in itself. Here, we cover systemd only to the extent needed to run systemd units for Docker containers on CoreOS.
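As a quick sketch of what that looks like in practice (the service name, container name, and image below are hypothetical, not from a real deployment), a systemd unit that runs a Docker container on CoreOS has roughly this shape:

    # /etc/systemd/system/hello.service -- illustrative example only
    [Unit]
    Description=Hello World container
    After=docker.service
    Requires=docker.service

    [Service]
    TimeoutStartSec=0
    # clean up any old copy of the container before starting a fresh one
    ExecStartPre=-/usr/bin/docker kill hello
    ExecStartPre=-/usr/bin/docker rm hello
    ExecStartPre=/usr/bin/docker pull busybox
    ExecStart=/usr/bin/docker run --name hello busybox /bin/sh -c "while true; do echo Hello World; sleep 1; done"
    ExecStop=/usr/bin/docker stop hello

    [Install]
    WantedBy=multi-user.target

You would drop a file like this into /etc/systemd/system/ and start it with systemctl start hello.service; later in the post we look at how fleet and fleetctl schedule similar units across a whole cluster.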

For more information about systemd, see the documentation.

Read More
5 Feb 2016 in Series: CoreOS Overview, CoreOS, Overview

CoreOS Overview, Part Two

This post is available in: Chinese

This is the second post in a series looking at CoreOS.

In my last post, we looked at how CoreOS is different from other Linux systems, atomic upgrades and release channels, and the basics of cluster discovery.

In this post, we take a closer look at cloud-config and etcd. We'll also look at a few common cluster architectures.

Cloud-Config

Cloud-config allows you to declaratively customize various OS-level items, such as network configuration, user accounts, and systemd units (which we'll cover in the next post). The cloud-config format came from Ubuntu's cloud-init project and was modified a bit to fit the needs of CoreOS.

At the core of every CoreOS cluster machine is the bootstrap mechanism coreos-cloudinit. The coreos-cloudinit program reads the cloud-config file when it configures the OS after startup or during runtime.
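To make this concrete, here is a rough, illustrative sketch of a cloud-config file (the hostname, user name, and discovery token are placeholders, not values from a real cluster):

    #cloud-config

    hostname: core-01

    coreos:
      etcd2:
        # placeholder discovery URL; generate a real token at https://discovery.etcd.io/new
        discovery: https://discovery.etcd.io/<token>
        advertise-client-urls: http://$private_ipv4:2379
        initial-advertise-peer-urls: http://$private_ipv4:2380
        listen-client-urls: http://0.0.0.0:2379
        listen-peer-urls: http://$private_ipv4:2380
      units:
        - name: etcd2.service
          command: start

    users:
      - name: demo
        groups:
          - sudo
          - docker

The first line, #cloud-config, is mandatory; without it, coreos-cloudinit will not process the file.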

Read More
3 Feb 2016 in etcd, CoreOS, Overview

etcd on CoreOS

In my last post, we learnt about CoreOS installation on AWS EC2 instances. Specifically, we saw how to create a CoreOS cluster with three nodes and connect to the nodes from a terminal window.

One of the main building blocks of CoreOS is etcd, a distributed key-value store. When applications run on a cluster, accessing application config data in a consistent manner is a problem, since using the underlying system’s file system is not feasible.

If you want to build a failsafe cluster, application config needs to be available across the cluster. etcd solves this problem. Think of it as a file system available across all the cluster nodes. See my previous post that discusses etcd in more detail.
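As a rough illustration of the idea (the key and value below are made up), any node can write a piece of config with etcdctl and every other node in the cluster can read it back:

    # on one node: store a config value in etcd
    $ etcdctl set /myapp/database/url db.example.com:5432
    db.example.com:5432

    # on any other node in the cluster: read it back
    $ etcdctl get /myapp/database/url
    db.example.com:5432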

In this post, we’re going to look at how to use etcd in real-life scenarios.

Read More
29 Jan 2016 in Series: CoreOS Overview, CoreOS, Overview

CoreOS Overview, Part One

This post is available in: Chinese

CoreOS is an important part of many container stacks. In this series of posts, we’re going to take a look at CoreOS, why it’s important, and how it works. If you don’t know anything about CoreOS already, don’t worry. We start at the beginning.

The Basics and How CoreOS Is Different From Other Linux Systems

CoreOS is designed for security, consistency, and reliability.

  • Automatic CoreOS updates are done using an active/passive dual-partition scheme to update CoreOS as a single unit, instead of using a package-by-package method. We go over this in detail later.

  • Instead of installing packages via yum or APT, CoreOS uses Linux containers to manage your services at a higher level of abstraction. A single service's code and all dependencies are packaged within a container that can be run on a single CoreOS machine or many CoreOS machines in a cluster.

  • Linux containers provide similar benefits to complete virtual machines, but are focused on applications instead of entire virtualized hosts. Because containers don’t run their own Linux kernel or require a hypervisor, they have almost no performance overhead. The lack of overhead allows you to gain density, which means fewer machines to operate and lower compute spend.

CoreOS runs on almost any platform, including Vagrant, Amazon EC2, QEMU/KVM, VMware, OpenStack, and your own bare-metal hardware.

Read More