Tag: Docker
12 Sep 2016 in Docker, DAB, Kubernetes, Kompose

Push a Docker DAB to a Kubernetes Cluster in Two Commands!

Docker Distributed Application Bundles (DABs) are "an experimental open file format for bundling up all the artifacts required to ship and deploy multi-container apps." DABs contain a complete description of all the services required to run an application, along with details about which images to use, ports to expose, and networks used to link services.

A little over a month ago, I wrote about DABs and outlined how they can be used with multi-tier apps to develop locally with Docker Compose and then bundled for deployment to a Docker Swarm cluster.

In this post, I will expand on my previous post and show how DABs can be used to make an artifact that is deployable on a Kubernetes cluster.

Why? Because by doing this we can take advantage of the awesome developer experience that the Docker tools provide to deploy artifacts to a production-ready Kubernetes cluster without needing a whole bunch of Kubernetes experience. Win-win.

Note: DAB files are still experimental as of Docker 1.12.
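As a reminder, the Swarm-targeted flow from the previous post fits in two commands (the stack name myapp is hypothetical, and since this is experimental the exact subcommands depend on your Docker and Compose versions):

```shell
# Produce myapp.dab from the docker-compose.yml in the current directory
docker-compose bundle

# Deploy the bundle to the cluster as a stack named "myapp"
docker deploy myapp
```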

Read More
1 Aug 2016 in Docker, Mac

Docker for Mac

Developers love Docker. You can see this from the amount of attention Docker has received over the last couple of years. But one of the biggest issues developers faced was the non-availability of Docker on platforms other than Linux.

There were options like Boot2Docker (which we previously covered) that made working with Docker possible on a non-Linux machine, but the experience was sub-optimal.

Now with the general availability of Docker for Mac and Windows, developers no longer need to have a Linux box to experience Docker in its full glory.

Docker for Mac is a native Mac application, built from scratch. With a native user interface and auto-update capability, it is deeply integrated with OS X native virtualization, Hypervisor Framework, networking, and file system. This makes Docker for Mac faster and more reliable than previous ways of getting Docker on a Mac.

In this post, we'll take a look at Docker for Mac and see how to get up and running with the stable release.

For some intros to Docker, check part one and part two of our Docker overview. You can also check this post on how to create and share your first Docker image.

Read More
22 Jul 2016 in Docker, storage, file system, drivers, volumes

Docker Storage: An Introduction

There are lots of places inside Docker (both at the engine level and container level) that use or work with storage.

In this post, I'll take a broad look at a few of them, including: image storage, the copy-on-write mechanism, union file systems, storage drivers, and volumes.

You'll need Docker installed locally on your machine if you want to try out some of the commands in this post. Check out the official docs for how to install Docker on Linux, or our previous post showing how to install Docker on a non-Linux machine.

Let's dive in.
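As a first taste, here are a few commands that touch on each of these areas (a sketch, assuming a working local Docker install; the volume name my-data is made up):

```shell
# Which storage driver is the engine using? (e.g. aufs, devicemapper, overlay)
docker info | grep "Storage Driver"

# Inspect the layers that make up an image
docker history alpine

# Create a named volume and write to it from a container
docker volume create --name my-data
docker run --rm -v my-data:/data alpine sh -c 'echo hello > /data/file'
```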

Read More
15 Jul 2016 in Docker, Tutorial

Deploying a Simple and Secure Docker Registry

There comes a time in everybody's life where they realize they have to run their own Docker Registry. Unfortunately, there's not a lot of good information on how to run one. Docker's documentation is pretty good, but it's verbose and spread across a lot of different pages. This means having half a dozen tabs open and searching for the right information.

It's common to run the Docker Registry with little to no security settings, and fronting it with NGINX or Apache to provide this security. But there is another way.

In this post, I will show how to run the Docker Registry securely by itself with both TLS certificate backed encryption and certificate based endpoint authorization.

If you need to do advanced stuff like authenticate against LDAP, you'll still want to go down the reverse proxy road.

For simplicity, I will assume a single registry running on the local filesystem, and will avoid using OS-specific init systems by focusing just on the docker commands themselves. This should work on any system capable of running Docker.
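To give a flavour of where we're headed, the official registry image can be started with TLS enabled using nothing more than a couple of environment variables (a sketch; the certificate paths are placeholders for files you'd generate yourself):

```shell
docker run -d -p 5000:5000 --name registry \
  --restart=always \
  -v "$(pwd)/certs:/certs" \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  registry:2
```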

Read More
14 Jul 2016 in Docker, Security

Securing Docker With TLS Certificates

By default, Docker has no authentication or authorization for its API, instead relying on the filesystem security of its UNIX socket, /var/run/docker.sock, which by default is only accessible by the root user.

This is fine for the basic use case of only accessing the Docker API on the local machine via the socket as the root user. However, if you wish to use the Docker API over TCP, you'll want to secure it so you don't have to give out root access to anyone that happens to poke at the TCP port.

Docker supports using TLS certificates (both on the server and the client) to provide proof of identity. When set up correctly, it will only allow clients and servers with a certificate signed by a specific CA to talk to each other.

While not providing fine-grained access permissions, it does at least allow us to listen on a TCP socket and restrict access, with the bonus of also providing encryption.
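Concretely, the CA that anchors this scheme can be generated with a couple of openssl commands (a minimal sketch; the file names and subject are placeholders, and the full post also covers signing the server and client certificates):

```shell
# Generate a private key for our certificate authority
openssl genrsa -out ca-key.pem 4096

# Create a self-signed CA certificate, valid for one year
openssl req -new -x509 -days 365 -sha256 \
  -key ca-key.pem -subj "/CN=example-ca" -out ca.pem
```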

In this post, I will detail what is required to secure Docker running on a CoreOS server. I will assume you already have a CoreOS server set up and running. If not, check out this previous Deis blog post covering CoreOS and VirtualBox.

Read More
29 Mar 2016 in Series: Docker Overview, Docker

Docker Overview, Part Two

In part one of this miniseries looking at Docker, we looked at what makes Docker special, the difference between Virtual Machines and containers, and the primary components that make up Docker.

In this post, we'll work directly with some containers. Specifically, we'll show you how to launch a container, how to build an image with a Dockerfile, how to work with registries, and the basics of data volumes.

Launching Containers

Before launching a container, you might first pull its image from a registry:

$ docker pull alpine

Launching a container is as simple as running:

$ docker run <image name> <command>

The command here is the command you want to run inside the container.

If the image doesn't exist locally, Docker attempts to fetch it from the public image registry. This happens automatically, but you should expect a time delay.

It's important to note that containers are designed to stop after the command executed within them has exited. For example, if you run /bin/echo hello world as your command, the container starts, prints "hello world" and then stops.
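That echo example looks like this in practice (assuming the alpine image):

```shell
# The container starts, prints "hello world", and stops
docker run alpine /bin/echo hello world

# List all containers, including stopped ones; ours shows a status of "Exited (0)"
docker ps -a
```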

Read More
18 Mar 2016 in Series: Docker Overview, Docker

Docker Overview, Part One

In my last miniseries, we gave an overview of CoreOS. In this miniseries, I want to take a look at Docker. In this post we’ll look at why it’s special and how it’s built. We’ll start at the beginning, so don’t worry if you’re totally new to this.

Docker is an open source project that makes it easier to package applications inside containers by providing an additional layer of abstraction and automation on top of operating-system-level virtualization on Linux.

Containers themselves are just an abstraction over lower-level kernel constructs, chiefly cgroups and namespaces, which isolate a process and limit the resources available to it and its children.

Docker used LinuX Containers (LXC) in the beginning, but then switched to runC, formerly known as libcontainer. runC runs in the same operating system as its host, which allows it to share a lot of the host operating system's resources, such as RAM, CPU, networking, and so on.

Read More
4 Mar 2016 in Series: Connecting Containers, Docker, Overview

Connecting Docker Containers, Part Two

This post is part two of a miniseries looking at how to connect Docker containers.

In part one, we looked at the bridge network driver that allows us to connect containers that all live on the same Docker host. Specifically, we looked at three basic, older uses of this network driver: port exposure, port binding, and linking.

In this post, we’ll look at a more advanced, and up-to-date use of the bridge network driver.

We’ll also look at using the overlay network driver for connecting Docker containers across multiple hosts.

User-Defined Networks

Docker 1.9.0 was released in early November 2015 and shipped with some exciting new networking features. With these changes, all that is now required for two containers to communicate is to place them in the same network or sub-network.

Let’s demonstrate that.
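A minimal sketch of the idea (the network and container names are made up, and redis stands in for any service):

```shell
# Create a user-defined bridge network
docker network create my-net

# Containers attached to my-net can reach each other by name
docker run -d --name db --net my-net redis
docker run --rm --net my-net alpine ping -c 1 db
```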

Read More
26 Feb 2016 in Series: Connecting Containers, Docker, Overview

Connecting Docker Containers, Part One

Docker containers are self-contained, isolated environments. However, they’re often only useful if they can talk to each other.

There are many ways to connect containers. And we won’t attempt to cover them all. But in this miniseries, we will look at some common ways.

This topic seems elementary, but grasping these techniques and the underlying design concepts is important for working with Docker.

Understanding this topic will:

  • Help developers and ops people explore the broad spectrum of container deployment choices
  • Let developers and ops people embark more confidently on a microservice architecture
  • Empower developers and ops people to better orchestrate more complex distributed applications

Fortunately, the large number of connection options for containers enables a broad range of approaches, giving us the flexibility to choose an architecture that suits the needs of any application.

In this post, we'll look at three of the older, more basic ways of connecting Docker containers. Using this knowledge and experience as a foundation, we'll then move on to two newer, easier, more powerful ways in the next post.

Read More
10 Dec 2015 in Research, Analysis, Docker, Containers

The State of Containers and the Future of the Docker Ecosystem

Containers (and in particular, Docker) are getting ever more popular.

A recent report by O’Reilly Media and Ruxit presents interesting findings on the adoption and use patterns of containers and Docker.

For instance: the deployment of containers in production is likely to increase significantly in the short term. The report also highlights that one of the major barriers preventing production adoption has to do with the need for better operations tools. This sort of information may be crucial in guiding decision making on investment and innovation priorities.

This post considers some key aspects of the report. I first present the approach used for the research, then highlight the main findings. I conclude with a quick comparison to similar research reports published during the course of this year.

Read More
1 Dec 2015 in Series: Schedulers, Docker, Fleet, Swarm, Overview

Schedulers, Part 1: Basic Monolithic

Scheduling is a method to assign workloads to resources that can handle those workloads. In distributed environments, there is a particularly important need for schedulers, especially ones that are scalable, resource aware, and cost effective.

Monolithic schedulers are a single-process entity that makes scheduling decisions and deploys the jobs to be scheduled. These jobs could be a long-running server, a short-lived batch command, a MapReduce query, and so on.

For a monolithic scheduler to schedule a job, it must observe the resources available in the cluster (CPU, memory, and so on), lock the required resources, schedule the job, and update the available resources.

It’s hard for monolithic schedulers to deal with more than one job at a time because there is a single resource manager entity and a single scheduling entity. Avoiding concurrency makes the system easier to design and easier to understand.

Some examples of monolithic schedulers:

  • fleet: a native scheduler to CoreOS (not resource aware)
  • Swarm: a scheduling backend for Docker containers
  • Kubernetes: an advanced type of monolithic scheduler for Pods (a collection of co-located containers that share the same namespaces)

In this post, we'll cover two basic monolithic schedulers.

First we'll look at fleet, a scheduler for systemd units. systemd is an advanced init system that configures and boots Linux userspace. A unit is an individual systemd configuration file that describes a process you’d like to run. We’ll also look at Swarm, a scheduling backend for Docker containers.
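A unit file is short enough to show in full. A hypothetical hello.service that fleet could schedule might look like this (the machine metadata in the [X-Fleet] section is made up for illustration):

```ini
[Unit]
Description=Hello World

[Service]
ExecStart=/usr/bin/docker run --rm alpine /bin/echo hello

[X-Fleet]
MachineMetadata=role=worker
```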

Read More
19 Nov 2015 in Docker, Docker Hub

Six Ready-made NoSQL Database Docker Images

NoSQL is an umbrella term for a whole category of databases, many differing in their feature sets, but united in that they depart in some way from the relational model used by databases such as MySQL and PostgreSQL.

In this post, we cover six ready-made NoSQL database Docker images.

For each image, we address some of the current bugs, and offer potential workarounds.

Rethink DB

RethinkDB is an OSS project built to store JSON documents, designed with horizontal scalability in mind. RethinkDB supports functions, table joins, aggregations, and groupings to bolster its native query language, ReQL.

ReQL differs from other query languages in that, rather than constructing strings for a query engine, developers work with ReQL via chainable methods directly from their programming language of choice. As well as being easy to learn, this also mitigates the possibility of injection attacks.
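If you want to follow along, the ready-made image can be tried in a single command (the container name is arbitrary; the ports follow RethinkDB's defaults: 8080 for the web UI and 28015 for client drivers):

```shell
docker run -d --name rethink \
  -p 8080:8080 -p 28015:28015 \
  rethinkdb
```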

Read More
5 Nov 2015 in Series: Ready Made, Docker, Database

4 Ready-made MySQL Database Docker Images

MySQL is a widely used Relational Database Management System (RDBMS) across organisations large and small. Companies using MySQL for their database needs include Facebook, YouTube, and Booking.com.

In this post, we take a look at four ready-made MySQL related Docker images. For each one, we’ll address some of the current issues that may affect you when using the image, and offer ways to work around them.

Read More
29 Oct 2015 in Series: Ready Made, Docker Hub, Docker

5 Ready-made OSS Docker Images

Traditionally, Open Source Software (OSS) has had a reputation for being hard to install and brittle to maintain.

And rightly so. That ./configure && make && make install command very rarely went as smoothly as you’d like it to. Never mind the hours you’d pore over "the documentation" to figure out how to get a working configuration for your system.

Many medium to large OSS projects recognise this problem and work hard to get into Linux distributions like Ubuntu or Red Hat. (Alternatively, they provide their own packages to be downloaded from a project-specific package repository.) Once that’s done, installation is as easy as an apt-get or rpm run.

But there is an alternative to this that works well for people who prefer to work with containers over manual system administration. Increasingly, OSS projects are providing ready-to-use images that make it easy to get up-and-running. Just take a look at the Docker Hub and see for yourself how many images are available for installation.

This post highlights five useful OSS Docker images, looks at their default configuration, and suggests ways to modify them to make them work for you.

Read More
22 Oct 2015 in Docker, Containers

Going Beyond Hello World Containers is Hard Stuff

In my previous post, I provided the basic concepts behind Linux container technology. I wrote as much for you as I did for me. Containers are new to me. And I figured having the opportunity to blog about the subject would provide the motivation to really learn the stuff.

I intend to learn by doing. First get the concepts down, then get hands-on and write about it as I go. I assumed there must be a lot of Hello World type stuff out there to get me up to speed with the basics. Then, I could take things a bit further and build a microservice container or something.

I mean, it can’t be that hard, right?

Maybe it’s easy for someone who spends a significant amount of their life immersed in operations work. But for me, getting started with this stuff turned out to be hard, to the point of posting my frustrations to Facebook...

But, there is good news: I got it to work! And it’s always nice being able to make lemonade from lemons. So I am going to share the story of how I made my first microservice container with you. Maybe my pain will save you some time.

If you've ever found yourself in a situation like this, fear not: folks like me are here to deal with the problems so you don't have to!

Let’s begin.

Read More
8 Oct 2015 in Tutorial, Docker

Create and Share Your First Docker Image

In the previous post, we looked at Dockerfile instructions, Dockerfile syntax, Docker images, and Docker containers. Let’s put all of this Docker knowledge into action, i.e. take a real-life scenario and use Docker to simplify it.

Imagine you’re developing an awesome new application and need a Redis service (more on Redis later) to handle its data. Now, you can install Redis locally if you’re the only one developing the application. But in a distributed environment, you want everyone to use the same set of similarly configured services so there are no surprises during deployment. A simple solution is to create a Redis Docker image and distribute it across the teams.

We’ll start by creating the Dockerfile. Then we’ll create the Docker image from the Dockerfile and run it as a containerized service. Finally, we’ll learn how to use Docker Hub to share your Docker images.
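The build-run-share cycle boils down to three commands (the image name myteam/redis is hypothetical; you’d use your own Docker Hub username):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t myteam/redis .

# Run it in the background as a service, exposing Redis's default port
docker run -d --name redis-service -p 6379:6379 myteam/redis

# Share it with the team via Docker Hub
docker push myteam/redis
```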

Read More
1 Oct 2015 in Series: Ready Made, Containers, Docker, Round-up

Five More Popular Java-based Docker Images

In my last post, I looked at a few Java-based Docker images that can streamline your container workflow. I highlighted some of the issues you can run into when working with Apache Hadoop, Apache Tomcat, Apache Maven, Stash, and Glassfish in containers. I also provided a few workarounds.

In this post, we’ll review five more Java-based Docker repositories, along with some of the current bugs that you might run into when working with them.

Read More
17 Sep 2015 in Docker, Overview

Dockerfile Instructions and Syntax

In my previous post we learnt about Docker basics and its installation process on a non-Linux system. We also learnt about Docker architecture and the terminology used while dealing with Docker. But what next?

All this knowledge is no good if it can’t solve a real-life problem.

We all know that Docker simplifies application deployment via containerization. But how does that happen, and how can you use Docker to deploy your own application quickly?

To understand this, we need to understand the image creation process and the steps to spawn a container from an image.

So, let’s start with the process of creating Docker images.

Read More
15 Sep 2015 in Docker, Perspective

Why I'm Excited About Docker

Docker is already being used in production by big companies, has received a tremendous amount of publicity, and continues to spur excitement in the tech industry. For a project that is only two years old, this success is unprecedented.

In the first post of this miniseries, I’d like to share the reasons behind my excitement about Docker and explain why I think Docker has made my work as a developer easier. In the second post in this miniseries, I will summarise what I learnt from the interesting announcements and demos at DockerCon, which took place in San Francisco in June.

Read More
2 Sep 2015 in Docker, Containers

Get Started With Docker on Your Non-Linux PC

Though they have been around for quite some time now, containers have recently become one of the most sought-after technologies. Docker made containerization cool. Seemingly everyone is running—or wants to run—their software in a Docker container. And rightly so. After all, containers are lightweight, easy to deploy, and scalable.

But what if you’re late to the party? If you’re just getting started with Docker, you probably have a thousand questions to ask. Perhaps one of them is: "do I need a Linux box to run Docker? If not, how will that work?"

In this post we'll take a look at that question and get you started on the basics. We’ll learn how to install Docker, create containers from images, and run containers on a non-Linux PC.

Read More
2 Jul 2015 in Docker

Sailing Past Dependency Hell With Docker

Have you ever been excited to tinker with a software project, only to have dependency hell ruin all of the fun? As a software consultant, I face this situation all the time.

Luckily, technologies like Docker take the pain out of spinning up additional components of your app architecture, like databases or job queues, or installing runtimes like Ruby or Python. By isolating processes from one another with very little overhead, you can sail past dependency hell.

Over the last year, I’ve worked on a number of different projects, encountered lots of different requirements, and explored all sorts of ways of working through dependency conflicts. This is the story of my search for a solution, the pros and cons of each path I explored, and how I ended up using Docker.

Life as a Consultant, Day 1

When I started working as a consultant, I looked down at my beautiful new laptop and made a promise to treat it right. I wouldn't install a bunch of system extensions. I wouldn't tweak too many esoteric settings. I wouldn't bog it down with databases and other services.

Then came my first client project. Along with it came Python 2.7.6, MySQL 5.1.73, Ruby 1.9.3-p545, Node 0.10.29, Elasticsearch something-dot-something. And as I looked through the 500-line README, my heart sank.

My first reaction was to try to sidestep the dependencies. Occasionally, you can get away without a dependency. Usually, you're just delaying it for a while. Even for an optional dependency, skipping it meant a delicate dance of stubbed-out methods and dark hallways that must not be entered. Before long, this delicate dance turned into a clumsy stumble.

Faced with the reality that ignoring these dependencies was not a long term solution, I started to wonder why this project had so many dependencies in the first place. Certainly it had too many dependencies!

I may have been right about this one, but it's not a solution. Over time, I may be able to steer the project away from some of its dependencies, but that wouldn't do me any good at the moment.

I was ready to bite the bullet, beg forgiveness from my new laptop, and install all of this stuff. But I wasn’t quite ready to give up without a fight. If I took really detailed notes, I thought, maybe I could just uninstall all of this stuff later. Welcome to the world of Homebrew post-install notes.txt and List of rbenv files.txt. Now, in addition to a 500-line README file, I had a 600-line UNINSTALL file.

I ended up with a typical developer setup: the base OS, plus all of the services needed to run my application. Or, in a picture, this:

[Image: MySQL and other services stacked on top of OS X]

Read More