Deploying a Simple and Secure Docker Registry
There comes a time in everybody's life when they realize they have to run their own Docker Registry. Unfortunately, there's not a lot of good information on how to run one. Docker's documentation is solid, but it is verbose and spread across many different pages, which means keeping half a dozen tabs open and hunting for the right information.
It's common to run the Docker Registry with little to no security settings, and fronting it with NGINX or Apache to provide this security. But there is another way.
In this post, I will show how to run the Docker Registry securely by itself with both TLS certificate backed encryption and certificate based endpoint authorization.
If you need to do advanced stuff like authenticate against LDAP, you'll still want to go down the reverse proxy road.
For simplicity, I will assume a single registry running on the local filesystem, and I will avoid OS-specific init systems by focusing on the docker commands themselves. This should work on any system capable of running Docker.
To begin, boot a server that has Docker installed.
For an OS with Docker already installed, I recommend CoreOS. However, you could just as easily boot Ubuntu or CentOS and run curl -sSL get.docker.com | sudo bash if you're into that sort of thing.
SSH into the server and ensure Docker is working by running:
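Any command that talks to the Docker daemon will do for a sanity check, for example:

```shell
# confirm the client can reach the Docker daemon
docker version
docker info
```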
To keep this as simple as possible, I will demonstrate using the paulczar/omgwtfssl image to create certificates. If you would rather create them manually with openssl, see my blog post on Securing Docker with TLS.
Create a place on the filesystem to store the data for the registry as well as certificates and config data:
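A minimal layout, using /opt/registry as the base path for the rest of this post:

```shell
# directories for registry data, config, and TLS certificates
# (run as root, or prefix with sudo)
mkdir -p /opt/registry/data /opt/registry/config /opt/registry/certs
```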
Now we can create the certificates:
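A sketch using omgwtfssl's SSL_SUBJECT, SSL_DNS, and SSL_IP environment variables (the hostname registry.example.com is a placeholder); the container writes ca.pem, cert.pem, and key.pem into the mounted certs directory:

```shell
# generate a CA, server certificate, and key into /opt/registry/certs
docker run --rm \
  -v /opt/registry/certs:/certs \
  -e SSL_SUBJECT=registry.example.com \
  -e SSL_DNS=registry.example.com \
  -e SSL_IP=172.17.8.101 \
  paulczar/omgwtfssl
```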
Be sure to add any IP and DNS addresses that you might access your registry from, including that of any load balancer or floating IP you might have. You can do this by setting SSL_DNS as seen in the example above; feel free to add more comma-separated entries as needed.
The next step is to create the /opt/registry/config/registry.env file, which will contain a list of environment variables to be passed into the container.
For this example I'm using the same CA certificate for clients as I did for the server. For production setups, it should probably be a different CA.
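A sketch of what registry.env might contain, using the registry's documented environment variables and the paths created above:

```shell
# where image data is stored (bind mounted from the host)
REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/opt/registry/data
# TLS certificate and key for the registry endpoint
REGISTRY_HTTP_TLS_CERTIFICATE=/opt/registry/certs/cert.pem
REGISTRY_HTTP_TLS_KEY=/opt/registry/certs/key.pem
# clients must present a certificate signed by this CA
REGISTRY_HTTP_TLS_CLIENTCAS_0=/opt/registry/certs/ca.pem
```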
All that is left to do now is start the registry container: bind mount the /opt/registry directory, pass in the config file, and publish host port 443 to the registry's internal port. Do that like so:
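One way to do this, assuming the registry:2 image and its default internal port of 5000:

```shell
# run the registry, exposing it on host port 443
docker run -d \
  --name registry \
  --restart always \
  -p 443:5000 \
  --env-file /opt/registry/config/registry.env \
  -v /opt/registry:/opt/registry \
  registry:2
```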
We can check that the registry is accessible from the server itself by tagging and pushing the alpine image to it, like so:
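Something like the following; note that even on the server itself, the local Docker daemon must trust the registry's CA and present a client certificate, which can be done via /etc/docker/certs.d. This is a sketch: since the server certificate was signed by the same CA we use for clients, cert.pem and key.pem can double as the client certificate pair.

```shell
# let the local daemon trust the registry and present a client certificate
sudo mkdir -p /etc/docker/certs.d/172.17.8.101
sudo cp /opt/registry/certs/ca.pem   /etc/docker/certs.d/172.17.8.101/ca.crt
sudo cp /opt/registry/certs/cert.pem /etc/docker/certs.d/172.17.8.101/client.cert
sudo cp /opt/registry/certs/key.pem  /etc/docker/certs.d/172.17.8.101/client.key

# tag and push the alpine image to the new registry
docker pull alpine
docker tag alpine 172.17.8.101/alpine
docker push 172.17.8.101/alpine
```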
To check that the security settings worked, try to access the registry from a remote host:
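For example, from the remote host (at this point the push should fail with a TLS certificate error):

```shell
# attempt to push before any trust has been established
docker pull alpine
docker tag alpine 172.17.8.101/alpine
docker push 172.17.8.101/alpine
```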
Anywhere you see 172.17.8.101, replace it with the IP or hostname of your Docker registry.
On the server we can see this failure in the logs:
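Assuming the container is named registry, the logs can be inspected with:

```shell
# follow the registry container's logs on the server
docker logs registry
```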
There are two things causing this failure.
The first is that the remote server does not trust the client, because the client cannot present a certificate signed by the trusted CA specified in REGISTRY_HTTP_TLS_CLIENTCAS_0.
The second reason for failure is that the client doesn't trust the server's CA.
If we didn't have REGISTRY_HTTP_TLS_CLIENTCAS_0 set, we could simply add --insecure-registry 172.17.8.101 to /etc/default/docker. However, since we do have it set, we'll need to take the ca.pem file and save it as /etc/docker/certs.d/172.17.8.101/ca.crt on the remote machine that should trust the registry server. Because the registry also requires clients to present a certificate signed by that CA, a client certificate and key go in the same directory as client.cert and client.key.
I did this like so:
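A sketch using scp (the core user and paths are assumptions; adjust for your setup). The CA certificate lets the client trust the server, and since the client and server share a CA in this example, cert.pem and key.pem can double as the client certificate pair:

```shell
# on the remote host: create the per-registry certificate directory
sudo mkdir -p /etc/docker/certs.d/172.17.8.101

# fetch the certificates from the registry server (user and paths assumed)
scp core@172.17.8.101:/opt/registry/certs/ca.pem   /tmp/ca.pem
scp core@172.17.8.101:/opt/registry/certs/cert.pem /tmp/client.cert
scp core@172.17.8.101:/opt/registry/certs/key.pem  /tmp/client.key

# install them where the Docker daemon looks for them
sudo mv /tmp/ca.pem      /etc/docker/certs.d/172.17.8.101/ca.crt
sudo mv /tmp/client.cert /etc/docker/certs.d/172.17.8.101/client.cert
sudo mv /tmp/client.key  /etc/docker/certs.d/172.17.8.101/client.key
```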
You may need to do it differently based on how your server is set up for access.
Now that we have established trust in both directions, we can try again:
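From the remote host, the tag and push should now go through:

```shell
# retry the push now that trust is established in both directions
docker tag alpine 172.17.8.101/alpine
docker push 172.17.8.101/alpine
```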
Success! Everything is working.
We now have a Docker Registry that is secured with both TLS encryption and certificate-based authorization, requiring each client to present a certificate signed by a trusted CA.
This setup is ideal for providing secure access to a private registry for remote servers.
If you want to do this in a more automated fashion, you can look to configuration management tools such as Chef for examples.
This post originally appeared on Paul Czarkowski's blog.