One piece of docker that is AMAZING is the Remote API, which can be used to programmatically interact with docker. I recently had a situation where I wanted to run many containers on a host, with a single container managing the others through the API. The problem I soon discovered is that, at the moment, networking is an all or nothing affair… you can’t turn networking off selectively on a container by container basis. You can disable IPv4 forwarding, but a container can still reach the docker remote API on the host if it can guess the machine’s IP address.
One solution I came up with for this is to use nginx to expose the unix socket for docker over HTTPS and utilize client-side ssl certificates to only allow trusted containers to have access. I liked this setup a lot so I thought I would share how it’s done. Disclaimer: assumes some knowledge of docker!

Generate The SSL Certificates
We’ll use openssl to generate and self-sign the certs. Since this is for an internal service we’ll just sign it ourselves. We also remove the password from the keys so that we aren’t prompted for it each time we start nginx.
Another option would be to leave the passphrase in and provide it as an environment variable when running the docker container, or through some other means, as an extra layer of security.
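The exact commands aren’t reproduced here, but the shape of it is something like the sketch below: a throwaway CA that signs both a server cert for nginx and a client cert for trusted containers. The file names match the ones referenced later; the key sizes, validity periods, and subject CNs are placeholder assumptions, and the keys are generated without a passphrase.

```bash
# CA used to sign both the server and client certificates
openssl genrsa -out ca.key 2048
openssl req -new -x509 -days 3650 -key ca.key -out ca.crt -subj "/CN=docker-ca"

# Server key/cert presented by nginx (no passphrase so nginx starts unattended)
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr -subj "/CN=docker-host"
openssl x509 -req -days 3650 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt

# Client key/cert that trusted containers will present
openssl genrsa -out client.key 2048
openssl req -new -key client.key -out client.csr -subj "/CN=docker-client"
openssl x509 -req -days 3650 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 02 -out client.crt
```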
We’ll move ca.crt, server.key and server.crt to /etc/nginx/certs.

Setup Nginx
The nginx setup for this is pretty straightforward. We just listen for traffic on localhost on port 4242. We require client-side ssl certificate validation and reference the certificates we generated in the previous step. Most important of all, we set up an upstream proxy to the docker unix socket. I simply overwrote what was already in /etc/nginx/sites-enabled/default.
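A minimal sketch of what ends up in that file, following the description above rather than the original config verbatim (cert paths match the previous step):

```nginx
upstream docker {
    server unix:/var/run/docker.sock;
}

server {
    # Port from the write-up; the bind address just needs to be reachable
    # from the containers you trust.
    listen 4242 ssl;

    ssl_certificate        /etc/nginx/certs/server.crt;
    ssl_certificate_key    /etc/nginx/certs/server.key;

    # Only clients presenting a certificate signed by our CA get through
    ssl_client_certificate /etc/nginx/certs/ca.crt;
    ssl_verify_client      on;

    location / {
        proxy_pass http://docker;
    }
}
```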
One important piece to make this work: the user nginx runs as needs to be added to the docker group so that it can read from the socket. This could be www-data, nginx, or something else!

Hack It Up!
With this setup in place and nginx restarted, let’s run a few curl commands to make sure everything is set up correctly. First we’ll make a couple of calls without the proper client cert to double check that we get denied access, and then a proper one.
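The exact calls aren’t preserved here, but the checks look something like this; the docker0 bridge address (172.17.42.1) and the /containers/json endpoint are assumptions, and -k is used because the server cert is self-signed:

```bash
# Plain HTTP against the SSL port: nginx answers with a 400
curl http://172.17.42.1:4242/containers/json

# HTTPS but no client certificate: 400, "No required SSL certificate was sent"
curl -k https://172.17.42.1:4242/containers/json

# HTTPS with the trusted client cert and key: JSON list of running containers
curl -k --cert client.crt --key client.key https://172.17.42.1:4242/containers/json
```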
For the first two we should get some run of the mill 400 http response codes before we get a proper JSON response from the final command! Woot!
But wait there’s more… let’s build a container that can call the service to launch other containers!
For this example we’ll simply build two containers: one that has the client certificate and key and one that doesn’t. The code for these examples is pretty straightforward, and to save space I’ll leave the untrusted container out. You can view the untrusted container on github (although it is nothing exciting).
First, the node.js application that will connect and display information:
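The source isn’t reproduced here; a sketch using node’s built-in https module, with the host address, port, and cert paths as assumptions, would look roughly like this:

```javascript
var https = require('https');
var fs = require('fs');

// Connection details are assumptions: the host is the docker0 bridge address
// of the machine running nginx, and 4242 is the port from the nginx config.
var options = {
  host: '172.17.42.1',
  port: 4242,
  path: '/containers/json',
  method: 'GET',
  key: fs.readFileSync('client.key'),
  cert: fs.readFileSync('client.crt'),
  rejectUnauthorized: false // the server certificate is self-signed
};

var req = https.request(options, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    // Dump the list of running containers to the console
    console.log(JSON.parse(body));
  });
});

req.on('error', function (err) {
  console.error(err);
});

req.end();
```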
And the Dockerfile used to build the container. Notice we add the client.crt and client.key as part of building it!
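Again a sketch rather than the original file; the base image and paths are assumptions, and the important part is that client.crt and client.key get added alongside the app:

```dockerfile
# Base image is an assumption; anything with node available will do
FROM node

# Bake in the client certificate, key, and the app itself
ADD client.crt /app/client.crt
ADD client.key /app/client.key
ADD app.js     /app/app.js

WORKDIR /app
CMD ["node", "app.js"]
```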
That’s about it. Run docker build . and docker run -n <IMAGE ID> and we should see a json dump to the console of the actively running containers. Doing the same in the untrusted directory should present us with a 400 error about not providing a client ssl certificate.
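Roughly, from each example directory (the directory names here just mirror the trusted/untrusted split described above):

```bash
# Trusted container: build it, then run it with the image id that `docker build` prints
docker build .
docker run <IMAGE ID>            # dumps the JSON list of running containers

# Untrusted container (no client cert baked in): same steps, but nginx answers
# with a 400 about the missing client ssl certificate
docker run <UNTRUSTED IMAGE ID>
```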
I’ve shared a project with all this code plus a vagrant file on github for your own perusal. Enjoy!
I’ve been hacking a lot on docker at Zapier lately, and one of the things I found somewhat cumbersome is that it can be difficult to customize published containers without extending them, modifying files within them, or some other mechanism. What I have come to discover is that you can publish containers that end users can customize without modification by utilizing one of the most important concepts from 12 factor application development: Store Configuration in the Environment.
Let’s use a really good example of this: the docker-registry application used to host docker images internally. When docker first came out I whipped up a puppet manifest to configure this bad boy, but then realized that the right way would be to run the published container itself. Unfortunately the Dockerfile as it was didn’t fit my needs.
The gunicorn setup was hardcoded, and to make matters more complicated, the configuration defaulted strictly to the development settings that stored images in /tmp vs. the recommended production settings that store images in S3 (where I wanted them).
The solution was easy: create a couple of bash scripts that utilize environment variables that can be set when calling `docker run`.
First we generate the configuration file:
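A sketch of the idea (the config path, flavor layout, and environment variable names here are illustrative assumptions, not the registry’s exact settings):

```bash
#!/bin/bash
# generate-config.sh -- render the registry config from environment variables
# supplied via `docker run -e`, with a local-disk default when nothing is set.
cat > /docker-registry/config.yml <<EOF
prod:
    storage: s3
    s3_access_key: ${AWS_ACCESS_KEY_ID}
    s3_secret_key: ${AWS_SECRET_ACCESS_KEY}
    s3_bucket: ${S3_BUCKET}
    storage_path: ${STORAGE_PATH:-/registry}
EOF
```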
And wrap the gunicorn run call:
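And a wrapper in the same spirit, with the bind address, worker count, and wsgi module as assumptions:

```bash
#!/bin/bash
# run.sh -- render the config, then hand the process over to gunicorn
/docker-registry/generate-config.sh

exec gunicorn \
    --access-logfile - \
    -b ${GUNICORN_BIND:-0.0.0.0:5000} \
    -w ${GUNICORN_WORKERS:-4} \
    wsgi:application
```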
Finally the Dockerfile is modified to call these scripts with CMD, meaning that they are called when the container starts.
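Something like this at the tail end of the Dockerfile (paths are assumptions):

```dockerfile
# Ship the two scripts and run the wrapper when the container starts
ADD generate-config.sh /docker-registry/generate-config.sh
ADD run.sh             /docker-registry/run.sh
RUN chmod +x /docker-registry/generate-config.sh /docker-registry/run.sh

EXPOSE 5000
CMD ["/docker-registry/run.sh"]
```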
Since we use puppet-docker, the manifest for our dockerregistry server role simply sets these environment variables when it runs the container to configure it to our liking.
I’m really a big fan of this concept. It means people can publish docker containers that can be used as standalone application appliances, with users tweaking them to their liking via environment variables.
EDIT: Although I used puppet in this example to run docker, you don’t need to. You can easily do the following as well.
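For instance, passing the same variables with `docker run -e` (the image name and variable names here just follow the sketches above):

```bash
docker run -d -p 5000:5000 \
    -e AWS_ACCESS_KEY_ID=... \
    -e AWS_SECRET_ACCESS_KEY=... \
    -e S3_BUCKET=my-registry-images \
    -e STORAGE_PATH=/registry \
    registry
```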