Thursday, July 30, 2015

Docker cheat sheet

Intended as a living document, just some basics for now.

Build from the Dockerfile in the current directory and tag the image as "demo":
docker build -t demo .
Build without using cached intermediate layers. This is useful if the previous build failed and a cached intermediate layer is broken:
docker build --no-cache .
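For context, the Dockerfile those commands consume can be very small. Here's a minimal sketch; the package and the /app/main.py command are placeholders for whatever your application actually needs:
FROM ubuntu:latest
# Install whatever your app needs (python3 here is just a placeholder)
RUN apt-get update && apt-get install -y python3
# Copy the build context (everything not excluded by .dockerignore) into the image
COPY . /app
CMD ["python3", "/app/main.py"]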
There's a good Dockerfile reference document and Dockerfile best practices document that you should read when writing a Dockerfile. Make sure you have a .dockerignore file that excludes all unnecessary stuff from the image to reduce bloat and reduce the amount of context that needs to be sent to the docker daemon on each rebuild.
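A starting point for a .dockerignore might look something like this; the entries are just common candidates, so adjust for your project:
.git
*.log
node_modules
build/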

Run bash inside the container to poke around inside it:
docker run -it ubuntu bash
Share a host directory with the container:
docker run -it -v /home/me/somedir:/mounted_inside ubuntu:latest bash
List local available images:
docker images
See what containers are running:
docker ps
Bash helper functions (credit to raiford). "dckr clean" is particularly useful when building an image that results in lots of orphans due to errors in the Dockerfile:
if which docker &> /dev/null; then
  function dckr {
    case $1 in
      clean)
        # Clean up orphaned images.
        docker rmi -f $(docker images -q -f dangling=true)
        ;;
      cleanall)
        # Delete All Docker images with prompt.
        read -r -p "Delete all docker images? [y/N] " response
        if [[ $response =~ ^([yY][eE][sS]|[yY])$ ]]; then
          docker rmi $(docker images -q)
        fi
        ;;
      killall)
        # Kill all running docker containers.
        docker kill $(docker ps -q)
        ;;
      *)
        echo "Commands: clean, killall"
        ;;
    esac
  }
fi
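With those functions sourced from your shell startup file (e.g. .bashrc), cleaning up after a broken build is just:
dckr clean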
To run a previously created container with bash, start it as normal and then use exec (this assumes your original container can actually run successfully):
docker start [container id]
docker exec -it [container id] /bin/bash
To authenticate to a GCR/Artifact Registry repo (see the registry auth documentation for details):
gcloud auth login
# You only need to do this once per host
gcloud auth configure-docker us-central1-docker.pkg.dev
You can then use gcrane to copy an image from one repo to another:
go/bin/gcrane cp ubuntu:jammy us-central1-docker.pkg.dev/project/myrepo/ubuntu:jammy
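Once the credential helper is configured, plain docker commands work against the same repo too, for example (reusing the repo path above):
docker pull us-central1-docker.pkg.dev/project/myrepo/ubuntu:jammy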

Docker vs. Vagrant

Docker and Vagrant are somewhat similar technologies, so what are their relative strengths and weaknesses and when should you choose one over the other? I'm fairly new to both, but here's a compare-and-contrast based on what I've learned so far.

The tl;dr is that Docker is really best for running applications in production and fast testing across linux flavors. Vagrant handles Windows and OS X in addition to Linux, and is good for automating building and packaging of software, and testing where you need a full OS stack running. If you need to build a .dmg or a .exe installer for your software, Vagrant is a good place to do that.

Guest OS
Docker: Linux only (for now, see update below), based on Linux containers.
Vagrant: Mac (on Mac hardware), Windows, Linux, etc. Relies on VirtualBox or VMware.

Host OS
Docker: Mac/Win/Linux. Windows and OS X use boot2docker, which is essentially a small Linux VirtualBox VM that runs the Linux containers.
Vagrant: Mac/Win/Linux.

Configuration
Docker: A Dockerfile describes the steps to build the Docker image, which holds everything needed to run the program. You can start from standard images downloaded from Docker Hub, such as ubuntu (an extremely minimal Ubuntu install), and you can upload your own images.
Vagrant: A Vagrantfile describes what OS you want, any host<->guest file shares, and any provisioning scripts that should be run when the VM ("box") is started. You can start from standard, fairly minimal boxes for many OSes downloaded from Atlas, and upload your own box.

Building
Docker: Building an image creates a new container for each instruction in the Dockerfile and commits it to the image. Each step is cacheable, so if you modify something in the Dockerfile it only needs to re-execute from that point onward.
Vagrant: Many people use Vagrant without building their own box, but you can build your own. It's essentially a case of getting the VM running how Vagrant likes it (packer can help here), making your customizations, then exporting and uploading.

Running
Docker: "docker run" creates a container from the specified image and runs your application inside it. All of the regular system stuff you might expect to be running (rsyslog, ntpd, sshd, cron) isn't. You have Ubuntu installed in there, but it isn't running Ubuntu. The run command lets you specify shared folders with the host.
Vagrant: You run the full-blown OS inside a VM with all the bells and whistles (rsyslog, sshd, etc.) you would expect. "vagrant up" brings up the VM and runs any provisioning scripts you specified in the Vagrantfile; this is typically where you do all the installing necessary to get your build environment ready, and the provision script is the heart of the reproducible build you're creating. Once provisioning is done you'll typically have a Makefile or other scripts that SSH into the environment and do the actual software building.

SSH access
Docker: If you're trying to ssh into your container, you're almost certainly doing it wrong.
Vagrant: SSH is a core part of the technology and used heavily. Vagrant invisibly manages selecting ports for forwarding ssh to each of your VMs.

Startup time
Docker: <1 second
Vagrant: <1 minute

Suitable for
Docker: Production and dev/test
Vagrant: Dev/test only
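To make the "Running" row concrete, here's roughly what starting each environment looks like from a shell. The image name, port mapping, and workflow are illustrative, not tied to any particular project:
# Docker: start a container from an image; only your application process runs inside it
docker run -d --name web -p 8080:80 nginx
# Vagrant: bring up the full VM described by the Vagrantfile, then SSH in to work
vagrant up
vagrant ssh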

Further reading

If you read this far you should also read this, where the authors of both tools explain the differences:
"Vagrant is a tool for managing virtual machines. Docker is a tool for building and deploying applications by packaging them into lightweight containers."

Update Aug 24, 2015

Microsoft has thrown its weight behind Docker and will be implementing shared-kernel containerization for Windows Server, so you will be able to run Windows Server containers on Windows Server, as Mark explains here. You'll still need Linux to run Linux containers natively, and boot2docker will continue to be a way to bring Linux containers to other OSes. Once container-capable Windows servers are available from the Microsoft/Amazon/Google clouds, much of the niche that Vagrant occupies will have been eroded, and Windows Server Docker containers should be a better solution to the problem of being unable to share Windows build VMs due to licensing.

Changing the IP address of the docker0 interface on Ubuntu

Docker picks an address range for the docker0 bridge that it thinks is unused. Sometimes it makes a bad choice. The docs tell you that this can be changed; however, it's not a particularly obvious process. Here's how you do it on Ubuntu.

First, stop the daemon:
sudo service docker stop
Edit /etc/default/docker and add a line like this:
DOCKER_OPTS="--bip=192.168.1.1/24"
Then bring the interface down:
sudo ip link set docker0 down
Then delete the bridge:
sudo brctl delbr docker0
Then restart the service:
sudo service docker start
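Once it's back up, a quick sanity check that the bridge picked up the new range (it should match the --bip value above) is:
ip addr show docker0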
If you don't delete the bridge, Docker will fail to start, but it won't write anything to the logs. If you run the daemon interactively you'll see the problem:
$ sudo docker -d --bip=192.168.1.1/24
INFO[0000] Listening for HTTP on unix (/var/run/docker.sock) 
INFO[0000] [graphdriver] using prior storage driver "aufs" 
WARN[0000] Running modprobe bridge nf_nat failed with message: , error: exit status 1 
FATA[0000] Error starting daemon: Error initializing network controller: Error creating default "bridge" network: bridge IPv4 (10.0.42.1) does not match requested configuration 192.168.1.1