Charlie Drage

Easy Docker multi-host networking

The release of Docker 1.9 has brought native multi-host networking into the mix. This changes a lot in the container orchestration world. Ever try to install Kubernetes on a bare metal server without already-implemented private networking? It’s a pain in the ass.

1.9 also adds libnetwork, changing how Docker communicates with neighbouring containers. Soon “links” or --link will be deprecated in favour of Docker’s internal networking.

Instead of:

docker run --name mysql -d mysql
docker run --link mysql:db -d nginx

It will eventually be:

docker network create --driver bridge isolated_nw
docker run --net=isolated_nw -d nginx
docker run --net=isolated_nw -d mysql

This makes (imo) inter-container communication much easier.

Oh, and hostnames resolve too. Start a container with --name=awesomeapp and awesomeapp will resolve within the network.
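For example, building on the isolated_nw network above (the container name here is just an illustration):

```shell
# Start a named container on the user-defined network
docker run -d --name=awesomeapp --net=isolated_nw nginx

# Any other container on the same network can reach it by name
docker run --rm --net=isolated_nw busybox ping -c 1 awesomeapp
```

No more wiring up links by hand; name resolution is handled inside the network.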

Anyways! Back to multi-host networking and setting this shit up between two different hosts.

The nitty gritty of it is that Docker uses VXLAN to tunnel your connections between hosts. A key-value store is thrown into the mix to keep everything together. As of right now, Consul, ZooKeeper and etcd are supported. When you specify the --driver overlay option while creating the network, Docker tunnels connections between hosts so containers can communicate on the same overlay network.

Enough with the talk, let’s get to the examples!

There are two ways to deploy an overlay multi-host network, either:

  1. You deploy using docker-machine.

  2. You manually deploy a Consul server and add these options to your Docker daemon config:

--cluster-store=PROVIDER://URL
--cluster-advertise=HOST_IP
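For example (a sketch only, using the Consul address from the setup below; 2376 is the default TLS daemon port):

```shell
# Docker 1.9: start the daemon clustered to an external Consul store
docker daemon \
    --cluster-store=consul://10.10.10.1:8500 \
    --cluster-advertise=eth0:2376
```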

We’ll focus on using docker-machine for deployment; if you’d like to configure it manually, visit the official docs.

Now let’s set it up!

Define the servers you’re using

Let’s assume you have root access to your server and SSH running.

export IP1=10.10.10.1 # our consul server
export IP2=10.10.10.2 # machine 1
export IP3=10.10.10.3 # machine 2

First, let’s get our Consul server up and running.

docker-machine create \
    -d generic \
    --generic-ip-address $IP1 \
    consul

docker $(docker-machine config consul) run -d \
    -p "8500:8500" \
    -h "consul" \
    progrium/consul -server -bootstrap

Our first multi-host server

docker-machine create \
    -d generic \
    --generic-ip-address $IP2 \
    --engine-opt="cluster-store=consul://$(docker-machine ip consul):8500" \
    --engine-opt="cluster-advertise=eth0:2376" \
    machine1

Our second multi-host server

docker-machine create \
    -d generic \
    --generic-ip-address $IP3 \
    --engine-opt="cluster-store=consul://$(docker-machine ip consul):8500" \
    --engine-opt="cluster-advertise=eth0:2376" \
    machine2

Use the docker-machine env variables and set up the overlay network

docker $(docker-machine config machine1) network create -d overlay myapp
docker $(docker-machine config machine2) network ls

Run docker network ls on any host and you’ll see that the overlay network myapp is now available. The network is visible to every host clustered to the same key-value (Consul) store.
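You can also inspect the network from either machine to see its driver and subnet (output varies with your setup):

```shell
docker $(docker-machine config machine1) network inspect myapp
```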

Now use containers and the overlay network!

docker $(docker-machine config machine1) run -d --name=web --net=myapp nginx
docker $(docker-machine config machine1) run -d --name=db --net=myapp mysql
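To sanity-check cross-host connectivity, throw a disposable busybox container onto the overlay and ping a container by name (assuming both containers started cleanly):

```shell
# Run from machine2, pinging the web container started on machine1
docker $(docker-machine config machine2) run --rm --net=myapp busybox ping -c 1 web
```

Since web was started from machine1 and this runs on machine2, the ping crosses hosts over the VXLAN tunnel.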

That’s it :)
