Docker Notes

Russell Bateman
September 2018
last update:

Quick Docker notes in preparation for beginning to make use of this technology.

The point of container technologies is to wrap up a software implementation in a complete filesystem and operating environment that contains everything it needs to run: the application itself, runtime, system tools and libraries—anything and everything you install on a server. This guarantees that the implementation will always run the same regardless of the environment it is running in. Um, well, in theory.

Docker is a tool designed to benefit both developers and system administrators, which makes it an essential arrow in the DevOps quiver. For developers especially it means not having to focus on writing code that must take into account the greater environment (operating system, tools and frameworks present, etc.). Another benefit is the many prebuilt, open-source images already published that do something a developer wants to do.

Elasticsearch, Logstash, Kibana (ELK) Docker image documentation comes to mind as an excellent example of this.

Really good video introducing Docker: Learn Docker in 12 Minutes:

 0:00 - What is Docker
 0:59 - Virtual Machines versus Docker
 1:57 - Introduction to Dockerfiles, images and containers
 3:57 - Docker Hub
 4:52 - Writing a Dockerfile
 6:36 - Building an image
 7:16 - Running a container
 8:25 - Mounting volumes
10:13 - One process per container
11:10 - Recap

Docker links

Docker versions
Version Release date
18.09 11/08/2018
18.06.0-ce 07/18/2018
18.05.0-ce 05/09/2018
18.04.0-ce 04/10/2018
18.02.0-ce 02/07/2018
18.01.0-ce 01/10/2018
17.11.0-ce 11/20/2017
17.10.0-ce 10/17/2017
17.07.0-ce 08/29/2017
17.05.0-ce 05/04/2017
17.04.0-ce 04/03/2017
17.03.0-ce 03/01/2017
1.13.0 01/18/2017
1.12.0 07/28/2016
1.11.0 04/12/2016
1.10.0 02/04/2016
1.9.0 10/29/2015
1.8.0 08/11/2015
1.7.0 06/18/2015
1.6.0 04/16/2015

A glossary it occurs to me to add to this:

Term Definition
bind mount the mounting of a small piece of the host filesystem into a container; more limited than (mounting) a volume. This is the recommended method of sharing configuration files between the host machine and the container. (Configuration-file content commonly varies between instantiations of running containers.) For example, /etc/resolv.conf could be a useful bind mount in a container.

If your current working directory were ~/Downloads and you had filebeat.yml there, assuming the running executable in the Docker container was looking for this file, the following command line would provide (expose) it (and only this one file of the host's filesystem):

$ docker run --mount type=bind,source="$(pwd)"/filebeat.yml,\
    target=/usr/share/filebeat/filebeat.yml ...
volume mount the mounting of an entire volume from the host filesystem into a container. This is the recommended method of sharing data between different containers when that is necessary. Mounted volumes are usually filesystems under path /var/lib/docker/volumes/. In the example below, (read-only) HTML content is set up for use by nginx running in a container:
$ docker run --volume nginx:/usr/share/nginx/html:ro nginx:latest

When you run docker inspect, you see (in the Mounts section):

"Mounts": [
    "Type": "volume",
    "Name": "nginx",
    "Source": "/var/lib/docker/volumes/nginx/_data",
    "Destination": "/usr/share/nginx/html",
    "Driver": "local",
    "Mode": "",
    "RW": false,
    "Propagation": ""
Docker Compose a tool for defining and running multi-container Docker applications. You use a YAML file, docker-compose.yml, to configure your application's services. With a single command, docker-compose up, you create and start all the services from your configuration.
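As a sketch of what such a file looks like, here's a minimal, hypothetical single-service docker-compose.yml (the service name, image and port mapping are illustrative, not from any real project):

```yaml
# Hypothetical example; service name, image and ports are made up.
web:
  image: nginx:latest
  ports:
    - "8080:80"                        # host port 8080 maps to container port 80
  volumes:
    - ./html:/usr/share/nginx/html:ro  # read-only bind mount of local content
```

With this file in the current directory, docker-compose up creates and starts the service.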
Docker Swarm a clustering and scheduling tool for Docker containers. With Swarm, administrators and developers can establish and manage an entire cluster of Docker nodes as a single, virtual system.

Install Docker on Ubuntu 18.04.1 Server

Docker isn't available from the usual repositories.

# apt-get update
# apt-get install apt-transport-https ca-certificates curl software-properties-common
# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
# apt-get update
# apt-cache policy docker-ce
# apt-get install docker-ce

This enabled the Docker dæmon to start on boot, but you can check its status:

# systemctl status docker

Add your username to Docker's group. This avoids having to resort to root in order to run any docker command:

# usermod -aG docker username
# su - username               # to set the group
$ id -nG username

Adopt user into group docker

When you try the Docker hello-world example, it fails:

russ@host:~$ docker run hello-world
docker: Got permission denied while trying to connect to the Docker daemon socket at
unix /var/run/docker.sock: connect: permission denied.

This is because:

russ@host:~$ ll /var/run/docker.sock
srw-rw---- 1 root docker 0 Oct 10 08:48 /var/run/docker.sock

You can use sudo to run the example or you can do what you should, which is to adopt yourself into group docker:

russ@host:~$ sudo usermod -aG docker username

Then log out and back in (to instantiate the group membership change).

Docker and certificates

Depending on your environment, you may need special certificates to work with Docker repositories or artifactories. This would happen in the case where your employer had private ones (instead of, or alongside, Docker Hub).

To add certificates for use by Docker on your host, follow How to juggle certificates in Ubuntu (and Mint). There are comments there on installing certificates on CentOS (Red Hat). A most important point to take away is that it will not appear to you that you've accomplished anything until you bounce Docker.

The Dockerfile

Here's a Docker (configuration) file example and some comments on its content.

FROM debian:jessie
MAINTAINER Daniel Alan Miller [email protected]
RUN apt-key adv --keyserver --recv-keys 1614552E5765227AEC39EFCFA7E00EF33A8F2399
RUN echo "deb jessie main" > /etc/apt/sources.list.d/rethinkdb.list
RUN apt-get update \
    && apt-get install -y rethinkdb=$RETHINKDB_PACKAGE_VERSION \
    && rm -rf /var/lib/apt/lists/*
VOLUME ["/data"]
CMD ["rethinkdb", "--bind", "all"]
EXPOSE 28015 29015 8080
  1. FROM —specifies the base image this image builds upon (pulls in its dependencies).

  2. MAINTAINER —comment on whose Docker configuration this is.

  3. RUN —defines a command to run inside an intermediate container while the image is being built (not when the finished container runs). If your image comes with Python, this could be in Python.

  4. ENV —sets an environment variable and exports it in the container.

  5. VOLUME —defines a path in the container that Docker exposes to the host system (the host running Docker) and mapped using the -v argument when launching that container.

  6. WORKDIR —changes the current working directory of the container in case more commands are to be run (in that location).

  7. CMD —runs a command in the format:
    CMD [ "executable", "argument", "more arguments" ]
    Ideally, there should only be one instance of CMD. Otherwise, see ENTRYPOINT:
    ENTRYPOINT [ "/swarm" ]
    CMD        [ "--help" ]
  8. EXPOSE —expose the listed ports for mapping to the host via the -p option when launching a container.
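Pulling these directives together, a minimal hypothetical Dockerfile (the package, paths and port are made up for illustration) might read:

```dockerfile
FROM debian:jessie
MAINTAINER Your Name <you@example.com>
# Hypothetical install path, exported into the container's environment
ENV APP_HOME /opt/myapp
# RUN executes at build time; cleaning the apt cache keeps the layer small
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
# Subsequent build steps (and the running container) start in this directory
WORKDIR $APP_HOME
# Expose a data directory for mapping with -v at run time
VOLUME ["/data"]
# Map this port to the host with -p at run time
EXPOSE 8080
# The command the container runs by default
CMD ["curl", "--version"]
```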

Docker best practices
  1. Automate everything. Don't assume that the host server will be around for a long time. Build in the proper automation for recovering a Docker instance, including data on the local filesystem. In a proper Docker environment, hosting hardware/software must be replaced with new instances upon failure and Docker-hosted services must not skip a beat.
  2. Orchestrate containers. Application code must not assume that the executing container will live forever. It must assume that the application will always exist, because that's the purpose anyway, but it must not rely on any particular (especially hardware) instance. For example, a database must mount an external—never a local—volume for data files.

Docker limits

Containers automatically have access to the entire range of RAM and CPU processing power of their host. If you are running a single container, this may not be an issue. When you start hosting multiple containers, each one will then start stepping on the others.
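To rein a container in, docker run accepts resource flags. A quick sketch (the limits chosen here are arbitrary):

```shell
# Cap the container at 512 MB of RAM and half a CPU's worth of time.
docker run -m 512m --cpus 0.5 nginx:latest
```

More options are listed under "Memory limits" and "CPU limits" below in the official documentation.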

Docker and Snap versus package-manager installation

Do not use Snap. It's cool and snappy, but it makes things more difficult when dealing with everyday write-ups, tutorials, forum questions, etc.

The sample, extended installation done here is because I'm interested in the containerization of ELK; it demonstrates the steps in the interest of a practical application.

  1. If you did use Snap, for example, when offered the option to install Docker as part of the Ubuntu 18.04.1 Bionic Beaver Server installation, do this:
    # snap remove docker
    Install docker
  3. Now do the proper, Debian package installation:
    # apt-get update
    # apt-get install apt-transport-https ca-certificates software-properties-common
    # curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
  4. It may be important to you to verify that you have the key with the fingerprint thus (and you'll see):
    # apt-key fingerprint 0EBFCD88
    pub   rsa4096 2017-02-22 [SCEA]
          9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88*
    uid           [ unknown] Docker Release (CE deb) <docker@docker.com>
    sub   rsa4096 2017-02-22 [S]
    * As I understand it, this is what you should see when you test the advanced package tools key.
  5. Now set up the stable repository using the key:
    # add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
    # apt-get update
    # apt-cache policy docker-ce
    # apt-get install docker-ce
    # systemctl status docker.service
    Docker is now running.
  6. Add your username to Docker's group. This avoids having to resort to root in order to run Docker commands (and it's the right thing to do).
    # usermod -aG docker username
    # su - username
    # id -nG username
  7. Now fix vm.max_map_count:
    # sysctl vm.max_map_count=262144
  8. Now pull Sebastien's docker stuff:
    $ docker pull sebp/elk
    Install docker-compose
  10. Install this Docker aid that makes life easier in terms of Docker command lines. You can see this in the YAML file, which obviates starting Docker with gazillions of rather lengthy options.
    $ wget
    $ chmod a+x docker-compose-Linux-x86_64
    # cp docker-compose-Linux-x86_64 /usr/local/bin/docker-compose
    $ vim ./docker-compose.yml
      elk:
        image: sebp/elk
        ports:
          - "5601:5601"
          - "9200:9200"
          - "9300:9300"
          - "5044:5044"
        ulimits:
          nofile:
            soft: "65536"
            hard: "65536"
  11. Now try to run Docker via docker-compose:
    $ docker-compose up elk
    This doesn't work, complaining that there was no known port. I googled the fool out of this problem (and wondered why I had not encountered it last week when I did this). The solution is ugly and I don't feel super confident that this is even the best way to proceed; however, it works.
  12. I created /etc/systemd/system/docker.service.d/hosts.conf into which I put:
    # vim /etc/systemd/system/docker.service.d/hosts.conf
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H fd:// -H tcp://
    Then, I bounced Docker:
    # systemctl daemon-reload
    # systemctl restart docker.service
    # systemctl status docker.service
    Dropping back down to user russ:
    $ docker-compose up elk
  13. And it was off to the races!
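One note on step 7 above: sysctl vm.max_map_count=262144 does not survive a reboot. A sketch of persisting it (the drop-in filename is my own choice, not prescribed):

```shell
# As root: record the setting in a sysctl drop-in, then reload all sysctl files.
echo 'vm.max_map_count=262144' > /etc/sysctl.d/99-elk.conf
sysctl --system
```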

Getting started exercises

I got started (on a recently configured installation of Linux Mint 19) by installing Docker for Part 1:

# apt-get install
# docker --version
Docker version 17.12.1-ce, build 7390fc6
# usermod -aG docker russ
# reboot

Then I began the exercises:

russ@host:~$ mkdir -p dev/docker-dev
russ@host:~$ cd dev/docker-dev
russ@host:~/dev/docker-dev$ docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
3. The Docker daemon created a new container from that image which runs the
   executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
   to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:

For more examples and ideas, visit:

This next bit is covered in Part 2.

russ@host:~/dev/docker-dev/exercise-1$ docker build -t friendlyhello .
Sending build context to Docker daemon   5.12kB
Step 1/7 : FROM python:2.7-slim
 ---> 14dad3ead5f4
Step 2/7 : WORKDIR /app
 ---> Using cache
 ---> 6623958da619
Step 3/7 : COPY . /app
 ---> a6eda813106a
Step 4/7 : RUN pip install --trusted-host -r requirements.txt
 ---> Running in b6e6626dfaac
Collecting Flask (from -r requirements.txt (line 1))
  Downloading (91kB)
Collecting Redis (from -r requirements.txt (line 2))
  Downloading (64kB)
Collecting itsdangerous>=0.24 (from Flask->-r requirements.txt (line 1))
  Downloading (46kB)
Collecting Jinja2>=2.10 (from Flask->-r requirements.txt (line 1))
  Downloading (126kB)
Collecting Werkzeug>=0.14 (from Flask->-r requirements.txt (line 1))
  Downloading (322kB)
Collecting click>=5.1 (from Flask->-r requirements.txt (line 1))
  Downloading (81kB)
Collecting MarkupSafe>=0.23 (from Jinja2>=2.10->Flask->-r requirements.txt (line 1))
Building wheels for collected packages: itsdangerous, MarkupSafe
  Running bdist_wheel for itsdangerous: started
  Running bdist_wheel for itsdangerous: finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/2c/4a/61/5599631c1...768
  Running bdist_wheel for MarkupSafe: started
  Running bdist_wheel for MarkupSafe: finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/33/56/20/ebe49a5c6...ffe
Successfully built itsdangerous MarkupSafe
Installing collected packages: itsdangerous, MarkupSafe, Jinja2, Werkzeug, click, Flask, Redis
Successfully installed Flask-1.0.2 Jinja2-2.10 MarkupSafe-1.0 Redis-2.10.6 Werkzeug-0.14.1 click-7.0 itsdangerous-0.24
Removing intermediate container b6e6626dfaac
 ---> 051aa1146d1c
Step 5/7 : EXPOSE 80
 ---> Running in f9d60f536bf4
Removing intermediate container f9d60f536bf4
 ---> e2e49ccb7202
Step 6/7 : ENV NAME World
 ---> Running in 3103174b4a63
Removing intermediate container 3103174b4a63
 ---> f6835ec41a96
Step 7/7 : CMD ["python", ""]
 ---> Running in 42c927ad63ae
Removing intermediate container 42c927ad63ae
 ---> 023c3e07e25c
Successfully built 023c3e07e25c
Successfully tagged friendlyhello:latest
russ@host:~/dev/docker-dev/exercise-1$ ll
total 20
drwxr-xr-x 2 russ russ 4096 Oct 10 10:10 .
drwxr-xr-x 3 russ russ 4096 Oct 10 09:26 ..
-rw-r--r-- 1 russ russ  679 Oct 10 09:28
-rw-r--r-- 1 russ russ  514 Oct 10 09:26 dockerfile
-rw-r--r-- 1 russ russ   12 Oct 10 10:10 requirements.txt
russ@host:~/dev/docker-dev/exercise-1$ docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
friendlyhello       latest              023c3e07e25c        2 minutes ago       132MB
<none>              <none>              26bada283754        42 minutes ago      120MB
python              2.7-slim            14dad3ead5f4        18 hours ago        120MB
hello-world         latest              4ab4c602aa5e        4 weeks ago         1.84kB
russ@host:~/dev/docker-dev/exercise-1$ docker run -p 4000:80 friendlyhello
 * Serving Flask app "app" (lazy loading)
 * Environment: production
   WARNING: Do not use the development server in a production environment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on (Press CTRL+C to quit)
 - - [10/Oct/2018 16:14:48] "GET / HTTP/1.1" 200 -
 - - [10/Oct/2018 16:14:48] "GET /favicon.ico HTTP/1.1" 404 -
russ@host:~/dev/docker-dev/exercise-1$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over
   to to create one.
Username: windofkeltia
Login Succeeded
russ@host:~/dev/docker-dev/exercise-1$ docker tag friendlyhello windofkeltia/get-started:part2
russ@host:~/dev/docker-dev/exercise-1$ docker image ls
REPOSITORY                 TAG                 IMAGE ID            CREATED             SIZE
friendlyhello              latest              023c3e07e25c        10 minutes ago      132MB
windofkeltia/get-started   part2               023c3e07e25c        10 minutes ago      132MB
<none>                     <none>              26bada283754        About an hour ago   120MB
python                     2.7-slim            14dad3ead5f4        18 hours ago        120MB
hello-world                latest              4ab4c602aa5e        4 weeks ago         1.84kB
russ@host:~/dev/docker-dev/exercise-1$ docker push windofkeltia/get-started:part2
The push refers to repository []
f5d5f8e0d82e: Pushed
669e5a5551be: Pushed
c0075ee77d65: Pushed
47c126cf49af: Mounted from library/python
18cc3d97f405: Mounted from library/python
80db77e224a0: Mounted from library/python
8b15606a9e3e: Mounted from library/python
part2: digest: sha256:3603d5196dd1b60df07487f3ce0833a50e830ce5a6065936e057a20025dedd19 size: 1788
russ@host:~/dev/docker-dev/exercise-1$ docker run -p 4000:80 windofkeltia/get-started:part2
 * Serving Flask app "app" (lazy loading)
 * Environment: production
   WARNING: Do not use the development server in a production environment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on (Press CTRL+C to quit)
 - - [10/Oct/2018 16:35:18] "GET / HTTP/1.1" 200 -

Here's the output (albeit in a browser):

Hello World!
Hostname: c98cf05759f8
Visits: cannot connect to Redis, counter disabled

Docker machine

Here's installing Docker machine:

russ@host:~/Downloads$ BASE=
russ@host:~/Downloads$ curl -L $BASE/docker-machine-$(uname -s)-$(uname -m) > ./docker-machine
russ@host:~/Downloads$ sudo install ./docker-machine /usr/local/bin/docker-machine
russ@host:~/Downloads$ rm ./docker-machine

* Check path for the latest version.

Must Dockerfile be capitalized?

I have created Dockerfile/dockerfile in lower case on Linux. It still works. Remember that Windows treats filenames case-insensitively, making both spellings identical. Allowing lowercase on Linux is probably how Docker avoids any ambiguity.
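Related: you don't have to rely on the default name at all. docker build's -f option names the file explicitly, so any spelling works (the filename below is arbitrary):

```shell
# Build using an explicitly named Docker file instead of ./Dockerfile.
docker build -f my.dockerfile -t myimage .
```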

10 things to avoid in Docker containers

But first, something positive:

  1. Containers are immutable: the same image tested by QA is what reaches production.
  2. Containers are lightweight: the memory footprint is very small since only memory for the main process is ever allocated.
  3. Containers are fast: they usually load as quickly as a typical Linux process, that is, in seconds or less.

However, containers are completely and utterly disposable! They are ephemeral. This fact must condition the mindset of those who develop them. And now for the negative:

  1. Don't store data in containers because they can be stopped, destroyed or replaced.
  2. Don't ship applications in pieces because containers are immutable.
  3. Don't produce a single-layer image, but make effective use of the layered filesystem and base images for the OS you choose. Use another layer for the definition of the username, another for the runtime, and still another for the application. (There's an art to this.)
  4. Don't create images from running containers using docker commit. (This is how most tutorials do it for convenience, but it's misleading.) Use Dockerfile and track changes to that file using git.
  5. Don't use only the :latest tag. It's like using SNAPSHOT in Maven.
  6. Don't run more than one process in a single container because you can't manage or update the processes individually.
  7. Don't store credentials in the image. Duh. Use environment variables.
  8. Don't run processes as root.
  9. Don't rely on IP addresses because each container has its own internal address that can change when you stop and start the container (unless you've set up a static one, but that creates other problems). Use DNS names and ports, which can be communicated between containers via environment variables or other good methods.

Limiting Docker containers to resources

By default, a container's access to host resources is unlimited. In practice, this might not be desirable. One problem in particular, though it's aberrant, is how Java interprets host resources (the well-known Docker-and-Java issues).

On Linux, it's standard procedure for the OS, when it runs out of memory (OOM), to begin killing processes to free up memory. This can bring down the entire system, just an application, or even a Docker container.

Docker attempts to palliate this risk by adjusting the OOM priority on the Docker dæmon to make it less likely to be killed; containers will die before the dæmon or other system processes do. Best practice urges you not to attempt to abuse this by way of the --oom-score-adj or --oom-kill-disable options for a container.

Memory limits

Docker can enforce hard memory limits, holding a container to no more than a specified amount (though, as noted, Java doesn't automatically respect them). Here's the scoop:

  1. Until Java 9, i.e. through Java 8, you have to tell the JVM via command-line options what it's to consider the memory limitation to be.
  2. In Java 9, you still have to use a command-line option to tell Java to respect the container limits.
  3. Beginning at some build in Java 10, probably not before build 23, Java respects the container limitation by default. A command-line option may be used to override this default (if there were a need for that).
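In command-line terms, and hedged as my understanding of the state of play (heap sizes are arbitrary):

```shell
# Java 8 (before 8u131): the JVM sees the host's memory, so cap the heap yourself.
docker run -m 512m openjdk:8 java -Xmx256m -version

# Java 8u131+ and Java 9: experimental cgroup awareness must be switched on.
docker run -m 512m openjdk:9 java -XX:+UnlockExperimentalVMOptions \
    -XX:+UseCGroupMemoryLimitForHeap -version

# Java 10+: container awareness is on by default; -XX:-UseContainerSupport disables it.
docker run -m 512m openjdk:11 java -version
```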

Docker Engine relies on a technology called control groups, which limits an application to a specific set of resources and permits Docker to share hardware resources among containers while enforcing limits on, for example, memory.

Docker options (you should look these up in real documentation):
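For reference, these are the docker run memory options I know of; consult the official documentation for authoritative detail (the values below are arbitrary):

```shell
docker run -m 512m nginx                       # hard memory limit (also --memory)
docker run -m 512m --memory-swap 1g nginx      # combined memory-plus-swap ceiling
docker run --memory-reservation 256m nginx     # soft limit enforced under contention
docker run -m 512m --oom-kill-disable nginx    # never OOM-kill this container
```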

CPU limits

For knowing the number of CPUs, there is no solution in Java prior to Java 10; from Java 10 on, Java by default respects the number set on the container.

Docker options (you should look these up in real documentation):
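Likewise, the docker run CPU options I know of, hedged the same way (values are arbitrary):

```shell
docker run --cpus 1.5 nginx                           # at most 1.5 CPUs' worth of time
docker run --cpu-shares 512 nginx                     # relative weight (default is 1024)
docker run --cpuset-cpus 0,1 nginx                    # pin the container to cores 0 and 1
docker run --cpu-period 100000 --cpu-quota 50000 nginx  # CFS quota: 50% of one CPU
```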

More best practice tips

When containers play hard to get—missing containers

There are myriad things that can go wrong, but one that bites me all the time is forgetting that some ephemeral resource being mounted and shared with a container has gone missing—like a pile of logfiles I need in testing Filebeat and Logstash:

russ@host:~/dev/orchestration$ docker stack deploy -c docker-compose.yml acme
russ@host:~/dev/orchestration$ docker service ls
ID            NAME           MODE         REPLICAS  IMAGE
42uuxvag1d9g  acme-consul    replicated   1/1
yyf85gi8qrnx  acme_filebeat  global       0/1
russ@host:~/dev/orchestration$ docker service ps acme_filebeat    (looks like Filebeat can't come up: are there any already exited?)
ID            NAME                                        DESIRED STATE CURRENT STATE                     ERROR
6sxuqgstde6g  acme_filebeat.fw17z9s7l7lx5ukvcka04epzj     Ready         Preparing less than a second ago
rjul7um32y1z  \_ acme_filebeat.fw17z9s7l7lx5ukvcka04epzj  Shutdown      Rejected 3 seconds ago            "invalid mount config for type..."
ytirpcbev5p9  \_ acme_filebeat.fw17z9s7l7lx5ukvcka04epzj  Shutdown      Rejected 8 seconds ago            "invalid mount config for type..."
sj5su4rc6r20  \_ acme_filebeat.fw17z9s7l7lx5ukvcka04epzj  Shutdown      Rejected 13 seconds ago           "invalid mount config for type..."
w0yze0zlqbmd  \_ acme_filebeat.fw17z9s7l7lx5ukvcka04epzj  Shutdown      Rejected 18 seconds ago           "invalid mount config for type..."
russ@host:~/dev/orchestration$ docker ps --format '{{.ID}}\t{{.Image}}\t{{.Status}}' -a | grep acme_filebeat
6sxuqgstde6g  acme_filebeat:latest	Up 7 minutes
qnnu4tainvq6  acme_filebeat	Exited (1) 8 seconds ago           (this one's already exited; let's look at it)
russ@host:~/dev/orchestration$ docker logs qnnu4tainvq6
2019-02-05T22:39:19Z qnnu4tainvq6  confd[10]: DEBUG (some debugging message)
2019-02-05T22:39:19Z qnnu4tainvq6  confd[10]: INFO (some informational message)
No certificate directory - skipping Certificate Rehash...
filebeat: [emerg] host not found in upstream "acme_filebeat:8083" in /etc/filebeat/conf.d/filebeat.conf:30

Disclaimer: I had to twist the log entry just above to make my point because I added it after losing the data. The point is how to notice a container's not coming up and figure out where it's failing and why. These are some steps.

The reason for the convolution is that docker service ls gives us service ids, not container ids, and docker logs needs the latter (container) id to work.
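A shortcut I sometimes use when the service name is known: filter docker ps directly for the service's containers (docker ps lists the newest first) and feed the top container id to docker logs:

```shell
# Show the logs of the most recent (possibly exited) container for a service.
docker logs $(docker ps -aq --filter name=acme_filebeat | head -1)
```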

Discovering association between containers and virtual interfaces

#!/bin/bash
# From a list of container ids gathered using command docker ps -q,
# reveal which network interface each container is associated with.
# Because of what it must do, this script can only work as root.
# Comments trace through one container example for documentation
# purposes.
# Russell Bateman, 7 January 2019
user=`id -u`
if [ $user -ne 0 ]; then
  echo "This script can only be run as root"
  exit 1
fi
for container in $( docker ps -q ); do
  # $container=b246f3eabb72, fbf88cbcd0fe
  factor=`docker inspect $container --format '{{.State.Pid}}'`
  # $factor=14488, 14433
  factor=`ip netns identify $factor`
  # $factor=ns-14488, (empty)
  if [ -n "$factor" ]; then
    factor=`ip netns list | grep $factor`
    # $factor=ns-14488 (id: 5)
    factor=`echo "$factor" | awk '{print $3}'`
    # $factor=5)
    factor=`echo "$factor" | tr -d ')'`
    # $factor=5
    interface=`ip link show | grep -B1 "link-netnsid $factor" | awk '{print $2}'`
    # $interface=veth3fb8e9f@if22: 52:b1:2d:41:0c:b1
    interface=`echo $interface | tr "@" " "`
    # $interface=veth3fb8e9f if22: 52:b1:2d:41:0c:b1
    interface=`echo $interface | awk '{print $1}'`
    # $interface=veth3fb8e9f
    echo "$container: $interface"
  else
    echo "$container: (none)"
  fi
done
# vim: set tabstop=2 shiftwidth=2 expandtab:

Sample output:

russ@host:~/bin$ sudo ./
b246f3eabb72: veth3fb8e9f
fbf88cbcd0fe: (none)

Labels on nodes in Docker Swarm

To apply labels to a node (my host node is named moria):

$ docker node update            \
    --label-add acme.ks=true    \
    --label-add acme.nginx=true \
    --label-add acme.elk=true   \
    moria
moria    (reply from Docker)

To remove a label from a node (my host node is named moria):

$ docker node update --label-rm acme.elk=true moria
moria    (reply from Docker)

To see what labels are on node, do this (my host node is named moria):

$ docker node ls -q | xargs docker node inspect \
>   --format '{{ .ID }} [{{ .Description.Hostname }}]: {{ range $k, $v := .Spec.Labels }}{{ $k }}={{ $v }} {{end}}'
zkxs95a465yzp68egf98rf9xj [moria]: acme.deploy=true acme.elk=true acme.ks=true acme.nginx=true
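The point of these labels is placement: in the compose file for a stack, a deploy constraint can pin a service to labeled nodes. A hypothetical sketch reusing the acme.elk label above:

```yaml
version: "3"
services:
  elk:
    image: sebp/elk
    deploy:
      placement:
        constraints:
          - node.labels.acme.elk == true   # run only on nodes labeled acme.elk=true
```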

Tip on debugging image construction

If your image doesn't come up looking like what you're after, you can drop in RUN commands a little like a printf( message ); exit( -1 ) in C. This isn't the funnest way to debug, but it is effective given the alternatives.

Here we create a container whose sole purpose is to print "Hello world!" when it runs. It does this because of a text file we create in the image. Let's pretend that this step was not working and we wanted to stop and see whether it worked. Obviously, this technique gains us little in this tiny example, but a longer image exhibiting problems could be short-circuited to allow us to inspect something like this.

FROM       alpine:latest
LABEL      maintainer="russ"
RUN        echo "Hello world!" > /tmp/hello_world
ENTRYPOINT [ "cat", "/tmp/hello_world" ]

With the created image, build and run it:

$ docker build --tag poop .
Sending build context to Docker daemon  14.85kB
Step 1/4 : FROM       alpine:latest
 ---> caf27325b298
Step 2/4 : LABEL      maintainer="russ"
 ---> Using cache
 ---> 2e0123e32f9f
Step 3/4 : RUN        echo "Hello world!" > /tmp/hello_world
 ---> Running in a5ec38cab8cd
Removing intermediate container a5ec38cab8cd
 ---> 68101b8bea3b
Step 4/4 : ENTRYPOINT [ "cat", "/tmp/hello_world" ]
 ---> Running in a49af321926d
Removing intermediate container a49af321926d
 ---> d234fc3a6377
Successfully built d234fc3a6377
Successfully tagged poop:latest
$ docker run -it poop
Hello world!

Sure, it builds and runs perfectly. However, let's say that we were having trouble with steps prior to RUN and wanted to debug our image to that point. Change Dockerfile to exit thus:

FROM       alpine:latest
LABEL      maintainer="russ"
RUN        echo "Hello world!" > /tmp/hello_world ; false ; exit 0
ENTRYPOINT [ "cat", "/tmp/hello_world" ]

Then build it.

$ docker build --tag poop .
Sending build context to Docker daemon  14.85kB
Step 1/4 : FROM       alpine:latest
 ---> caf27325b298
Step 2/4 : LABEL      maintainer="russ"
 ---> Using cache
 ---> 2e0123e32f9f
Step 3/4 : RUN        echo "Hello world!" > /tmp/hello_world ; false ; exit 0
 ---> Using cache
 ---> b37971c6332e
Step 4/4 : ENTRYPOINT [ "cat", "/tmp/hello_world" ]
 ---> Using cache
 ---> 275408c38210
Successfully built 275408c38210
Successfully tagged poop:latest

Now run it as below. Option --entrypoint overrides whatever ENTRYPOINT we specified in Dockerfile. In our case, we're going to tell it not to execute cat /tmp/hello_world but, instead, to bring up the Bourne shell (the Alpine image doesn't offer bash) so we can look around. We could have done things a little differently using docker exec, but this short-circuits execution and gets us directly inside the container where the error can be looked at.

So, in case this is confusing, what we're doing now is:

  1. Instructing the image to stop container configuration at the end of our doctored RUN command.
  2. Replacing ENTRYPOINT (which, in a more complex image might be a lot further down instead of right after RUN).
  3. Coming up in the container which will only be built as far as that RUN command we doctored.

We can examine whether hello_world got created on /tmp, for example:

$ docker run -it --entrypoint /bin/sh poop
/ # alias ll='ls -alg'
/ # ll /tmp
total 12
drwxrwxrwt    1 root          4096 Feb  8 16:46 .
drwxr-xr-x    1 root          4096 Feb  8 16:54 ..
-rw-r--r--    1 root            13 Feb  8 16:46 hello_world
/ # cat /tmp/hello_world
Hello world!
/ #