Docker Notes

Russell Bateman
September 2016
last update:

Quick Docker notes in preparation for beginning to make use of this technology.

The point of container technologies is to wrap up a software implementation in a complete filesystem and operating environment that contains everything it needs to run: the application itself, runtime, system tools and libraries—anything and everything you install on a server. This guarantees that the implementation will always run the same regardless of the environment it is running in. Um, well, in theory.

Docker is a tool that is designed to benefit both developers and system administrators. This makes it an essential arrow in the DevOps quiver. For developers especially, it means not having to focus on writing code that must take into account the greater environment (operating system, tools and frameworks present, etc.). Another benefit is the many prebuilt, open-source images already published that do something a developer wants to do.

Elasticsearch, Logstash, Kibana (ELK) Docker image documentation comes to mind as an excellent example of this.

Docker versions
Version Release date
18.09 11/08/2018
18.06.0-ce 07/18/2018
18.05.0-ce 05/09/2018
18.04.0-ce 04/10/2018
18.02.0-ce 02/07/2018
18.01.0-ce 01/10/2018
17.11.0-ce 11/20/2017
17.10.0-ce 10/17/2017
17.07.0-ce 08/29/2017
17.05.0-ce 05/04/2017
17.04.0-ce 04/03/2017
17.03.0-ce 03/01/2017
1.13.0 01/18/2017
1.12.0 07/28/2016
1.11.0 04/12/2016
1.10.0 02/04/2016
1.9.0 10/29/2015
1.8.0 08/11/2015
1.7.0 06/18/2015
1.6.0 04/16/2015

A glossary it occurs to me to add to this:

Term Definition
bind mount the mounting of a small piece of the host filesystem into a container; more limited than (mounting) a volume. This is the recommended method of sharing configuration files between the host machine and the container. (Configuration-file content commonly varies between instantiations of running containers.) For example, /etc/resolv.conf could be a useful bind mount in a container.

If your current working directory were ~/Downloads and you had filebeat.yml there, assuming the running executable in the Docker container was looking for this file, the following command line would provide (expose) it (and only this one file of the host's filesystem):

$ docker run --mount type=bind,source="$(pwd)"/filebeat.yml,\
    target=/usr/share/filebeat/filebeat.yml \
    <image>

(<image> stands in for whatever image you're running.)
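The same single-file bind mount can also be written with the older -v shorthand. A sketch, assuming filebeat.yml sits in the current directory; the image name my-filebeat is a hypothetical placeholder, and the command is echoed rather than run, since no dæmon is assumed here:

```shell
# -v source:target[:options] is a bind mount when source is an
# absolute path (rather than a named volume).
SRC="$(pwd)/filebeat.yml"
echo docker run -v "$SRC:/usr/share/filebeat/filebeat.yml:ro" my-filebeat
```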
volume mount the mounting of an entire volume from the host filesystem into a container. This is the recommended method of sharing data between different containers when that is necessary. Mounted volumes are usually filesystems off path /var/lib/docker/volumes/. In the example below, (read-only) HTML content is set up for use by nginx running in a container:
$ docker run --volume nginx:/usr/share/nginx/html:ro nginx:latest

When you run docker inspect, you see (in the Mounts section):

"Mounts": [
    {
        "Type": "volume",
        "Name": "nginx",
        "Source": "/var/lib/docker/volumes/nginx/_data",
        "Destination": "/usr/share/nginx/html",
        "Driver": "local",
        "Mode": "",
        "RW": false,
        "Propagation": ""
    }
]
Docker Compose a tool for defining and running multi-container Docker applications. You use a YAML file, docker-compose.yml, to configure your application's services. With a single command, docker-compose up, you create and start all the services from your configuration.
Docker Swarm a clustering and scheduling tool for Docker containers. With Swarm, administrators and developers can establish and manage an entire cluster of Docker nodes as a single, virtual system.
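To make the Docker Compose entry concrete, here is a minimal docker-compose.yml sketch of my own (the service name web and the nginx image are illustrative, not from any exercise in these notes):

```yaml
version: "3"
services:
  web:
    image: nginx:latest   # any published image will do
    ports:
      - "8080:80"         # map host port 8080 to container port 80
```

With this file in the current directory, docker-compose up starts the service and docker-compose down tears it down.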

Install Docker on Ubuntu 18.04.1 Server

Docker isn't available from the usual repositories.

# apt-get update
# apt-get install apt-transport-https ca-certificates curl software-properties-common
# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
# apt-get update
# apt-cache policy docker-ce
# apt-get install docker-ce

Installation enables the Docker dæmon to start on boot, but you can check its status:

# systemctl status docker

Add your username to Docker's group. This avoids having to resort to root in order to run any docker command:

# usermod -aG docker username
# su - username               # to set the group
$ id -nG username

Adopt user into group docker

When you try the Docker hello-world example, it fails:

russ@host:~$ docker run hello-world
docker: Got permission denied while trying to connect to the Docker daemon socket at
unix:///var/run/docker.sock: connect: permission denied.

This is because:

russ@host:~$ ll /var/run/docker.sock
srw-rw---- 1 root docker 0 Oct 10 08:48 /var/run/docker.sock

You can use sudo to run the example, or you can do what you should, which is to adopt yourself into group docker:

russ@host:~$ sudo usermod -aG docker username

Then log out and back in (to instantiate the group membership change).

Docker and certificates

Depending on your environment, you may need special certificates to work with Docker repositories or artifactories. This would happen in the case where your employer had private ones (instead of or alongside Docker Hub).

To add certificates for use by Docker on your host, follow How to juggle certificates in Ubuntu (and Mint). There are comments there on installing certificates on CentOS (Red Hat). A most important point to take away is that it will not appear to you that you've accomplished anything until you bounce Docker.

The Dockerfile

Here's a Docker (configuration) file example and some comments on its content.

FROM debian:jessie
MAINTAINER Daniel Alan Miller [email protected]
RUN apt-key adv --keyserver --recv-keys 1614552E5765227AEC39EFCFA7E00EF33A8F2399
RUN echo "deb jessie main" > /etc/apt/sources.list.d/rethinkdb.list
RUN apt-get update \
    && apt-get install -y rethinkdb=$RETHINKDB_PACKAGE_VERSION \
    && rm -rf /var/lib/apt/lists/*
VOLUME ["/data"]
CMD ["rethinkdb", "--bind", "all"]
EXPOSE 28015 29015 8080
  1. FROM —specifies the base image upon which this image is built (pulling it if necessary).

  2. MAINTAINER —comment on whose Docker configuration this is.

  3. RUN —defines a command executed while building the image; each RUN runs in an intermediate container and its result is committed as a new layer. (These commands run at build time, not when the finished container starts.) If your image comes with Python, this could be in Python.

  4. ENV —sets an environment variable and exports it in the container.

  5. VOLUME —defines a path in the container that Docker exposes to the host system (the host running Docker); it can be mapped using the -v argument when launching the container.

  6. WORKDIR —changes the current working directory of the container in case more commands are to be run (in that location).

  7. CMD —supplies the default command (and/or its default arguments) for the container, in the format:
    CMD [ "executable", "argument", "more arguments" ]
    There should be only one instance of CMD; if several appear, only the last one takes effect. To fix the executable and let CMD supply only default arguments, see ENTRYPOINT:
    ENTRYPOINT [ "/swarm" ]
    CMD [ "--help" ]

  8. EXPOSE —exposes the listed ports for mapping to the host via the -p option when launching a container.
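Tying several of these directives together, a minimal sketch of my own (the file names app.py and requirements.txt are illustrative):

```dockerfile
FROM python:2.7-slim                  # base image
WORKDIR /app                          # subsequent commands run here
COPY . /app                           # copy the build context into the image
RUN pip install -r requirements.txt   # build-time command (a new layer)
ENV NAME World                        # exported in the container
EXPOSE 80                             # map with -p at launch
CMD ["python", "app.py"]              # default command at container start
```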

Docker best practices
  1. Automate everything. Don't assume that the host server will be around for a long time. Build in the proper automation for recovering a Docker instance, including data on the local filesystem. In a proper Docker environment, hosting hardware/software must be replaced with new instances upon failure and Docker-hosted services must not skip a beat.
  2. Orchestrate containers. Application code must not assume that the executing container will live forever. The application (service) should always exist, because that's the purpose anyway, but it must not rely on any particular (especially hardware) instance. For example, a database must mount an external volume, never a local one, for its data files.

Docker limits

Containers automatically have access to the entire range of RAM and CPU processing power of their host. If you are running a single container, this may not be an issue, but when you start hosting multiple containers, each one will then start stepping on the others.
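Docker does let you cap a container's share at run time. A sketch using real docker-run options (the limits chosen are arbitrary; the command is echoed rather than executed, since no dæmon is assumed here):

```shell
# --memory caps RAM; --cpus caps CPU (in fractional cores).
LIMITS="--memory=256m --cpus=0.5"
echo "docker run $LIMITS hello-world"
```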

Docker and Snap versus package-manager installation

Do not use Snap. It's cool and snappy, but it makes things more difficult when dealing with everyday write-ups, tutorials, questions, etc.

The sample, extended installation done here is motivated by my interest in containerizing ELK; it shows the steps in the interest of practical application.

  1. If you did use Snap, for example, when offered the option to install Docker as part of the Ubuntu 18.04.1 Bionic Beaver Server installation, do this:
    # snap remove docker
    Install docker
  3. Now do the proper, Debian package installation:
    # apt-get update
    # apt-get install apt-transport-https ca-certificates software-properties-common
    # curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
  4. It may be important to you to verify that you have the key with the fingerprint thus (and you'll see):
    # apt-key fingerprint 0EBFCD88
    pub   rsa4096 2017-02-22 [SCEA]
          9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88*
    uid           [ unknown] Docker Release (CE deb) <[email protected]>
    sub   rsa4096 2017-02-22 [S]
    * As I understand it, this is what you should see when you test the advanced package tools key.
  5. Now set up the stable repository using the key:
    # add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
    # apt-get update
    # apt-cache policy docker-ce
    # apt-get install docker-ce
    # systemctl status docker.service
    Docker is now running.
  6. Add your username to Docker's group. This avoids having to resort to root in order to run Docker commands (and it's the right thing to do).
    # usermod -aG docker username
    # su - username
    $ id -nG username
  7. Now fix vm.max_map_count:
    # sysctl vm.max_map_count=262144
  8. Now pull Sebastien's docker stuff:
    $ docker pull sebp/elk
    Install docker-compose
  10. Install this Docker aid that makes life easier in terms of Docker command lines: the YAML file obviates starting Docker with gazillions of rather lengthy options.
    $ wget
    $ chmod a+x docker-compose-Linux-x86_64
    # cp docker-compose-Linux-x86_64 /usr/local/bin/docker-compose
    $ vim ./docker-compose.yml
      elk:
        image: sebp/elk
        ports:
          - "5601:5601"
          - "9200:9200"
          - "9300:9300"
          - "5044:5044"
        ulimits:
          nofile:
            soft: "65536"
            hard: "65536"
  11. Now try to run Docker via docker-compose:
    $ docker-compose up elk
    This doesn't work, complaining that there was no known port. I googled the fool out of this problem (and wondered why I had not encountered it last week when I did this). The solution is ugly and I don't feel super confident that it is even the best way to proceed; however, it works.
  12. I created /etc/systemd/system/docker.service.d/hosts.conf into which I put:
    # vim /etc/systemd/system/docker.service.d/hosts.conf
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H fd:// -H tcp://
    (The [Service] header is required in a drop-in, and the empty ExecStart= line clears the original setting; without it, systemd complains that docker.service has more than one ExecStart.)
    Then, I bounced Docker:
    # systemctl daemon-reload
    # systemctl restart docker.service
    # systemctl status docker.service
    Dropping back down to user russ:
    $ docker-compose up elk
  13. And it was off to the races!
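One caveat on step 7: a setting made with sysctl on the command line lasts only until reboot. To persist vm.max_map_count, a file along these lines (the name is my own choice) can be dropped under /etc/sysctl.d/:

```
# /etc/sysctl.d/60-max-map-count.conf
vm.max_map_count=262144
```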

Getting started exercises

I got started (on a recently configured installation of Linux Mint 19) by installing Docker for Part 1:

# apt-get install docker.io
# docker --version
Docker version 17.12.1-ce, build 7390fc6
# usermod -aG docker russ
# reboot

Then I began the exercises:

russ@host:~$ mkdir -p dev/docker-dev
russ@host:~$ cd dev/docker-dev
russ@host:~/dev/docker-dev$ docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
3. The Docker daemon created a new container from that image which runs the
   executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
   to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/
This next bit is covered in Part 2.

russ@host:~/dev/docker-dev/exercise-1$ docker build -t friendlyhello .
Sending build context to Docker daemon   5.12kB
Step 1/7 : FROM python:2.7-slim
 ---> 14dad3ead5f4
Step 2/7 : WORKDIR /app
 ---> Using cache
 ---> 6623958da619
Step 3/7 : COPY . /app
 ---> a6eda813106a
Step 4/7 : RUN pip install --trusted-host pypi.python.org -r requirements.txt
 ---> Running in b6e6626dfaac
Collecting Flask (from -r requirements.txt (line 1))
  Downloading (91kB)
Collecting Redis (from -r requirements.txt (line 2))
  Downloading (64kB)
Collecting itsdangerous>=0.24 (from Flask->-r requirements.txt (line 1))
  Downloading (46kB)
Collecting Jinja2>=2.10 (from Flask->-r requirements.txt (line 1))
  Downloading (126kB)
Collecting Werkzeug>=0.14 (from Flask->-r requirements.txt (line 1))
  Downloading (322kB)
Collecting click>=5.1 (from Flask->-r requirements.txt (line 1))
  Downloading (81kB)
Collecting MarkupSafe>=0.23 (from Jinja2>=2.10->Flask->-r requirements.txt (line 1))
Building wheels for collected packages: itsdangerous, MarkupSafe
  Running bdist_wheel for itsdangerous: started
  Running bdist_wheel for itsdangerous: finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/2c/4a/61/5599631c1...768
  Running bdist_wheel for MarkupSafe: started
  Running bdist_wheel for MarkupSafe: finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/33/56/20/ebe49a5c6...ffe
Successfully built itsdangerous MarkupSafe
Installing collected packages: itsdangerous, MarkupSafe, Jinja2, Werkzeug, click, Flask, Redis
Successfully installed Flask-1.0.2 Jinja2-2.10 MarkupSafe-1.0 Redis-2.10.6 Werkzeug-0.14.1 click-7.0 itsdangerous-0.24
Removing intermediate container b6e6626dfaac
 ---> 051aa1146d1c
Step 5/7 : EXPOSE 80
 ---> Running in f9d60f536bf4
Removing intermediate container f9d60f536bf4
 ---> e2e49ccb7202
Step 6/7 : ENV NAME World
 ---> Running in 3103174b4a63
Removing intermediate container 3103174b4a63
 ---> f6835ec41a96
Step 7/7 : CMD ["python", "app.py"]
 ---> Running in 42c927ad63ae
Removing intermediate container 42c927ad63ae
 ---> 023c3e07e25c
Successfully built 023c3e07e25c
Successfully tagged friendlyhello:latest
russ@host:~/dev/docker-dev/exercise-1$ ll
total 20
drwxr-xr-x 2 russ russ 4096 Oct 10 10:10 .
drwxr-xr-x 3 russ russ 4096 Oct 10 09:26 ..
-rw-r--r-- 1 russ russ  679 Oct 10 09:28 app.py
-rw-r--r-- 1 russ russ  514 Oct 10 09:26 dockerfile
-rw-r--r-- 1 russ russ   12 Oct 10 10:10 requirements.txt
russ@host:~/dev/docker-dev/exercise-1$ docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
friendlyhello       latest              023c3e07e25c        2 minutes ago       132MB
<none>              <none>              26bada283754        42 minutes ago      120MB
python              2.7-slim            14dad3ead5f4        18 hours ago        120MB
hello-world         latest              4ab4c602aa5e        4 weeks ago         1.84kB
russ@host:~/dev/docker-dev/exercise-1$ docker run -p 4000:80 friendlyhello
 * Serving Flask app "app" (lazy loading)
 * Environment: production
   WARNING: Do not use the development server in a production environment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
 - - [10/Oct/2018 16:14:48] "GET / HTTP/1.1" 200 -
 - - [10/Oct/2018 16:14:48] "GET /favicon.ico HTTP/1.1" 404 -
russ@host:~/dev/docker-dev/exercise-1$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over
   to https://hub.docker.com to create one.
Username: windofkeltia
Login Succeeded
russ@host:~/dev/docker-dev/exercise-1$ docker tag friendlyhello windofkeltia/get-started:part2
russ@host:~/dev/docker-dev/exercise-1$ docker image ls
REPOSITORY                 TAG                 IMAGE ID            CREATED             SIZE
friendlyhello              latest              023c3e07e25c        10 minutes ago      132MB
windofkeltia/get-started   part2               023c3e07e25c        10 minutes ago      132MB
<none>                     <none>              26bada283754        About an hour ago   120MB
python                     2.7-slim            14dad3ead5f4        18 hours ago        120MB
hello-world                latest              4ab4c602aa5e        4 weeks ago         1.84kB
russ@host:~/dev/docker-dev/exercise-1$ docker push windofkeltia/get-started:part2
The push refers to repository [docker.io/windofkeltia/get-started]
f5d5f8e0d82e: Pushed
669e5a5551be: Pushed
c0075ee77d65: Pushed
47c126cf49af: Mounted from library/python
18cc3d97f405: Mounted from library/python
80db77e224a0: Mounted from library/python
8b15606a9e3e: Mounted from library/python
part2: digest: sha256:3603d5196dd1b60df07487f3ce0833a50e830ce5a6065936e057a20025dedd19 size: 1788
russ@host:~/dev/docker-dev/exercise-1$ docker run -p 4000:80 windofkeltia/get-started:part2
 * Serving Flask app "app" (lazy loading)
 * Environment: production
   WARNING: Do not use the development server in a production environment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
 - - [10/Oct/2018 16:35:18] "GET / HTTP/1.1" 200 -

Here's the output (albeit in a browser):

Hello World!
Hostname: c98cf05759f8
Visits: cannot connect to Redis, counter disabled

Docker-compose YAML for exercise above

At this point, we're looking at Part 3 of the Get Started Docker introductory exercises.

First, install docker-compose:

russ@host:~/Downloads$ wget
russ@host:~/Downloads$ chmod a+x docker-compose-Linux-x86_64
russ@host:~/Downloads$ sudo cp docker-compose-Linux-x86_64 /usr/local/bin/docker-compose
russ@host:~/Downloads$ which docker-compose
/usr/local/bin/docker-compose
russ@host:~/Downloads$ docker-compose --version
docker-compose version 1.22.0, build f46880fe
russ@host:~/Downloads$ rm docker-compose-Linux-x86_64

Here's the docker-compose YAML file for the contained application we created earlier (see note above).

version: "3"
services:
  web:
    image: windofkeltia/get-started:part2
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "4000:80"
    networks:
      - webnet
networks:
  webnet:

Repeating the useful, guiding explanations, this is what the YAML file accomplishes:

  1. Pull the image, the one we uploaded ourselves to the registry, from that registry.
  2. Run 5 instances of that image as a service called web, limiting each one to use, at most, 10% of the CPU (across all cores), and 50MB of RAM.
  3. Immediately restart containers if one fails.
  4. Map port 4000 on the host to web's port 80.
  5. Instruct web's containers to share port 80 via a load-balanced network called webnet. (Internally, the containers themselves publish to web's port 80 at an ephemeral port.)
  6. Define the webnet network with the default settings (which is a load-balanced overlay network).

Docker machine

Here's installing Docker machine:

russ@host:~/Downloads$ BASE=
russ@host:~/Downloads$ curl -L $BASE/docker-machine-$(uname -s)-$(uname -m) > ./docker-machine
russ@host:~/Downloads$ sudo install ./docker-machine /usr/local/bin/docker-machine
russ@host:~/Downloads$ rm ./docker-machine

* Check path for the latest version.

Must Dockerfile be capitalized?

I have created Dockerfile in lower case (dockerfile) on Linux, and it still works. Remember that Windows treats filenames case-insensitively, making both spellings identical there. Allowing lowercase on Linux is probably how Docker avoids any ambiguity.

10 things to avoid in Docker containers

But first, something positive:

  1. Containers are immutable: the same image tested by QA is what reaches production.
  2. Containers are lightweight: the memory footprint is very small since only memory for the main process is ever allocated.
  3. Containers are fast: they usually load as quickly as a typical Linux process, that is, in seconds or less.

However, containers are completely and utterly disposable! They are ephemeral. This fact must condition the mindset of those who develop them. And now for the negative:

  1. Don't store data in containers because they can be stopped, destroyed or replaced.
  2. Don't ship applications in pieces because containers are immutable.
  3. Don't produce a single-layer image, but make effective use of the layered filesystem and base images for the OS you choose. Use another layer for the definition of the username, another for the runtime, and still another for the application. (There's an art to this.)
  4. Don't create images from running containers using docker commit. (This is how most tutorials do it for convenience, but it's misleading.) Use Dockerfile and track changes to that file using git.
  5. Don't use only the :latest tag. It's like using SNAPSHOT in Maven.
  6. Don't run more than one process in a single container because you can't manage or update the processes individually.
  7. Don't store credentials in the image. Duh. Use environment variables.
  8. Don't run processes as root.
  9. Don't rely on IP addresses because each container will have its own, internal address that could change if you start and stop the container (unless you've set up a static one, but that creates other problems of how data works). Use DNS and names that, along with ports, can be communicated via environment variables or other, good methods between containers.
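For item 8, a sketch of what dropping root looks like in a Dockerfile (the user name appuser is hypothetical):

```dockerfile
FROM debian:jessie
# Create an unprivileged account; everything after USER runs as it.
RUN useradd --create-home appuser
USER appuser
CMD ["bash"]
```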