
Minimalistic Docker containers in 3 tricks

One of the common mistakes we make when starting with Docker is not taking into account the size of the produced container. This usually happens because our goal is to develop fast and easily for our own personal use.
But when we want to push a container to a public registry, or distribute it within our company, the size of the container can have a big impact on the discussion about why it is so awesome to work in virtualized environments.

I want to propose you 3 tricks to minimize your containers.

1 – Clean up after every command

It is not easy to see, but Docker treats each command in your Dockerfile as a layer. Each layer is a folder of its own that gets overlaid on top of the rest and keeps all the information produced by the command you executed.

It is typical to start your Dockerfile by writing something like:

FROM ubuntu
RUN apt-get update
RUN apt-get install -y wget

These simple commands will produce 2 separate layers. The first will contain all the indexes coming from apt-get update, and the second layer will install your package.
As you may imagine, you don’t want the indexes lying around in your container, so you may be tempted to do a cleanup afterwards.

FROM ubuntu
RUN apt-get update
RUN apt-get install -y wget
RUN apt-get clean

But the damage is already done. The first layer will never change once it has been built. Even if you check now, the container won’t contain the indexes, but the layer will still contain them, occupying precious space.

The trick is to atomize the commands, including all the required cleanups right after them. Remember that every command in a Dockerfile is a layer!

A much better Dockerfile will do:

FROM ubuntu
RUN apt-get update \
    && apt-get install -y wget \
    && apt-get clean
  • You can split lines in a Dockerfile by appending \ at the end.
  • && won’t extend the duration of the build after a previous command fails: the chain stops at the first failing command.
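On Debian-based images it is also common to go one step further and delete the index files themselves, since apt-get clean alone does not remove the lists under /var/lib/apt/lists. A sketch of the combined command:

```dockerfile
FROM ubuntu
# Update, install and clean up in a single RUN so the package
# indexes are never committed into any layer of the image
RUN apt-get update \
    && apt-get install -y wget \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
```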

2 – Choose your base image wisely

Some images are better designed to be small. Proof of this is the alpine image, available as an open source image. The distribution’s package manager, apk, is well designed to avoid generating unwanted cache. The same process of getting wget in alpine looks much cleaner, and the resulting layer is likely to be smaller.

FROM alpine
RUN apk add --no-cache wget
  • Choosing a different base image may change the package names you need or even the locations you expect the files to be installed.

3 – Build in another container

This is my favourite trick.

If you need to compile a program, instead of doing the whole process in the same container, just create a temporary one and then copy the results into the final one.

# The keyword in this line is 'as'
FROM alpine:latest as temp

RUN apk add --no-cache git
RUN git clone --recursive https://github.com/vysheng/tg

RUN apk add --no-cache libconfig-dev readline-dev libexecinfo-dev \
                       python-dev openssl-dev libevent-dev \
                       jansson-dev lua-dev
RUN apk add --no-cache g++ make

# Actual build
RUN ./tg/configure && make

# Final image
FROM alpine:latest

RUN apk add --no-cache libevent jansson libconfig libexecinfo \
                       readline lua openssl

COPY --from=temp /bin/telegram-cli /bin/telegram-cli

ENTRYPOINT ["/bin/telegram-cli"]

This container I wrote for having a containerized telegram-cli is my example on minimalistic containers.

  1. FROM alpine:latest as temp This first line creates a temporary build stage named temp that can later be used to copy things from, as described in the reference. This is quite handy for creating temporary build environments that are later discarded.

  2. RUN apk add --no-cache libconfig-dev ... Installs the required dev dependencies, containing in this case the header files and all the required build information. And that is something we don’t want in our final container!

  3. FROM alpine:latest The next FROM command creates the final image.

  4. RUN apk add --no-cache libconfig ... Now it is time to install only the runtime dependencies.

  5. COPY --from=temp /bin/telegram-cli ... This is where the magic happens: COPY --from=temp tells the Docker daemon to perform the copy from the previously named stage. So we can copy the built program from a build container into a runtime container.
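Stages can also be referenced by their zero-based index instead of a name, which comes in handy when a stage was not given one:

```dockerfile
# Equivalent to --from=temp in the example above, since the build
# stage is the first (index 0) FROM block in the Dockerfile
COPY --from=0 /bin/telegram-cli /bin/telegram-cli
```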

Docker source hardcoded to the docker hub

I just needed to complain about this.

Docker, the beloved containerization tool, is not as open as they claim it to be. The main source of containers is hardcoded in the tool to point to the Docker Hub, and that site is now turning into a marketplace.

It’s the first time I have seen a package management system that forces you to use a specific source, and I’m not happy with the decision.

I have my own registry and I want to use it as my default!
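The usual workaround, short of running a full mirror, is to qualify image names with the registry’s hostname explicitly; a sketch, assuming a hypothetical private registry at registry.example.com:

```dockerfile
# registry.example.com is a placeholder for your own registry;
# a fully qualified name bypasses the Docker Hub default
FROM registry.example.com/library/ubuntu:16.04
```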

Chef and Docker for a rapid infrastructure development

We started using Chef a while ago, and one of the first steps we took was to use Docker instead of Vagrant for running tests, due to its faster setup.
After all this time I can say it was a nice experience, and now our CI is happily testing our cookbooks in minutes. So…

What do you need?

Basically, you need to install the latest ChefDK, which ships the gem kitchen-dokken by default. This gem enables light-weight tooling that uses Docker containers for executing Kitchen.

Setup

After that you just need to set up your kitchen.yml to use dokken as the driver, like so:

---
driver:
  name: dokken
  chef_version: latest

transport:
  name: dokken

provisioner:
  name: dokken

...

The transport and the provisioner are set to dokken so Kitchen will use the lighter tooling from the driver. Then you can set up your platforms to test your cookbooks:

platforms:
  - name: ubuntu-16.04
    driver:
      image: ubuntu:16.04
      pid_one_command: /bin/systemd
      intermediate_instructions:
        - RUN /usr/bin/apt-get update

Considerations

  • Docker is designed for isolating and packaging processes that run as the only process inside a container.
    If your cookbook sets up services, you may need to choose a more complete Docker base image, which is normally bigger in size, and you also need to explicitly start the init daemon (such as systemd), as you saw in the code snippet earlier.

  • Do not try to run Docker inside a container. If your cookbook uses Docker somehow, it is better to use Vagrant instead, because otherwise you would need to manually set up the container to host Docker, and that is a pain.