
Roadmap Posts

Minimalistic Docker containers in 3 tricks

One of the common mistakes we make when starting with Docker is not paying attention to the size of the images we produce. This usually happens because our goal is to develop fast and easily for our own personal use.
But when we want to push an image to a public registry, or distribute it within our company, its size can have a big impact on the discussion about why it is so awesome to work in containerized environments.

I want to propose 3 tricks to minimize your containers.

1 – Clean up after every command

It's not easy to see, but Docker treats each command in your Dockerfile as a layer. Each layer is a folder of its own that gets overlaid on top of the others, and it keeps all the files produced by the command you executed.

It's typical to start your Dockerfile by writing something like:

FROM ubuntu
RUN apt-get update
RUN apt-get install -y wget

These commands produce two separate layers. The first one contains all the package indexes coming from apt-get update, and the second one installs your package.
As you may imagine, you don't want the indexes lying around in your image, so you may be tempted to clean up afterward:

FROM ubuntu
RUN apt-get update
RUN apt-get install -y wget
RUN apt-get clean

But the damage is already done. A layer never changes once it has been built. Even if you check now and the container doesn't contain the indexes, the layer still does, occupying precious space.

The trick is to make your commands atomic, including all the required cleanups right after them. Remember that every command in a Dockerfile is a layer!

A much better Dockerfile will do:

FROM ubuntu
RUN apt-get update \
    && apt-get install -y wget \
    && apt-get clean
  • You can split lines in a Dockerfile by appending \ at the end.
  • && stops the build as soon as a previous command fails, so a broken step never gets baked into a layer.
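
You can check the effect yourself with docker history, which lists an image's layers and their sizes. A quick sketch, assuming you save the two Dockerfiles above as Dockerfile.dirty and Dockerfile.clean (those file names and the image tags are my own choice):

```shell
# Build the "cleanup in a separate RUN" version and the atomic version
docker build -t wget-dirty -f Dockerfile.dirty .
docker build -t wget-clean -f Dockerfile.clean .

# Compare the layers: wget-dirty keeps a sizable layer
# for the indexes created by 'apt-get update'
docker history wget-dirty
docker history wget-clean
```

The CREATED BY and SIZE columns make it obvious which command produced the wasted space.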

2 – Choose your base image wisely

Some images are designed to be small. A good example is the alpine image, an open-source base image of only a few megabytes. Its package manager, apk, is well designed to avoid leaving unwanted caches behind. The same process of installing wget in alpine looks much cleaner, and the resulting layer is much smaller.

FROM alpine
RUN apk add --no-cache wget
  • Choosing a different base image may change the package names you need, or even the locations where you expect files to be installed.
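
The difference in base image size is easy to see by pulling both images and listing them. Exact numbers vary by tag and architecture, but alpine is typically a few megabytes against tens of megabytes for ubuntu:

```shell
docker pull ubuntu
docker pull alpine
# Check the SIZE column: alpine is roughly an order of magnitude smaller
docker images ubuntu alpine
```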

3 – Build in another container

This is my favorite trick.

If you need to compile a program, instead of doing the whole process in the same image, create a temporary build container and then copy only the results into the final one.

# The keyword in this line is 'as'
FROM alpine:latest as temp

RUN apk add --no-cache git
RUN git clone --recursive https://github.com/vysheng/tg.git

RUN apk add --no-cache libconfig-dev readline-dev libexecinfo-dev \
                       python-dev openssl-dev libevent-dev \
                       jansson-dev lua-dev
RUN apk add --no-cache g++ make

# Actual build
RUN ./tg/configure && make

# Final image
FROM alpine:latest

RUN apk add --no-cache libevent jansson libconfig libexecinfo \
                       readline lua openssl

COPY --from=temp /bin/telegram-cli /bin/telegram-cli

ENTRYPOINT ["/bin/telegram-cli"]

This container, which I wrote to have a containerized telegram-cli, is my example of a minimalistic container.

  1. FROM alpine:latest as temp This first line creates a temporary build stage named temp that can later be copied from, as described in the Dockerfile reference. This is quite handy for creating temporary build environments that are later discarded.
  2. RUN apk add --no-cache libconfig-dev ... Installs the required dev dependencies, which in this case contain the header files and all the required build information. And that is something we don't want in our final image!
  3. FROM alpine:latest The next FROM command starts the final image.
  4. RUN apk add --no-cache libconfig ... Now is the time to install the runtime dependencies only.
  5. COPY --from=temp /bin/telegram-cli ... This is where the magic happens: COPY --from=temp tells the Docker daemon to perform the copy from the previously named stage, so we can copy the built program from the build container into the runtime container.
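
As a side note, if you ever want to inspect or debug the build stage on its own, docker build can stop at a named stage with --target (the image tags below are my own choice):

```shell
# Build only the 'temp' stage and tag it for debugging
docker build --target temp -t telegram-cli-build .

# Build the complete two-stage image
docker build -t telegram-cli .
```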

Docker source hardcoded to the Docker Hub

I just needed to complain about this.

Docker, the beloved containerization tool, is not as open as they claim it to be. The primary source of containers is hardcoded in the tool to point to the Docker Hub, and that site is now turning into a marketplace.

This is the first time I have seen a package management system that forces you to use a specific source, and I'm not happy with the decision.

I have my own registry and I want to use it as my default!

5 reasons why you should learn Make

Maybe you have felt, from time to time, the speed at which technology develops. Maybe you have made decisions like "I will choose the latest technology for my new project". And maybe, in the middle of development, you realized those decisions were delaying you, because the new technologies you had chosen are… well, new.

I have always been curious about old and reliable technologies, so when I started years ago with Linux and software automation I decided to learn Make. I saw Make as dark voodoo magic that was used all over the place to build C programs, but then I started reading the manual. It wasn't an easy choice: a big manual, lots of work to do, deadlines… But I did it. And now I can say I know Make and have a super-tool for automation. I even published a condensed cheatsheet that I use every time I need to work on makefiles.

I want to share why I think you should pay attention to this amazing tool right now, and if you find it interesting, check out the introduction I'm writing.

1.- It's standard

Make conforms to section 6.2 of IEEE Standard 1003.2-1992 (POSIX.2), so the basics are there for you no matter where you execute your makefiles. The GNU version of Make is particularly useful, as it adds some clever extra features.

It always goes straight to the point.
No surprises.
No random failures.
No dependencies.
No additional requirements.

2.- It's compact and clean

There are only two types of statements: variable assignments and rules. Rules are written as recipes, like the ones you follow at home to prepare a nice risotto.

VARIABLE := value

dish: ingredient1 ingredient2 ingredient3

But take a look at the real thing and try to spot the pattern. It doesn't look that complex, does it?

PYTHON_EXEC             := python3
DEVPI_SERVER_ADDRESS    := localhost:3141

python.release: python.check
    $(PYTHON_EXEC) setup.py sdist
    devpi use $(DEVPI_SERVER_ADDRESS)
    devpi login admin --password admin1234
    devpi use root/dev
    devpi upload dist/$(PROJECT_NAME)-$(PROJECT_VERSION).tar.gz
    devpi logoff

3.- It's smart

If you write your makefiles properly, Make will always perform the minimum amount of work to achieve a goal. It checks the timestamps of the files involved: if the file you are generating is newer than its ingredient files, Make will do nothing for that file.
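
A tiny sketch of that behavior (the file names are my own invention): dish is rebuilt only when one of its ingredients has a newer timestamp.

```make
# 'dish' depends on two ingredient files;
# note that the recipe line must start with a tab
dish: ingredient1 ingredient2
	cat ingredient1 ingredient2 > dish
```

Run touch ingredient1 ingredient2 and then make twice: the first run builds dish, while the second one reports that dish is already up to date and does nothing.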

4.- It's insanely fast

Parsing all the recipes and variables takes no time. It is so fast that on Linux you can use shell auto-completion of targets instantaneously. And believe me: if you have thousands of files, this makes your life much easier.

me@mypc:~/test_folder                                       git master
$ cat Makefile
TARGETS := $(wildcard *.txt)

$(TARGETS):
        echo '$@'
me@mypc:~/test_folder                                       git master
$ make test {{TAB TAB}}
test01.txt  test03.txt  test05.txt  test07.txt  test09.txt  test11.txt
test13.txt  test15.txt  test17.txt  test19.txt  test21.txt  test23.txt 
test25.txt  test27.txt  test29.txt  test02.txt  test04.txt  test06.txt 
test08.txt  test10.txt  test12.txt  test14.txt  test16.txt  test18.txt
test20.txt  test22.txt  test24.txt  test26.txt  test28.txt  test30.txt
me@mypc:~/test_folder                                       git master
$ make test

Learn how to get this awesome two-line prompt with VCS integration

5.- It will surprise you every day

Since I started using Make, not a day goes by without my discovering clever features embedded in it that can be used to automate any process. Sometimes, when I realize some feature would be very useful, I just look in the manual and it is already implemented, waiting for me to use it! No other language I have ever used reached that level of convenience.

In conclusion, Make for me is a really powerful tool that has been somewhat undervalued by young developers (I think due to its apparent initial time investment). But once you know it, you'll see that it rapidly pays off.

Make is a beautiful tool.