

5 reasons why you should learn Make

Maybe you have felt, from time to time, the sheer speed of current technology development. Maybe you have made decisions like “I will choose the latest technology for my new project”. And maybe, in the middle of development, you realized those decisions were delaying you because the new technologies you chose are… well, new.

I have always been curious about old and reliable technologies, so when I started years ago with Linux and software automation I decided to learn Make. I used to see Make as dark voodoo magic used all over the place to build C programs, but then I started reading the manual. It wasn’t an easy choice. A big manual, lots of work to do, deadlines… But I did it. And now I can say I know Make and I have a super-tool for automation. I even published a condensed cheatsheet that I use whenever I work on makefiles.

I want to share with you why I think you should pay attention to this amazing tool right now. And if you find it interesting, check out the introduction I’m writing.

1.- It’s standard

It conforms to section 6.2 of IEEE Standard 1003.2-1992 (POSIX.2), so the basics are there for you no matter where you execute your makefiles. The GNU version of Make is particularly useful, though, as it adds some clever extra features.

It always goes straight to the point.
No surprises.
No random failures.
No dependencies.
No additional requirements.

2.- It’s compact and clean

There are only two types of statements: variable assignments and rules. Additionally, rules are written as recipes, like the ones you follow at home when preparing a nice risotto.

VARIABLE := value

dish: ingredient1 ingredient2 ingredient3

But take a look at the real thing and try to spot the pattern. It doesn’t look that complex, does it?

PYTHON_EXEC             := python3
DEVPI_SERVER_ADDRESS    := localhost:3141

python.release: python.check
    $(PYTHON_EXEC) setup.py sdist
    devpi use $(DEVPI_SERVER_ADDRESS)
    devpi login admin --password admin1234
    devpi use root/dev
    devpi upload dist/$(PROJECT_NAME)-$(PROJECT_VERSION).tar.gz
    devpi logoff

3.- It’s smart

If you write your makefiles properly, Make will always perform the minimum number of operations needed to achieve a goal. Because it checks file timestamps, if a generated file is already newer than its ingredient files, Make will not do anything for that file.
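A minimal sketch of this behavior, using made-up file names (any machine with make installed):

```shell
# Demonstrate Make's timestamp logic with a tiny, hypothetical Makefile.
mkdir -p /tmp/make-demo && cd /tmp/make-demo

# The recipe line needs a leading tab, hence printf's '\t':
printf 'dish.txt: ingredient.txt\n\tcp ingredient.txt dish.txt\n' > Makefile
echo "rice" > ingredient.txt

make          # dish.txt is missing: the recipe runs
make          # dish.txt is newer than ingredient.txt: nothing to do
touch ingredient.txt
make          # ingredient.txt changed: the recipe runs again
```

On the second invocation, make simply reports that dish.txt is up to date and runs nothing.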

4.- It’s insanely fast

Parsing all the recipes and variables takes almost no time. It is so fast that on Linux you can use shell auto-completion instantaneously. And believe me: if you have thousands of files, this makes your life much easier.

me@mypc:~/test_folder                                       git master
$ cat Makefile
TARGETS := $(wildcard *.txt)

$(TARGETS):
        echo '$@'
me@mypc:~/test_folder                                       git master
$ make test {{TAB TAB}}
test01.txt  test03.txt  test05.txt  test07.txt  test09.txt  test11.txt
test13.txt  test15.txt  test17.txt  test19.txt  test21.txt  test23.txt 
test25.txt  test27.txt  test29.txt  test02.txt  test04.txt  test06.txt 
test08.txt  test10.txt  test12.txt  test14.txt  test16.txt  test18.txt
test20.txt  test22.txt  test24.txt  test26.txt  test28.txt  test30.txt
me@mypc:~/test_folder                                       git master
$ make test

Learn how to get this awesome two-line prompt with VCS integration

5.- It will surprise you every day

Since I started using Make, not a day goes by without me discovering clever features embedded in it that can be used to automate any process. Sometimes, when I realize some feature would be very useful, I just look into the manual and it is already implemented, ready for me to use! No other language I have ever used achieved that level of convenience for me.

In conclusion, Make is for me a really powerful tool that has been somewhat undervalued by young developers (I think due to its apparent initial time investment). But once you know it, you’ll see that it rapidly pays off.

Make is a beautiful tool.

Shellcheck in Jenkins

We have all written shell scripts. And at the speed we need to deploy, it is easy to make mistakes (beginner mistakes). So I decided to integrate ShellCheck into Jenkins, so that when a commit is made the CI produces a complete shell-check analysis of the code.

A test in your CI could be something like:

grep -rIl '^#![[:blank:]]*/bin/\(bash\|sh\|zsh\)' \
     --exclude-dir=.git --exclude='*.sw?' \
     | xargs shellcheck

The exclude switches are useful for git and vim users respectively.

It is strongly advised to check for the availability of shellcheck on your build agent before running the test. I usually wrap my test code with some kind of check, like so:

if which shellcheck ; then
  # Do your tests here
fi

There are other nice ways of performing automatic tests using make as your procedure executor. I will talk about that in later posts.

Happy testing!

Edit: a nice snippet for your Makefiles could look like this.
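The snippet itself did not survive in this copy of the post; here is a sketch of what it might look like, combining the grep pipeline with the availability check above (the target name shellcheck.test is hypothetical):

```make
shellcheck.test:
	@if which shellcheck >/dev/null ; then \
	  grep -rIl '^#![[:blank:]]*/bin/\(bash\|sh\|zsh\)' . \
	    --exclude-dir=.git --exclude='*.sw?' \
	    | xargs -r shellcheck ; \
	else \
	  echo "shellcheck not available, skipping" ; \
	fi
```

The `-r` flag (GNU xargs) skips running shellcheck when the grep finds no shell scripts at all.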

Integrating npm jsonlint in your CI

Today I received a pull request on a JSON file where the indentation looked just wrong. We decided to look for a linting solution to run on our CI with some requirements:

  • It should validate and point out broken JSON.
  • It should fix the indentation.
  • It should not sort JSON objects.

After some research, we found different options with different levels of quality and liveliness. Actually, the most difficult part of evaluating solutions is often weighing those two factors. But I won’t go into a comparison today; we finally decided to go with npm jsonlint.

It is easy to install:

npm install -g jsonlint

And easy to use:

Usage: jsonlint [file] [options]

file     file to parse; otherwise uses stdin

   -v, --version            print version and exit
   -s, --sort-keys          sort object keys
   -i, --in-place           overwrite the file
   -t CHAR, --indent CHAR   character(s) to use for indentation  [  ]
   -c, --compact            compact error display
   -V, --validate           a JSON schema to use for validation
   -e, --environment        which specification of JSON Schema the validation file uses  [json-schema-draft-03]
   -q, --quiet              do not print the parsed json to STDOUT  [false]
   -p, --pretty-print       force pretty printing even if invalid

Test integration

In our case, we deploy a set of Makefiles with every project where we define some standard targets like release, test, or fix, so I basically decided to add another linting check to the test target, like so:

test: json.validate
    @for f in $$(find . -type f -name "*.json"); do \
      jsonlint $$f -p | sed -e '$$a\' | diff $$f -; \
      if [ $$? -ne 0 ]; then \
        echo "Error validating $$f." ; \
        exit 1 ; \
      fi \
    done

The for loop iterates over all the JSON files found, diffing each one against the output of jsonlint with the pretty-print flag.

      jsonlint $$f -p | sed -e '$$a\' | diff $$f -;

Note the use of sed to append a trailing newline to the stream, so that diff does not complain with “No newline at end of file”.
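A quick way to see both tricks at work, using python3 -m json.tool as a stand-in pretty-printer in case jsonlint is not installed (the file names are made up):

```shell
mkdir -p /tmp/json-demo && cd /tmp/json-demo

# A correctly formatted file (4-space indent, trailing newline) passes the diff:
printf '{\n    "a": 1\n}\n' > good.json
python3 -m json.tool good.json | sed -e '$a\' | diff good.json - \
  && echo "good.json is clean"

# sed '$a\' appends a trailing newline only when one is missing (GNU sed):
printf '{"a": 1}' | wc -c                 # 8 bytes, no newline
printf '{"a": 1}' | sed -e '$a\' | wc -c  # 9 bytes
```

If the file already ends with a newline, `sed -e '$a\'` leaves the stream untouched.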

After the target is set in the Makefile, everything is in place, as our Jenkinsfile already contains something like this:

    stage("Test") {
      steps {
        sh 'make test'
      }
    }

Fix integration

As with the testing, the fix target performs exactly the same operation but applies the changes to the files directly. This target is meant to be executed by developers as a cleanup step before creating a pull request, so speed is not critical, and a few extra checks are run to keep things clean.

fix: json.fix
    @for f in $$(find . -type f -name "*.json"); do \
      if jsonlint $$f -q && jsonlint $$f -qpi && sed -i -e '$$a\' $$f ; then \
        echo "Formatted $$f" ; \
      else \
        echo "Error validating $$f" ; \
        exit 1 ; \
      fi \
    done

The fixing process first validates the JSON, both to avoid errors and to minimize the output on the CI: if jsonlint fails validation while the pretty-print option is set, it dumps the whole file to stdout. That is why jsonlint is called twice.

      jsonlint $$f -q && jsonlint $$f -qpi && sed -i -e '$$a\' $$f

Note the ‘-i’ flag to edit the file in place. Also note the usage of sed to append the trailing newline to the file, so that after this automatic fix the file is guaranteed to pass the validation step.
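The fix flow can be sketched in plain shell, with python3 -m json.tool standing in for jsonlint -qpi (a hypothetical substitution; jsonlint edits in place, json.tool does not, so a temp file is used):

```shell
mkdir -p /tmp/json-fix && cd /tmp/json-fix
printf '{"a":1}' > messy.json              # bad indentation, no trailing newline

# Validate and pretty-print to a temp file, move it back, ensure a final newline:
python3 -m json.tool messy.json > messy.json.tmp \
  && mv messy.json.tmp messy.json \
  && sed -i -e '$a\' messy.json \
  && echo "Formatted messy.json"

cat messy.json
```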