
Category: DevOps

Integrating npm jsonlint in your CI

Today I received a pull request on a JSON file where the indentation looked just wrong. We decided to look for a linting solution to run on our CI with some requirements:

  • It should validate and point out broken JSON.
  • It should fix indentation.
  • It should not sort JSON objects.

After some research we found different options of varying quality and liveliness; in fact, the most difficult part of looking for solutions is often strongly influenced by those two factors. I won't go into a full comparison today: we finally decided to go for npm jsonlint.

It is easy to install:

npm install -g jsonlint

And easy to use:

Usage: jsonlint [file] [options]

file     file to parse; otherwise uses stdin

   -v, --version            print version and exit
   -s, --sort-keys          sort object keys
   -i, --in-place           overwrite the file
   -t CHAR, --indent CHAR   character(s) to use for indentation  [  ]
   -c, --compact            compact error display
   -V, --validate           a JSON schema to use for validation
   -e, --environment        which specification of JSON Schema the validation file uses  [json-schema-draft-03]
   -q, --quiet              do not print the parsed json to STDOUT  [false]
   -p, --pretty-print       force pretty printing even if invalid

Test integration

In our case we deploy a set of Makefiles with every project, defining some standard targets like release, test or fix, so I decided to add another linting check to the test target like so:

test: json.validate
    @for f in $$(find . -type f -name "*.json"); do \
      jsonlint $$f -p | sed -e '$$a\' | diff $$f -; \
      if [ $$? -ne 0 ]; then \
        echo "Error validating $$f." ; \
        exit 1 ; \
      fi ; \
    done
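If you want to try the idea outside the Makefile, the loop can be sketched in plain shell. Here python3 -m json.tool stands in for jsonlint (an assumption for the sketch, not what we run in CI), and the demo/ directory and file are hypothetical:

```shell
# Sketch of the validation loop, with python3 -m json.tool standing in
# for jsonlint (assumption: python3 is available).
mkdir -p demo
printf '{"a": 1}\n' > demo/compact.json   # valid JSON, but not pretty-printed

for f in $(find demo -type f -name "*.json"); do
  # Pretty-print and compare with the file on disk; a non-empty diff
  # means the file is not formatted the way the formatter would write it.
  if ! python3 -m json.tool "$f" | diff "$f" - >/dev/null; then
    echo "Error validating $f."
  fi
done
```

The loop reports demo/compact.json because the pretty-printed form differs from what is on disk, which is exactly the signal the CI target relies on.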

The for loop iterates over all the JSON files found, diffing each file against the output of jsonlint with the pretty-print flag.

      jsonlint $$f -p | sed -e '$$a\' | diff $$f -;

Note the use of sed to append a newline to the stream, so that diff does not report “No newline at end of file”.
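The effect of that sed trick can be seen in isolation (the file names here are hypothetical, and GNU sed is assumed):

```shell
# '$a\' forces a newline at the end of the last line if it is missing,
# so the pretty-printer's output and the file on disk diff cleanly.
printf '{ "a": 1 }'   > no_newline.json    # no newline at end of file
printf '{ "a": 1 }\n' > with_newline.json  # newline at end of file

sed -e '$a\' no_newline.json | diff with_newline.json - && echo "identical"
```

Without the sed step, diff would flag the two streams as different solely because of the missing trailing newline.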

After the target is set in the Makefile, everything is done, as our Jenkinsfile already has something like this:

    stage("Test") {
      steps {
        sh 'make test'
      }
    }
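For context, a minimal declarative Jenkinsfile around that stage might look like this (a sketch only; the agent selection is a placeholder, not taken from our setup):

```groovy
pipeline {
  agent any          // placeholder: pick whatever agent/label you use
  stages {
    stage("Test") {
      steps {
        sh 'make test'   // runs the jsonlint check via the Makefile target
      }
    }
  }
}
```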

Fix integration

As with testing, the fix target performs exactly the same operation but applies the changes to the files directly. This target is meant to be executed by developers as a cleanup step before creating a pull request, so speed is not critical and a few more checks are run to keep it clean.

fix: json.fix
    @for f in $$(find . -type f -name "*.json"); do \
      if jsonlint $$f -q && jsonlint $$f -qpi && sed -i -e '$$a\' $$f ; then \
        echo "Formatted $$f" ; \
      else \
        echo "Error validating $$f" ; \
        exit 1 ; \
      fi ; \
    done

The fixing process first validates the JSON, both to avoid errors and to minimize the output on the CI: if jsonlint fails validation with the pretty-print option, it dumps the entire file to stdout. That is why jsonlint is called twice.

      jsonlint $$f -q && jsonlint $$f -qpi && sed -i -e '$$a\' $$f 

Note the ‘-i’ flag to edit the file in place. Also note the use of sed to append the newline to the file, so that after this automatic fix the file is guaranteed to pass the validation step.
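The in-place variant can be sketched the same way, again with python3 -m json.tool standing in for jsonlint -qpi (an assumption; the file name is for illustration, and GNU sed is assumed for -i):

```shell
# Sketch of the fix step: validate/pretty-print, rewrite the file,
# then make sure the file ends with a newline.
printf '{"b":2}' > messy.json

# json.tool cannot write in place, so write to a temp file and move it back.
python3 -m json.tool messy.json > messy.json.tmp \
  && mv messy.json.tmp messy.json \
  && sed -i -e '$a\' messy.json \
  && echo "Formatted messy.json"
```

After this runs, re-pretty-printing and diffing the file produces no changes, so the validation target from the previous section passes.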

Validating Jenkinsfile

I have had many problems with Jenkinsfiles lately. The syntax is so picky that after a push my build fails just because I forgot a comma or something as ridiculous as that. In my opinion there are better human input formats for such a file, like YAML. But hey! I'm not developing Jenkins.

Anyway, I just integrated a target into my project Makefile to test the validity of the Jenkinsfile. The only way in this case is to send it to your Jenkins server for testing, like so:

test: jenkins.validate
    @curl -X POST -F "jenkinsfile=<Jenkinsfile" \
      $(JENKINS_URL)/pipeline-model-converter/validate
    @echo "Jenkinsfile tested."

It is super nice to use the REST API to validate your files, but this still has some problems to consider: security (posting over plain HTTP) and the weight of the solution (you need a complete Jenkins server, possibly on your local host). But that's not in the scope of this entry.

Chef and Docker for rapid infrastructure development

We started using Chef a while ago, and one of the first steps we took was to use Docker instead of Vagrant for running tests, due to its faster setup.
After all this time I can say it was a nice experience, and now our CI is happily testing our cookbooks in minutes. So…

What do you need?

Basically you need to install the latest ChefDK, which ships the kitchen-dokken gem by default. This gem provides lightweight tooling for running Test Kitchen in Docker containers.


After that you just need to set up your kitchen.yml to use dokken as the driver, like so:

driver:
  name: dokken
  chef_version: latest

transport:
  name: dokken

provisioner:
  name: dokken

The transport and the provisioner are set to dokken so kitchen will use the lighter tooling from the driver. Then you can set up your platform to test your cookbooks:

platforms:
  - name: ubuntu-16.04
    driver:
      image: ubuntu:16.04
      pid_one_command: /bin/systemd
      intermediate_instructions:
        - RUN /usr/bin/apt-get update
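To actually converge and verify something, kitchen.yml also needs a suites section. A minimal sketch (the cookbook name my_cookbook is hypothetical):

```yaml
suites:
  - name: default
    run_list:
      - recipe[my_cookbook::default]   # hypothetical cookbook/recipe
```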


Some caveats to keep in mind:

  • Docker is designed for isolating and packaging processes that run as the only process inside a container.
    If your cookbook sets up services, you may need to choose a more complete Docker base image, which is normally bigger in size, and you need to explicitly start the init daemon (such as systemd), as you saw in the code snippet earlier.

  • Do not try to run Docker inside a container. If your cookbook uses Docker somehow, it is better to use Vagrant instead, because otherwise you may need to manually set up the container to host Docker, and that is a pain.