In this article we will explore how we at blogfoster set up our development environment with Docker. If you're already familiar with Docker you can jump directly to our setup and skip this short overview.


Use one shared Docker network and unique container names.


If you haven't heard about Docker yet, or you're not familiar with its concepts, I'd recommend reading the amazing Docker docs and a few tutorials online.

Docker provides a way to run applications securely isolated in a container, packaged with all its dependencies and libraries.

So what is Docker? It helps you prepare a reproducible, encapsulated environment for your application in which only the dependencies necessary to run it exist. For Node.js applications this would mean the node executable, your source code and your npm dependencies (and maybe some C/C++ tooling for native modules).

Why do we need it? Have you ever experienced "it works on my machine"? You tested it locally, all tests pass, but on your colleague's machine it's failing (or worse, it's failing in production)? With Docker we can create an isolated environment that is reproducible on other machines (though of course, you cannot run the x86 node executable on an ARM machine).

In addition to creating an encapsulated environment, Docker can also create images of this environment. Imagine you download node, install your npm dependencies and then bundle them, together with your source code, into a tarball (or zip archive). This image can then be shipped to any other machine and started there. This saves the overhead of, for example, running npm install on every machine, which makes startup much faster. Also, Docker uses a layered file system for its images, which means only the layers that changed will be sent over the network when the image is updated.


OK, we've heard about some basic functions of Docker, but what do we do if our service needs a database? To solve this problem, we can use docker-compose. It can easily orchestrate many services using configuration files: all the common things you can do with Docker through the command line can be configured in a docker-compose.yml file.
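To illustrate, here's a minimal sketch (the `app` service and `my-app` image name are hypothetical): a `docker run -p 8080:8080 my-app` invocation could instead be written declaratively as:

```yaml
version: "3"

services:
  app:
    image: my-app     # hypothetical image name
    ports:
      - 8080:8080     # equivalent to docker run -p 8080:8080
```

Running docker-compose up then starts every service in the file with its configured ports, volumes and environment.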

The blogfoster Setup

We at blogfoster love to write JavaScript, so all our backend services are Node.js applications and for our frontend applications we're using React. We're running a couple of micro-services in production. Some of them need to talk to each other, all of them have at least one database connection and others need to communicate with external services.

So imagine you're working on a new front-end feature that involves interactions with multiple services, their databases and even an external service. Should we start all the services on the local machine? And how do we install the databases without messing up the local system?

Thinking back a few years (and still today), people were using Vagrant with VirtualBox to spawn independent machines per service. Each service had its own instance, and each of these instances was provisioned with e.g. Chef-Solo. Starting this setup from scratch easily took more than 20 minutes. This was just too long for "Generation Internet", which loses focus after 5 minutes; I actually forgot what I wanted to do even before the initial setup finished :D.

When it comes to setup speed, Docker is just amazing. Granted, it doesn't have the capabilities Chef has: there is no Ruby DSL, just plain shell commands. It's also not as strongly isolated as a virtual machine. But it's so fast. No really, it's amazing! Currently, spawning all the services in Docker takes me no longer than 2 minutes, of which about 45 seconds are spent just opening all the terminals and running the correct Docker commands.

How we organize code

Before we look at some code examples I'd like to explain how we organize our code. All of our services have their own git / GitHub repository.

Why does it matter? When we started setting up our dev environment we of course searched for solutions others had described, but splitting up your code base unfortunately makes it a little harder to connect all your services. At first glance this seems strange, as Docker was built for micro-services. But docker-compose was the go-to tool for local orchestration, and how should one docker-compose file know about your other services? Each repository has its own docker-compose file to orchestrate its databases, but you couldn't easily link a service that was described in a different file whose exact location you didn't know. Some proposed solutions created a top-level docker-compose file that knows about all the other services, but this just looked awkward. So we came up with our own solution.

The setup with one shared Docker Network

The final solution was to use one shared network. That sounds simple and it is, but finding good examples was hard, so I hope this article helps spread the word.

OK, so how do we get there? Let's recap what we need in our development environment.

  • Whenever I change my code it should be reflected in the dev environment
  • It should be convenient
  • Other environments should be as close as possible to the same setup, so deploying to production carries fewer risks

Unfortunately, I'm not going to tackle the last point in this article, but I will cover the others.

The first point (that code changes should be reflected) is quite easy to achieve using volumes. The following code snippet is a Dockerfile for the imaginary web service iron:

FROM node:6

RUN mkdir -p /opt/iron
WORKDIR /opt/iron
VOLUME /opt/iron

COPY ./docker/ /

# the entrypoint script (shown below) is copied from ./docker/ into the image root
ENTRYPOINT [ "/entrypoint.sh" ]
CMD [ "npm", "start" ]


Here we're declaring node:6 as our base image, marking /opt/iron as a volume and defining an entrypoint script.

We're using the script to install npm dependencies whenever a Docker container is spawned. This helps to always keep your dependencies up to date when multiple people work on the same project.

The following code shows the entrypoint script we use:

#!/usr/bin/env bash

set -e
set -o pipefail

# calculate the md5 sum of the package.json and save it in the node_modules directory
function calc_package_md5 {
  md5sum ./package.json | awk '{print $1}' > ./node_modules/package_json_md5
}

# install npm dependencies and store the new checksum
function npmi {
  npm prune
  npm install
  npm ddp # npm dedupe - flatten node_modules hierarchy
  calc_package_md5
}

# install / update dependencies only if necessary
function prepare {
  # ok, is there a node_modules folder?
  if [[ ! -d './node_modules' ]]; then
    npmi
    return
  fi

  # ok, node_modules folder is there, but is there an old package_json_md5 file?
  if [[ ! -f './node_modules/package_json_md5' ]]; then
    npmi
    return
  fi

  # ok, all is there, but did the package.json update?
  if [[ "$(md5sum ./package.json | awk '{print $1}')" != "$(cat ./node_modules/package_json_md5)" ]]; then
    npmi
    return
  fi
}

# install / update dependencies if necessary
prepare

# run the actual command given
# - use double quotes to prevent splitting of arguments with spaces
exec "$@"

As you can see, we're using a simple "caching" technique here. Remember that we use a volume? It keeps our node_modules folder persistent across deletions of the Docker container, but to gain even more speed we don't even call the npm executable if package.json didn't change between two runs of this script.
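To illustrate the idea outside of Docker, here's a minimal, self-contained sketch of that checksum comparison (the /tmp/md5-demo directory is a hypothetical stand-in for the mounted /opt/iron volume):

```shell
#!/usr/bin/env bash
set -e

# hypothetical demo directory standing in for the mounted project volume
mkdir -p /tmp/md5-demo/node_modules
cd /tmp/md5-demo
echo '{"name":"iron","version":"1.0.0"}' > package.json

# first run: record the checksum of package.json next to the installed modules
md5sum ./package.json | awk '{print $1}' > ./node_modules/package_json_md5

# second run: the checksums match, so npm would not be called at all
if [[ "$(md5sum ./package.json | awk '{print $1}')" == "$(cat ./node_modules/package_json_md5)" ]]; then
  echo "package.json unchanged - skipping npm install"
fi
```

Editing package.json changes its md5 sum, so the next container start would run npm install again and refresh the stored checksum.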

Next, for managing docker containers, we're using docker-compose and the following code shows an example docker-compose.yml file for the iron service:

version: "3"

services:
  iron:
    # uses: node:6
    build: .
    volumes:
      - .:/opt/iron
    ports:
      - 8080:8080
    environment:
      - NODE_ENV=development
      - PORT=8080
      - MYSQL_HOST=iron.mysql.blogfoster.local
      - MYSQL_USER=root
      - MYSQL_PASSWORD=root
      - REDIS_HOST=iron.redis.blogfoster.local
      - REDIS_PORT=6379
      - AURUM_URL=http://aurum.api.blogfoster.local:8084
      - AURUM_TOKEN=supersecuretoken
    command: "true"
    depends_on:
      - mysql
      - redis
    container_name: iron.api.blogfoster.local

  mysql:
    # uses mysql:5.6
    build: ./docker/mysql
    ports:
      - 3380:3306
    volumes:
      - mysql-data:/var/lib/mysql
    container_name: iron.mysql.blogfoster.local

  redis:
    image: redis:2.8
    ports:
      - 6380:6379
    container_name: iron.redis.blogfoster.local

volumes:
  mysql-data:

networks:
  default:
    external:
      name: blogfoster

As you can see, we're declaring a node service (iron), forwarding port 8080 and giving the service some environment variables. The depends_on attribute tells docker-compose to also start the listed services when this service is started. This is not strictly necessary when calling docker-compose up -d, but you need it when spawning a one-off container with docker-compose run. One more interesting fact: we're using true as the container's default command, which is a noop that exits immediately. We do this so that docker-compose up -d spawns the databases but does not run our service; we start it separately later.

Next to the iron service we define a mysql and a redis service, plus one named volume (for mysql only, as we don't need persistent redis data), so stopping the mysql container does not delete your data. For convenience we forward the mysql and redis ports to host ports ending in 80 (3380 and 6380), matching the node service's 8080.

Another thing to note here is the networks section. Here we're telling docker-compose to start all the mentioned services in the given blogfoster network. By default docker-compose itself will make sure there are no naming conflicts for multiple containers in the same network.

Finally, we're using defined, unique container_names. These names can now be used as DNS names to access another service. Check the environment section and you'll see that we tell our application to access redis through REDIS_HOST=iron.redis.blogfoster.local. We're also passing AURUM_URL=http://aurum.api.blogfoster.local:8084: think of this as another service started independently from another terminal. Since it's running in the same network and has a well-known, unique name, we can access it from within the iron container.
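To make this concrete, a docker-compose.yml in the separate aurum repository only needs to join the same external network (this is a sketch; the service name, build setup and port are assumptions):

```yaml
version: "3"

services:
  aurum:
    build: .
    ports:
      - 8084:8084
    container_name: aurum.api.blogfoster.local

networks:
  default:
    external:
      name: blogfoster
```

Because both compose files reference the external blogfoster network, iron can reach this container as aurum.api.blogfoster.local without either file knowing where the other repository lives.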

The last script I want to show you now is a small script that creates the default network:

#!/usr/bin/env bash

set -e
set -o pipefail

# must match the external network name used in the docker-compose.yml files
DEFAULT_DOCKER_NETWORK='blogfoster'

if [[ -z "$(docker network ls | grep "${DEFAULT_DOCKER_NETWORK}")" ]]; then
  docker network create "${DEFAULT_DOCKER_NETWORK}"
fi
As you can see, this is a small one. It just creates the network if it doesn't exist already.

Remembering the docker-compose commands can become tricky over time. To simplify our lives we're using npm scripts:

  "scripts": {
    "d:build": "./docker/ && \
                docker-compose build && \
                docker-compose run --rm iron 'true' && \
                docker-compose up -d && \
                npm run d:prepare && \
                docker-compose rm -fv iron",
    "d:prepare": "npm run d:prepare-mysql",
    "d:prepare-mysql": "docker-compose run --rm mysql '/docker/mysql/'",
    "d:login": "docker-compose run --rm --service-ports --name iron.api.blogfoster.local iron /bin/bash",
    "d:clean": "docker-compose stop && docker-compose rm -fv",
    "d:cleanDb": "docker volume rm iron_mysql-data || true"
  }
To create a prepared environment you only need to type npm run d:build. As you can see, this calls the network script, then docker-compose build, which builds your initial docker image (for us this means downloading the base image and marking the code directory as a volume). The next command might look strange: docker-compose run --rm iron 'true'. This spins up a one-off iron container that executes the true command. As mentioned before, true is a noop; what really happens is that the entrypoint script is executed first. You might remember: we use that script to install npm dependencies, so here we update our npm dependencies in an active shell, where the developer can see what's going on. The next command, docker-compose up -d, spawns all the other services defined in the docker-compose.yml file (databases, etc.). npm run d:prepare can be used to prepare initial database fixtures, but can be skipped if not needed. Last but not least, docker-compose rm -fv iron removes all exited docker containers which are no longer needed.

The next npm script is npm run d:login. Although this sounds like logging into some running machine, it's not. We chose the name on purpose, but what really happens is that we start a new interactive docker container running /bin/bash. This gives us the feeling of logging into something. From this bash session you can start your real project using node . or npm start.

To clean up your setup we define two more commands. The npm run d:clean command is used to stop all of your containers and remove them. This leaves named volumes untouched, so any changes to your database will remain.

To clean up the database we also defined npm run d:cleanDb.


Wow, this was a long journey. We learned a bit about Docker and docker-compose, and saw that docker containers spawned with docker-compose can easily communicate with each other if they run in the same network and are given unique DNS names. Additionally, I showed you the scripts we use here at blogfoster. I hope you could follow my thoughts and it wasn't too confusing. If you liked the article, tweet about it! If you have any questions, feel free to send me an email; I would love to hear your feedback.



  • changed entrypoint script to use exec so that the executed command runs as PID 1, thanks to @puneeth_mysore