Onboard Your Devs Onto Your Distributed Application in Minutes With Docker, Part 1

Written by Marc Andrews


Many of today's web applications are composed of multiple layers. At minimum, there are two: a front-end presentation layer, which usually runs client side, and a back-end layer, which provides the logic and data. It's common to have additional layers too, such as a database layer (for example, PostgreSQL or MongoDB) for long-term data storage, and a cache layer (for example, Redis) for short-term, rapid-access data storage. It's even possible to have multiple back-end layers, each with its own concerns: for example, one layer responsible for RESTful requests, another for WebSockets, and another for computation-intensive requests.

These layers can be considered services, and together, they create your distributed application. But each service may have its own requirements, from different Node versions to conflicting library dependencies, and when you add in the fact that every developer's machine and environment is different, it can be a challenge to onboard new developers quickly. Wouldn't it be nice if we could onboard developers and get our application running consistently and predictably with a few simple commands?

Enter Docker

Docker is a container platform. Unlike virtual machines, containers bundle only the libraries and settings required for your application. They are efficient, lightweight and self-contained, ensuring that your application will run the same anywhere and everywhere.

Docker bundles your application into an image, a stand-alone, executable package. Docker executes this image in a container, and the environment inside the container is isolated from that of the host machine.

We will use Docker to build a container with a suitable environment in which our application's services can run. Then, we will use Docker Compose to orchestrate each service so that we can onboard developers in minutes and get our application running with a single command.

Prerequisites

  • Install Docker
  • If on Linux, also install Docker Compose (Docker for Mac and Docker for Windows already include it)
  • Familiarity with:
    • running commands in a terminal and bash
    • git
    • creating and editing YML configuration files
  • Familiarity with Express, React and webpack/webpack-dev-server will also be helpful.
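Once Docker is installed, you can verify your setup from a terminal before continuing (the exact version numbers will vary from machine to machine):

```
$ docker --version
$ docker-compose --version
```

If both commands print a version, you're ready to go.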

Our Sample, Distributed Application

The associated repository includes a sample application that is composed of a Node/Express backend API that serves a JSON array of colors from a /colors endpoint, and a React frontend that retrieves the list of colors from the backend and displays them. Since we're developing, we will use nodemon on port 3000 and webpack-dev-server on port 8080 to serve the backend API and frontend applications, respectively. Clone the repository to your local machine:

$ git clone https://github.com/marcandrews/blogs-onboard-your-devs-with-docker.git
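For orientation, the heart of the backend can be sketched as a plain request handler. Note that this is an illustrative sketch, not the repository's actual Express code, and the color values here are made up:

```javascript
// Hypothetical color data; the real repository serves its own list.
const colors = ['red', 'green', 'blue'];

// The request handler is written as a plain function, separate from the
// server, which keeps it easy to exercise in isolation.
function handleRequest(req, res) {
  if (req.url === '/colors') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(colors));
  } else {
    res.writeHead(404);
    res.end();
  }
}
```

Wiring it into a server is then one line with Node's built-in http module: `require('http').createServer(handleRequest).listen(3000)`, matching the backend port used throughout this tutorial.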

Creating Our First Container

Docker builds images by reading instructions from a Dockerfile. In the root of our cloned repository, create a file with the name Dockerfile and add the following:

FROM ubuntu:16.04

CMD ["/bin/bash"]  

The FROM instruction tells Docker to use the Ubuntu image from Docker Hub, the official Docker image repository, as a parent image. The CMD instruction, whose importance will be expanded upon later, provides defaults for an executing container.

But wait! We have a Node application; why not use a Node image as the parent image? We could, but since we are still getting acquainted with Docker, using Ubuntu will provide a familiar environment when developing locally.

Between these two instructions, FROM and CMD, we will extend this image to make a suitable environment for our application; but before we do, let's familiarize ourselves with several important Docker CLI commands.

Docker CLI

The core of Docker CLI revolves around three commands: docker build, docker run and docker exec. Using the Dockerfile created previously, let's see how to use each of these commands to build an image and run it in a container.

docker build

docker build is used to build an image from a Dockerfile. Open a shell to our newly created directory containing our Dockerfile, and let's build our first container image:

$ docker build -t my-first-container .
Sending build context to Docker daemon  
...
Successfully built $HASH  

This creates a Docker image with the tag (-t) my-first-container using the Dockerfile in the current directory (indicated with the .). Notice how Docker processes each instruction in our Dockerfile, downloading the required image(s), in our case Ubuntu 16.04, and caching each step; if you build this image again, it will build almost instantly because, instead of downloading Ubuntu again, Docker will use the cached image.
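Caching is also why instruction order matters. A common pattern, sketched below for a hypothetical Node service (the file names here are illustrative), is to copy the dependency manifest and install dependencies before copying the rest of the source, so that day-to-day code edits do not invalidate the cached install layer:

```
FROM node:6

WORKDIR /app

# Copy only the dependency manifest first...
COPY package.json .
RUN npm install

# ...then the rest of the source. Edits to application code reuse
# the cached layers above instead of re-running npm install.
COPY . .

CMD ["node", "index.js"]
```

We will not need this pattern until later, but keeping the most frequently changing files in the latest possible layer is a good habit from day one.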

docker run

With our container image built, we can now run our container:

$ docker run -it my-first-container
root@07b0a5c8a53b:/#  

This will allocate an interactive pseudo-TTY (-it), and because we specified CMD ["/bin/bash"] in our Dockerfile, we are now in a terminal within our Ubuntu container. Each time a container is run, Docker assigns it a new container ID, which you can see after the root@ in the prompt. To confirm that we are, in fact, running Ubuntu, try the following:

root@07b0a5c8a53b:/# cat /etc/lsb-release  
DISTRIB_ID=Ubuntu  
DISTRIB_RELEASE=16.04  
DISTRIB_CODENAME=xenial  
DISTRIB_DESCRIPTION="Ubuntu 16.04 LTS"  

You can also list all currently installed packages:

root@07b0a5c8a53b:/# apt list --installed  
Listing... Done  
adduser/now 3.113+nmu3ubuntu4 all [installed,local]  
apt/now 1.2.20 amd64 [installed,local]  
base-files/now 9.4ubuntu4.4 amd64 [installed,local]  
base-passwd/now 3.5.39 amd64 [installed,local]  
bash/now 4.3-14ubuntu1.2 amd64 [installed,local]  
...

The Ubuntu image Docker uses as our parent image only includes the bare minimum. We can't do much in our container now, but soon we will extend this container and make a suitable environment in which our application can run.

Exit out of our container and return to your host machine with:

root@07b0a5c8a53b:/# exit  
$

Running Containers in the Background

We can also run our container in the background detached (-d) from our current shell with:

$ docker run -it -d my-first-container
fd4d2604ba9d...  

Then, to view a list of running containers:

$ docker ps
CONTAINER ID        IMAGE                 COMMAND             CREATED             STATUS              PORTS               NAMES  
fd4d2604ba9d        my-first-container    "/bin/bash"         9 seconds ago       Up 7 seconds                            random_name  

But wait! How do we get back into this container executing in the background?

docker exec

Using the container ID listed by $ docker ps, we can access, or "bash into," the container running in the background with:

$ docker exec -it fd4d2604ba9d bash
root@fd4d2604ba9d:/#  

If we exit from this container and run $ docker ps, our container will still be running in the background. Stop the running container with:

$ docker stop fd4d2604ba9d

Confirm that the container is stopped by running $ docker ps and seeing that my-first-container is no longer listed.
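Container IDs like fd4d2604ba9d are assigned at random, so it can be more convenient to give a container a name of your own with the --name flag and use that name wherever an ID is expected (my-app below is an arbitrary example name):

```
$ docker run -it -d --name my-app my-first-container
$ docker exec -it my-app bash
$ docker stop my-app
$ docker rm my-app
```

docker rm removes the stopped container entirely, so the name becomes free to reuse.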

Preparing Our Image for Our Application

Now that we are familiar with Docker CLI, we can begin creating a suitable environment in which our application will run. This is accomplished by adding additional instructions between the FROM and CMD instructions in our Dockerfile. Our first step will be to instruct Docker to update Ubuntu's source list and install any application dependencies. We do this with a RUN instruction:

FROM ubuntu:16.04

# Update source list
RUN apt-get update && apt-get install -y -qq --no-install-recommends \  
  # Install dependencies
  software-properties-common \
  libssl-dev \
  build-essential \
  curl \
  wget \
  git \
  # Clean up
  && apt-get clean && rm -rf /var/lib/apt/lists/*

CMD ["/bin/bash"]  

If you have experience with bash, this will look familiar; however, there are three important concepts to note about how Docker executes RUN instructions:
1. each instruction runs "on top of" the previous instruction, akin to opening up a new shell
2. each instruction is cached
3. some things, like environment variables defined with export, do not carry over from one instruction to the next
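Point 3 is worth illustrating. A variable set with export vanishes when its RUN instruction finishes; if a value must persist across instructions (and at runtime), use the ENV instruction instead. A minimal sketch:

```
# export only lasts for the duration of this single RUN instruction
RUN export GREETING=hello

# by the next instruction, GREETING is empty again
RUN echo "GREETING is: $GREETING"

# ENV persists for all subsequent instructions and in the running container
ENV GREETING=hello
RUN echo "GREETING is: $GREETING"
```

This is also why the installation steps later in this article are chained into a single RUN instruction.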

This is a good starting point, but your application may require additional dependencies, such as Java (default-jre) or ImageMagick (imagemagick); feel free to add them here.

Next, we will use n to install Node and any global dependencies we may need:

FROM ubuntu:16.04

# Update source list
RUN apt-get update && apt-get install -y -qq --no-install-recommends \  
  # Install dependencies
  software-properties-common \
  libssl-dev \
  build-essential \
  curl \
  wget \
  git \
  # Clean up
  && apt-get clean && rm -rf /var/lib/apt/lists/*

# Install Node
RUN dir=`mktemp -d` \  
  && git clone https://github.com/tj/n.git $dir \
  && cd $dir \
  && make install \
  && n 6 \
  && npm i -g nodemon

CMD ["/bin/bash"]  

Here, we install Node 6 and nodemon as a global dependency. Note how we chain the Node installation process into a single RUN instruction. If you split the installation across multiple RUN instructions, you cannot guarantee that information set in one instruction, such as environment variables, will be available in subsequent instructions. For this reason, related components should be installed via a single RUN instruction.

Let's go ahead and build this container:

$ docker build -t my-first-container .
Sending build context to Docker daemon  
...
Successfully built $HASH  

Now, let's run the container we just built and confirm that our dependencies are present:

$ docker run -it my-first-container
root@9d5a58262544:/# git --version  
git version 2.7.4  
root@9d5a58262544:/# n --version  
2.1.8  
root@9d5a58262544:/# node --version  
v6.11.1  
root@9d5a58262544:/# exit  
$

You can find a complete, working example here.

Conclusion

We have learned what Docker is, how it differs from virtual machines, and some associated terminology. We created a Dockerfile and learned how to work with it using docker build, docker run, and docker exec. We also extended our Dockerfile, creating a suitable environment in which our application's services can run. In Part 2, we will use Docker Compose to orchestrate our application's services during development and run our distributed application with a single command.