
This is the first in a series of upcoming posts about deploying and monitoring an Erlang cluster in Kubernetes. To start, this piece describes how to build an Erlang container for the project that will be deployed to Kubernetes, and will touch on a couple features that Docker and Erlang have recently added to ease development and operations.

The project can be found on GitHub. It is an implementation of the Presence Service sample from Microsoft Orleans. How the service works is outside the scope of this article; the important points are that it tracks the state of a multi-player game and that our version of the service includes an HTTP interface accepting a JSON heartbeat payload.

To create a Docker image that boots to our Erlang service we first need a release: a collection of Erlang applications together with a boot script. The presence release is built with rebar3 and comes with a Dockerfile which can be repurposed for any Erlang project. But first, it is important to note the file .dockerignore:

# files and directories to exclude from context
_build

This tells Docker to exclude the rebar3 build directory _build from the build context. This is important if you are building for a different platform than the one you develop on, since stale artifacts built for the wrong platform would otherwise leak into the image.

The Dockerfile utilizes a feature new in Docker 17.05 called multi-stage builds. Multi-stage builds allow multiple FROM instructions in a single Dockerfile, each starting a new stage with its own base image, and each stage can copy artifacts from previous stages. This means the same Dockerfile can compile our code and assemble the release, then copy just the release into a final stage built on a fresh base.

FROM erlang:19.3.3 as builder

WORKDIR /usr/src/app
COPY . /usr/src/app
RUN rebar3 as prod tar

RUN mkdir -p /opt/rel
RUN tar -zxvf /usr/src/app/_build/prod/rel/*/*.tar.gz -C /opt/rel

FROM ubuntu:16.04

WORKDIR /opt/presence


# set default environment variables for runtime configuration here
ENV RELX_REPLACE_OS_VARS true

COPY --from=builder /opt/rel /opt/presence

EXPOSE 8080

ENTRYPOINT ["/opt/presence/bin/presence"]

The first stage uses the erlang:19.3.3 image and is named builder. In the builder stage the project’s context is copied to /usr/src/app and the Erlang release tarball is built under the prod profile with rebar3 as prod tar. The tarball is then extracted to the directory /opt/rel. The next stage is based on ubuntu:16.04 and uses the COPY --from=builder instruction to copy the extracted release from the first stage over to /opt/presence. Additionally, the image sets RELX_REPLACE_OS_VARS and some default environment variables. This is required for our production config files, sys_prod.config and vm_prod.args, which use environment variables for runtime configuration, making it simple to change configuration values when launching in Kubernetes or with Docker manually.
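As a sketch of what this substitution looks like (the presence application name comes from the project, but the PORT variable here is illustrative), a sys_prod.config using relx’s ${VAR} replacement might contain:

```erlang
%% sys_prod.config (illustrative sketch)
%% When RELX_REPLACE_OS_VARS=true, the release start script replaces
%% ${PORT} textually at boot, so this becomes e.g. {port, 8080} before
%% the file is read as Erlang terms.
[
 {presence, [{port, ${PORT}}]}
].
```

The same ${VAR} syntax works in vm_prod.args, for example to set the node name from an environment variable.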

Now, building an image containing only the Erlang applications necessary to run the release is as easy as: docker build -t presence . And running a container, assuming Postgres is available locally with the default user and database used in the Dockerfile, can be done with: docker run --network=host -ti presence console. By default the service’s HTTP interface is exposed on port 8080, so we can check that it is working by running:
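Because the release reads its configuration from the environment, values can be overridden at launch time without rebuilding the image. For example (the PORT variable here is illustrative, assuming it is one of the defaults set in the Dockerfile and referenced in the production config files):

```shell
# run the release with a different HTTP port; -e overrides the
# default ENV value baked into the image
docker run --network=host -e PORT=8181 -ti presence console
```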

curl -v -XPOST localhost:8080/heartbeat \
        -d '{"game":"game-1","status":{"players":["player-1","player-2"], "score":"100"}}'

It is important to note that with Erlang 19.3 or newer, a running release plays nicely with Docker and container orchestration systems like Kubernetes. As of 19.3, when an Erlang node receives the SIGTERM signal it runs init:stop(), shutting down your applications in the expected order and allowing any necessary cleanup. So stopping the Presence container started above with docker stop <container id> will properly stop the running release, which we’ll see is very useful for Kubernetes rolling deployments. Prior to 19.3 the signal was not handled, and the node was abruptly terminated when the OS killed the process.

This Docker configuration can be reused for any Erlang project. In upcoming posts we’ll see how to create a production-ready Helm chart for deploying Erlang clusters to Kubernetes and monitoring them with Prometheus and Grafana.
