Building small Docker images for Crystal apps
Crystal is a pretty great new language that brings together a lot of interesting ideas in just the right combination. We have been using it for some small but high-throughput microservices at work and it has been performing extremely well. One mildly unintuitive (and not that well documented) part was how static linking actually works and how it fits into a Docker image. In this post I will document an easy way to compile almost any app into a smallish Docker image.
Compilation stage
Every Docker version released since about 2017 has supported multi-stage builds, which (among other things) allow compilation to happen in a separate image from the final “release” image. This makes it easy to ship images without a compiler toolchain in them, without resorting to multiline RUN trickery. The difference is substantial: the build stage is over 500 MB in size, while the final image built with this method will often be smaller than 20 MB, depending on the size of your application.
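Once the full Dockerfile from this post is in place, you can check those numbers for yourself by building the compilation stage on its own and comparing the results with docker images. This is just a sketch, assuming the stage name build and the example image name my_app used throughout this post:

docker build --target build -t my_app:build .   # only the compilation stage
docker build -t my_app .                        # the full multi-stage build
docker images my_app                            # compare the two sizes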
Since static linking for Crystal applications only works on Alpine Linux, we will have to start from something Alpine-based. You could build such an image yourself, but there are also pre-built images that come with the Crystal compiler already installed. For this post we’ll use the one built by Durosoft:
FROM durosoft/crystal-alpine:0.28.0 as build
Next, we can install any shards we need. While we could fetch the shards and compile the app in one go with shards build --static, that would prevent Docker from caching the installed shards. So we install the shards first and compile later, in a separate layer:
COPY shard.yml shard.yml
# shards install --production refuses to run without a lock file,
# so copy shard.lock in alongside shard.yml
COPY shard.lock shard.lock
RUN shards install --production
Now that all the dependencies are in place, we can copy in the rest of the project and compile. You will want at least the --static flag to get a statically linked binary, but in almost every case you will also want --release to enable optimisations. The example below uses a hypothetical application named my_app; if your application is named differently, just switch out every instance of my_app for its actual name.
COPY . ./
RUN crystal build src/my_app.cr --release --static
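To convince yourself that the binary really is statically linked, one option (again a quick sketch, assuming the stage and image names used in this post) is to build only the compilation stage and point musl’s ldd at the result:

docker build --target build -t my_app:build .
docker run --rm my_app:build ldd my_app
# For a fully static binary, ldd should report that it is not a valid dynamic
# program rather than print a list of shared libraries.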
Release stage
Now that we have a compiled app, we can have a look at what we need to run it. Confusingly, static linking does NOT mean that there are no runtime dependencies left, especially for apps larger than “Hello World”. This blog post from Manas documents a way to copy exactly the required .so files over from the build image.
An easier way is to accept that building from scratch or busybox images is often not what you want anyway, since they are a little bit TOO minimal. For example, most applications that make outgoing HTTPS requests need CA certificates to set up the connection, and it is very often useful to have simple debugging tools available in the container. To get access to these, we’ll just use a basic Alpine image as the base for our release stage:
FROM alpine:3.10
COPY --from=build my_app my_app
# Or whichever other port you want exposed, if any
EXPOSE 3000
# Start the app by default when the container runs
CMD ["./my_app"]
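If your app makes outgoing HTTPS requests, or you want a debugging tool or two at hand, you can extend the release stage with a few Alpine packages. This is just an illustrative sketch; ca-certificates and curl are real Alpine packages, but install whatever your app actually needs:

# CA certificates for outgoing HTTPS, curl for quick debugging inside the container
RUN apk add --no-cache ca-certificates curl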
You can then run the resulting image with docker run, or pass it to a container orchestrator like Kubernetes or Nomad.
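For a quick local test, assuming the Dockerfile above and the example port 3000, that could look like this:

docker build -t my_app .                # build both stages and tag the small release image
docker run --rm -p 3000:3000 my_app     # run it, forwarding the example port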
Conclusion
It is pretty straightforward to build a small image for Crystal apps, but it really needs multi-stage builds so that you do not have to include the entire compiler toolchain in the image. As with almost all minimal images, you probably do not want the most barebones image possible, since that makes “real world” concerns like including certificates and debugging really difficult. Compiling with --static and then copying the resulting binary into a minimal Alpine Linux image is a great compromise between image size and power.
The complete code can be found here as a gist.