Chuniversiteit.nl
Living the Pipe Dream

Access your Docker Compose services via easy-to-remember names

Did you know you can make your Docker web app containers available via simple names instead of hard-to-remember port numbers?

(Image: an inferior brown bear assaults a Cylon. Bears beat Battlestar Galactica.)

Docker Compose makes it possible to spin up an entire development environment on your local machine with just a single command, even if that environment consists of a dozen different frontends, backends, and backing services.

As a developer, it’s not enough that the system runs – you also need some way to interact with it. The easiest way to do this is by exposing container ports, so you can, for example, access your web application at http://localhost:8080 or your Elasticsearch instance at http://localhost:9200.

Here’s a public service announcement: it doesn’t have to be this way. You can (ask AI to) use domain names that actually make sense, even for local development.

The problem

First, let me illustrate what problem we are trying to solve here. To keep it simple, I will use hashicorp/http-echo instead of a real web app. http-echo is a tiny web server that can be configured to return a static string.

In the compose.yaml below, I have configured http-echo to return the string Bears and forwarded port 5678 from the host, since that is what http-echo listens on by default:
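A minimal sketch of such a file (the service name bears is an arbitrary choice):

```yaml
services:
  bears:
    image: hashicorp/http-echo
    command: ["-text=Bears"]
    ports:
      - "5678:5678"
```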

While this Compose project is running, opening http://localhost:5678 in your browser should show a plain web page containing the text Bears.

A typical web development project is going to have more than just one container running though. In addition to a frontend, you might have an API or something like Storybook.

Let’s add a second http-echo instance to our Compose project. This time, we’ll make it return the string Beets. Because port 5678 is already taken by the bears container, we’ll “randomly” choose another port number. In this case, I went for 5679.
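The updated compose.yaml could look something like this:

```yaml
services:
  bears:
    image: hashicorp/http-echo
    command: ["-text=Bears"]
    ports:
      - "5678:5678"
  beets:
    image: hashicorp/http-echo
    command: ["-text=Beets"]
    ports:
      # host port 5679 maps onto http-echo's default port 5678
      - "5679:5678"
```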

Now, we have two web servers running on ports 5678 and 5679, respectively returning the strings Bears and Beets.

This works, but it suffers from a number of issues:

  1. It forces you to remember port numbers, but most people aren't good at remembering number sequences, especially ones that are longer (e.g. 15672, 27017, 61616) or look similar to each other (80, 8080, 8081, 8888, etc.).
  2. All web applications are served directly on localhost. There’s nothing in the web address that tells you (or the browser) anything about what is being served. In fact, the same port might be reused across projects, potentially causing hard-to-debug issues with cookies, among other things.

It would be nice if these web applications could be made available via addresses that are easier to remember and actually have some relation to the apps they refer to.

A basic solution

This is a pretty common problem, and consequently, there are several common ways to solve it.

Most involve some kind of reverse proxy, where web applications are no longer exposed directly but only through an “entrypoint” server that forwards requests to the right web application, often based on (sub)domain and/or request path.

The compose.yaml below uses nginxproxy/nginx-proxy as a reverse proxy. This container handles all requests for port 80 (the default HTTP port), which includes not only localhost, but importantly also everything under it, such as example.localhost and longer.example.localhost.

To make the proxy forward requests to the right container, simply add a VIRTUAL_HOST variable containing the desired domain name to its environment.
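Putting that together, a sketch of the full compose.yaml (the hostnames bears.localhost and beets.localhost are my own choice):

```yaml
services:
  proxy:
    image: nginxproxy/nginx-proxy
    ports:
      - "80:80"
    volumes:
      # nginx-proxy watches the Docker socket to discover containers
      - /var/run/docker.sock:/tmp/docker.sock:ro
  bears:
    image: hashicorp/http-echo
    command: ["-text=Bears"]
    environment:
      - VIRTUAL_HOST=bears.localhost
  beets:
    image: hashicorp/http-echo
    command: ["-text=Beets"]
    environment:
      - VIRTUAL_HOST=beets.localhost
```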

Now, you can simply go to http://bears.localhost and http://beets.localhost to see the texts Bears and Beets respectively.

Good to know

Admittedly, the example above is a bit contrived and your requirements may differ from mine. Here are a few things you might need.

Path-based routing

With nginxproxy/nginx-proxy, multiple containers can share the same hostname provided that they listen to different paths, which you can set using the VIRTUAL_PATH environment variable:
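For example, a sketch that serves both strings from a single hostname (the hostname animals.localhost and the paths are arbitrary):

```yaml
services:
  bears:
    image: hashicorp/http-echo
    command: ["-text=Bears"]
    environment:
      - VIRTUAL_HOST=animals.localhost
      - VIRTUAL_PATH=/bears
  beets:
    image: hashicorp/http-echo
    command: ["-text=Beets"]
    environment:
      - VIRTUAL_HOST=animals.localhost
      - VIRTUAL_PATH=/beets
```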

Large HTTP requests

Out of the box, nginxproxy/nginx-proxy only allows HTTP requests up to 1MB in size. If you need to support file uploads or other types of large request payloads, you will need to manually override nginx’s client_max_body_size.

The idiomatic way to do this is by creating a custom configuration file that sets the client_max_body_size to your desired value (or disables the limit entirely) and mounting it in the container:
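A sketch of what that could look like, assuming a local file named uploads.conf:

```nginx
# uploads.conf
client_max_body_size 100m;
```

```yaml
services:
  proxy:
    image: nginxproxy/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      # *.conf files in conf.d are picked up automatically by nginx
      - ./uploads.conf:/etc/nginx/conf.d/uploads.conf:ro
```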

Alternatively, you can simply override the entrypoint so that the config file is created dynamically during startup:
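A sketch, assuming the image's default entrypoint and command are /app/docker-entrypoint.sh and forego start -r:

```yaml
services:
  proxy:
    image: nginxproxy/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    entrypoint:
      - sh
      - -c
      - |
        # write the config file, then hand over to the original entrypoint
        echo "client_max_body_size 100m;" > /etc/nginx/conf.d/uploads.conf
        exec /app/docker-entrypoint.sh forego start -r
```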

Non-localhost domains

All examples up to this point have used *.localhost addresses. You might be tempted to use some other domain, but localhost has a few major benefits over many common alternatives:

  • .localhost is a special-use domain name with a specific purpose. It will never be registered as a public top-level domain, so it can never suddenly start resolving to someone else’s server.
  • .localhost is automatically resolved to 127.0.0.1, so you don’t have to manually edit your /etc/hosts file.
  • Modern web browsers treat .localhost differently from other domains. You’ll have fewer issues with cookies, WebSockets, and certificates.

I would recommend that you “namespace” your virtual hosts to avoid conflicts with other projects. For example, for a project named foo at a company called Contoso, use virtual hosts such as foo.contoso.localhost and api.foo.contoso.localhost.

Internal networking

It’s important to keep in mind that the reverse proxy is there to make containers more easily accessible from the host.

Containers within a Docker Compose project cannot reach other containers via the reverse proxy, only via their internal service name (e.g. http://bears).

Because localhost by definition resolves to the local machine – which, inside a container, is the container itself – you cannot even make your containers connect to the proxy by editing their /etc/hosts files, as most HTTP clients special-case .localhost names and will not bother checking it.

If you really, really, really want to use .localhost addresses within your container as well, you can run a reverse proxy within the container that forwards all requests for *.localhost to http://proxy, which will then route the request to the right container:
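One way to sketch this is with a small nginx sidecar that shares the application container’s network namespace and forwards everything to the proxy while preserving the Host header. The service and file names below are assumptions, and the app service is assumed to be attached to the proxy network so that the hostname proxy resolves:

```yaml
services:
  app:
    image: alpine
    command: ["sleep", "infinity"]
  localhost-proxy:
    image: nginx
    # share app's network namespace, so nginx listens on app's 127.0.0.1:80,
    # which is where *.localhost addresses resolve to
    network_mode: "service:app"
    volumes:
      - ./localhost-proxy.conf:/etc/nginx/conf.d/default.conf:ro
```

```nginx
# localhost-proxy.conf
server {
    listen 80;
    location / {
        # forward to the nginx-proxy container, keeping the original Host
        # header so it can route the request to the right container
        proxy_pass http://proxy;
        proxy_set_header Host $host;
    }
}
```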

Multiple projects

Chances are that you have more than one Docker Compose project that you want to run simultaneously. Let’s assume that you have two projects, bears and bsg, each with their own compose.yaml.

We still need a reverse proxy that listens on port 80 and forwards requests to the right containers, but now the containers are part of two different projects that would normally be completely isolated from each other. Moreover, we still want each Compose project to be able to function independently.

The cleanest solution is to move the reverse proxy to its own Compose project. This project should not only declare the proxy service, but also the proxy network:
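A sketch of the proxy project’s compose.yaml:

```yaml
# proxy/compose.yaml
services:
  proxy:
    image: nginxproxy/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - proxy

networks:
  proxy:
    # give the network a fixed name, so other projects can refer to it
    name: proxy
```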

The compose.yaml files for bears and bsg can be fairly minimal:
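For example (the string returned by the bsg service is my own guess):

```yaml
# bears/compose.yaml
services:
  bears:
    image: hashicorp/http-echo
    command: ["-text=Bears"]
    ports:
      - "80:5678"
```

```yaml
# bsg/compose.yaml
services:
  bsg:
    image: hashicorp/http-echo
    command: ["-text=Battlestar Galactica"]
    ports:
      - "80:5678"
```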

You may notice two major issues here:

  1. All three projects now declare a port mapping on host port 80. This allows us to start each project independently from the others and access it on http://localhost without a port number, but they cannot all run at the same time.
  2. We no longer tell Docker how to wire these projects together, so our human-readable .localhost addresses no longer work.

We can fix both regressions by adding a compose.proxy.yaml file that adds proxy-specific configuration to the bears and bsg projects. The resulting directory structure could look like this:
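For instance:

```
.
├── proxy/
│   └── compose.yaml
├── bears/
│   ├── compose.yaml
│   └── compose.proxy.yaml
└── bsg/
    ├── compose.yaml
    └── compose.proxy.yaml
```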

The compose.proxy.yaml files for bsg and bears do four things:

  1. Declare a dependency on an external Docker network called proxy. If you try to start any of these projects before the proxy project, they will refuse to start.
  2. Define the VIRTUAL_HOST. We could have left this in the original compose.yaml files, but this is a bit cleaner.
  3. Attach the bears and bsg services to the proxy network. This allows the reverse proxy to find them.
  4. Disable the port mappings defined in the original compose.yaml files, since they are no longer necessary (and would have caused a port conflict anyway).
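For bears, a sketch of compose.proxy.yaml could look like this (bsg is analogous; note that the !override tag requires a reasonably recent version of Docker Compose):

```yaml
# bears/compose.proxy.yaml
services:
  bears:
    environment:
      - VIRTUAL_HOST=bears.localhost
    networks:
      - proxy
    # reset the port mappings declared in compose.yaml
    ports: !override []

networks:
  proxy:
    external: true
```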

Setting everything up now requires two stages.

First, start the proxy project. If you don’t, the steps below will fail with the following error message: network proxy declared as external, but could not be found.
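Assuming the directory layout above:

```shell
cd proxy && docker compose up --detach
```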

Then, start all remaining projects with the -f (--file) option that explicitly tells Docker Compose which configurations it should use:
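For example, from the parent directory:

```shell
docker compose -f bears/compose.yaml -f bears/compose.proxy.yaml up --detach
docker compose -f bsg/compose.yaml -f bsg/compose.proxy.yaml up --detach
```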

Now, you should be able to access both http://bears.localhost and http://bsg.localhost via your web browser!

Season to taste

Depending on your use case, you may want to make some modifications to this setup:

  • If you never need to run these projects (locally or in CI) without the proxy, feel free to simplify the setup by merging our custom compose.proxy.yaml into the regular compose.yaml.
  • If you don’t want to commit the compose.proxy.yaml files, you should add them to a .gitignore file.
  • Alternatively, if you rename the compose.proxy.yaml files to compose.override.yaml, the configuration will be automatically applied on top of the regular compose.yaml file, so you no longer need to use -f when starting projects.