Setting up and maintaining development environments so that they work and keep working as they should can be quite a chore, especially when some part of the environment suddenly changes.
For instance, my Ruby installation is almost guaranteed to break every time I upgrade macOS. Since we use Capistrano to deploy our applications, this effectively means that I can’t start any deployments until I have fixed every single issue that prevents my Ruby and Bundler from working.
Similar things happen with my PHP extensions whenever I use Homebrew to install a new version of PHP on my machine: the extensions won’t work with the new PHP version, and since installing them is not something you do every day, I always need several trips to Google to figure out how to reinstall them without compilation or configuration errors.
And what if you need to work on multiple projects that require different versions of a programming language? For Ruby one can use rbenv or rvm, Python has virtualenv, and PHP has… several solutions, but none with widespread adoption.
Docker, which comes with a standardised method to containerise applications, solves virtually all of these problems.
Unfortunately, I do all of my development work on a company-issued 2018 MacBook Pro, which runs macOS, while Docker only works natively in Linux.
This is where Docker Desktop comes in. Docker Desktop is a tool for Windows and macOS that runs Docker containers inside a hidden Linux virtual machine. Functionally it’s pretty close to the real thing, although its performance can be somewhat abysmal – especially if your projects contain a very large number of files that need to be synced between your Docker container and your host machine.
It just so happens that PHP projects – especially Composer-based ones – tend to consist of a very large number of files, and all of the company projects I work on are written in PHP. Fortunately the performance issues are fixable to some extent and I have been “happily” using Docker Desktop on my Mac for five years or so now.
However, those who know me in person know that I usually prefer Windows over macOS. I’ve also heard some very good things about WSL2 recently, so this seemed like an excellent opportunity to give the newly improved WSL a try on a fresh Windows installation.
WSL stands for “Windows Subsystem for Linux” and is a way to run Linux as a “subsystem” within your Windows environment. However, when I say “a way” I actually mean two ways.
The first version was released in 2016 and allowed users to run Linux processes directly from Windows using the NT kernel. While it was very innovative and quite performant, it also turned out to be very hard to extend.
Version 2 was released three years later. It takes a more traditional approach with a lightweight virtual machine that runs Linux and thus offers full Linux compatibility, while still retaining most of the integrations with the Windows host.
This is kind of what Docker Desktop for Windows already did, but with WSL2 you can now do your development work in a fully featured Linux environment, with Bash and all the other usual tools. Unlike regular virtual machines, it also starts up insanely quickly and gets access to all of the host system’s files and resources. Oh, and Docker Desktop nowadays uses WSL2 under the hood, so you get the best of both worlds – in theory at least.
To learn more about how well WSL works in practice, I spent a weekend “benchmarking” the test suite of one of our company’s largest backend services, which I’ll call the Monolith here, with various (dumb) WSL setups. I put “benchmarking” between quotes because I didn’t actually bother measuring anything, which is usually the entire point of benchmarking things.
Anyway, the rules are fairly simple:

- I’m only interested in the out-of-the-box experience, with default settings and no additional performance tweaks
- I’ll use JetBrains’ PhpStorm IDE to open projects and GitHub Desktop for Windows for version control
For each test, I…

- check out the project’s source code in an empty directory
- start the project using `docker-compose up -d`
- install its dependencies via the CLI using `composer install`
- open the project directory using PhpStorm
- run the entire test suite
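The steps above can be sketched as a single shell function. The repository URL is a placeholder, and PHPUnit is an assumption on my part – the project’s actual test runner isn’t named here:

```shell
# Sketch of one test cycle; the repo URL is a placeholder and PHPUnit
# is assumed as the test runner.
run_test_cycle() {
    git clone git@github.com:example/monolith.git Monolith  # 1. fresh checkout
    cd Monolith || return 1
    docker-compose up -d                                    # 2. start the project
    composer install                                        # 3. install dependencies
    # 4. open the directory in PhpStorm (manual step)
    vendor/bin/phpunit                                      # 5. run the test suite
}
```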
First I test a simple setup that is very similar to how one would usually do it in macOS, but also discouraged by virtually every WSL how-to guide on the internet: doing everything – including file operations – from the host.
There are several reasons why this is not a good idea, but the most important ones are that it greatly limits the speed of file operations and that you lose some functionality related to files and directories, like setting permissions and automatically reloading your applications whenever you modify a file.
I use GitHub Desktop to clone the repository inside my user folder at `C:\Users\Chun\Monolith`. The end result of this process leaves much to be desired, as I now have hundreds of changes due to files that have gotten Windows-style line endings and/or lost their permission info. Fortunately this is fairly easy to fix.
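The fix is roughly a matter of telling Git to check files out with LF endings and to ignore the lost executable bits. A sketch, demonstrated on a throwaway repository – in the real checkout you would set the two `git config` values and then discard the phantom changes:

```shell
# Demonstrated on a throwaway repo; in the real checkout, set the two
# config values and then discard the spurious changes with `git checkout .`.
git init -q demo && cd demo
git config core.autocrlf input   # keep LF line endings on checkout
git config core.fileMode false   # ignore lost/changed permission bits
git config core.autocrlf         # prints "input"
```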
Starting the development environment using `docker-compose up -d` goes smoothly, without any apparent problems. I cannot say the same about running `composer install`: the installation takes 20 minutes, which is about 15 minutes longer than on macOS. PhpStorm also needs quite some time to index the project’s files, but that’s true on any platform.
I couldn’t get the test suite to run via PhpStorm for some reason, but it worked just fine from the command line. By that I mean that it starts executing the test suite and the assertions work as expected. What is not fine is the speed at which it executes the tests: it needed several minutes per test case, which is insanely slow. After waiting for an entire hour it had only finished a handful of tests, with more than 99% still to go. On macOS the entire suite would take 8 minutes at most. Needless to say, I didn’t bother waiting until it had run to completion.
The results for our first setup were not exactly stellar, but that was to be expected, as it’s discouraged for very good reasons. Surely, it would work better if we followed best practices? Surprise, surprise: it does.
Once again, I used GitHub Desktop to check out the project’s source code, but this time inside `\\wsl$`, a special directory that holds the filesystems of your WSL distributions. By putting files in this directory we can avoid a lot of I/O overhead, as everything then happens within WSL.

As GitHub Desktop is a Windows application, the cloning process takes a bit longer than before and the files still get mangled by Windows, but at least it doesn’t get worse from here.
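To make the mapping concrete, here is how the same directory looks from each side. The distribution name “Ubuntu” and the username are placeholders – substitute whatever your own distro is called:

```shell
# Path mapping between Windows and WSL; "Ubuntu" and "chun" are placeholders.
WIN_PATH='\\wsl$\Ubuntu\home\chun\Monolith'   # what GitHub Desktop should open
WSL_PATH='/home/chun/Monolith'                # the same directory, seen from WSL
printf '%s <-> %s\n' "$WIN_PATH" "$WSL_PATH"
```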
After fixing the file permissions I had zero issues setting up the project. Composer installed my project’s dependencies in no time and the Monolith itself also works like a charm.
The rest sadly does not.
PhpStorm needed close to an hour to index all of the project’s files, as it had to access them through WSL via some sort of network drive interface. Working with GitHub Desktop was equally disappointing, as it needed about 15 seconds to pick up changes to files inside the WSL filesystem.
Test suite execution was a little bit quicker than in our first setup, as it managed to complete almost 100 tests in an hour (about 5% of all tests); a major improvement over the handful of tests that we saw earlier, but still a massive regression relative to macOS.
Docker Desktop is the official and recommended way to use Docker on Windows. Like its Mac counterpart, you start it whenever you need Docker. Once it’s ready, you have access to the `docker-compose` commands in your WSL environment. When you shut down Docker Desktop, the commands are no longer available.
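A quick diagnostic sketch for checking this behaviour from a WSL shell – with Docker Desktop stopped, the integration removes the CLI (or leaves it unable to reach the daemon):

```shell
# Reports whether the docker CLI is on PATH and the daemon is reachable.
docker_status() {
    if ! command -v docker >/dev/null 2>&1; then
        echo "docker CLI not found (is Docker Desktop running?)"
    elif ! docker info >/dev/null 2>&1; then
        echo "docker CLI present, but the daemon is not reachable"
    else
        docker info --format 'daemon OK, server version {{.ServerVersion}}'
    fi
}
docker_status
```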
Would our Monolith’s performance improve if we completely ignore the existence of Docker Desktop and just install Docker directly inside WSL? Not according to the internet, but it’s not like that has ever stopped me from doing dumb stuff.
The answer to our question is very simple: no. Everything is just as fast – and slow – as it was with Docker Desktop. The only major difference is that you now have to figure out a good way to start Docker, because the usual service management commands don’t actually do anything in WSL.
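For the record, here is the kind of workaround I mean – a sketch that assumes a Debian/Ubuntu-style distribution. Since WSL doesn’t run systemd as PID 1, the daemon has to be started through the old SysV init script or by hand:

```shell
# Without systemd, start the Docker daemon manually inside WSL.
# Assumes a Debian/Ubuntu-style distro whose docker package ships an
# /etc/init.d script; otherwise fall back to running dockerd directly.
start_docker_daemon() {
    if ! sudo service docker start 2>/dev/null; then
        # no init script (or it failed): run the daemon in the background
        sudo dockerd > /tmp/dockerd.log 2>&1 &
    fi
}
```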
If you are willing to accept how your Linux distribution renders fonts and user interfaces, you can also avoid most of the performance penalties by running your GUI inside WSL.
WSL does not come with an X server, so Linux GUI apps won’t work out of the box. From what I understand, it is possible to get this working, but it may take some effort to get everything running properly.
I didn’t bother for now because of rule #1 and because Microsoft is reportedly working on GUI support in future WSL releases anyway.
Regardless of whether you’ve read the whole thing or just the first line of this article, it should be pretty clear that my first experiences with Docker and WSL2 for PHP app development were somewhat underwhelming.
Note that this does not mean that Docker with WSL2 can’t be useful for you. I also tried building and running some of my other projects, which often use Python, Java, or Node.js, and a few smaller PHP projects. All of those worked just fine.
It also helps a lot if you can use the Visual Studio Code editor, which allows you to view and edit files in WSL directly from a Windows GUI without any performance penalties.
I’m pretty sure we will see some great improvements to WSL2 and Docker Desktop for Windows in the next few years. Until then I’m back on macOS with its okay-ish Docker experience, as sadly most of my work happens in and around the Monolith.
- My experiences with Docker Desktop for PHP apps on Windows were pretty dreadful. Your mileage may vary.