In my last article, I laid out the key values you get from running systems in containers. Go and have a read if you want a refresher on that. This time, I’m going to talk more about my experiences running container workloads on Windows. The good. The bad. And the ugly (yes, I really went there. Sorry).
Let's start with the headline, shall we, before digging into some of the detail? Yes, you can absolutely run your Windows application workloads within containers AND take advantage of all those benefits of containerisation, BUT containers on Windows just aren't as mature as containers on Linux, and there's far less documentation and guidance around. Expect to do more experimentation, debugging and tweaking of settings. I'd also discourage starting your container journey from scratch purely on Windows. It is possible, but understanding all the concepts from a Linux perspective and then transitioning to Windows is a lot easier. Yes, this does mean you'll need to spend some time getting your Linux admin skills up to date, but it's all useful knowledge.
So, have I set your expectations suitably low? Great! Let's begin.
The easiest way to get started with containers on Windows, for a test or development environment, is to install Docker Desktop[1]. Do check that the licensing terms work for your situation, but this is the simplest way forward.
Docker Desktop comes with everything you'll need to get going on a Windows computer, covering both Linux and Windows workloads, so you don't even need to set up a separate virtual machine if you're only interested in Linux containers. Docker Desktop also comes with the option of a Kubernetes install[2], but don't worry about that for now. The installer will sort out all the configuration for you, so it's a great way to get an environment up and running quickly for trying things out.
If you're looking to run containers on a Windows Server, and for production usage, you're not going to be using Docker Desktop. I'm sure you weren't thinking about that, but seriously, don't do it. Don't even think about it. For Windows servers, you can install the Docker engine and client tools from the binaries available here (https://docs.docker.com/engine/install/binaries/#install-server-and-client-binaries-on-windows). Originally, I used the PowerShell instructions still floating around the internet, but the 'binaries' approach really is pretty easy, and it makes it really easy to upgrade to newer versions[3].
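For reference, the install amounts to something like this – a minimal PowerShell sketch, run as Administrator, assuming the default install location; the version number in the download URL is just an example, so check the page linked above for the current release:

# Enable the Containers Windows feature first (needs a restart)
Install-WindowsFeature -Name Containers
# Download and extract the engine and client binaries (example version number)
Invoke-WebRequest -Uri "https://download.docker.com/win/static/stable/x86_64/docker-24.0.7.zip" -OutFile docker.zip
Expand-Archive docker.zip -DestinationPath $Env:ProgramFiles
# Put the client on the machine PATH, then register dockerd as a service and start it
[Environment]::SetEnvironmentVariable("Path", $Env:Path + ";$Env:ProgramFiles\docker", [EnvironmentVariableTarget]::Machine)
& "$Env:ProgramFiles\docker\dockerd.exe" --register-service
Start-Service docker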
All done? Great, let's start running Windows containers. With each install, and that includes server installs, I like to do the traditional Docker Hello World and run the following, making sure that you have Windows containers enabled in Docker Desktop:
docker run hello-world
That works for both Linux and Windows and just confirms everything is working correctly: that the Docker client is in your system path, and that you've got the necessary permissions to run it.
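As an aside, you can flip Docker Desktop between the Linux and Windows daemons from the command line as well as from the tray icon; the path here assumes a default Docker Desktop install:

& 'C:\Program Files\Docker\Docker\DockerCli.exe' -SwitchDaemon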
For Linux containers, that's your first taste of deploying some pre-packaged software. You've probably come to Docker as a way of easily installing existing 3rd party applications. Well, head on over to Docker Hub (https://hub.docker.com/), find a pre-built image for the software you're after, and fill your boots. When it comes to Windows containers, sadly, the story isn't quite the same. Microsoft do an excellent job of providing pre-built and regularly updated base images for their platform, including .Net Framework versions and the newer .Net (Core) versions. They're great, and you'll use them a lot… but if you're looking for a readily installable Windows software image, you're going to be looking for a long time (and not finding much). Sorry, that's just the way it is. Vendors of software that runs on Linux have really jumped on board with providing official Docker images. For Windows? Not so much. Microsoft don't even provide a Windows SQL Server image any more (it only ever got to the beta stage).
Right, is that all the negative stuff out of the way so we can focus on what you CAN do rather than what you CAN'T? Well, no, not quite: there are a couple of other things I need to highlight when it comes to running container workloads on Windows. After that, we'll talk about the positives.
Once you've worked with containers on Windows for any length of time, you'll come up against the dreaded Windows Server version compatibility (https://learn.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/version-compatibility?tabs=windows-server-2022%2Cwindows-11) issues[4]. The linked page goes a long way to explaining the details. In summary, due to the way that Windows is implemented, container images built for one version of Windows Server won't automatically run on another. Microsoft are working on addressing this, and newer versions of Windows have better compatibility, but you're still left with two options:
1) Build your application image for the target server version
2) Use Hyper-V isolation on the server (explained in the above link).
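In practice, the first option just means pinning the base image tag in your Dockerfile to the host's OS version – as a sketch, an image destined for Server 2019 hosts starts from:

FROM mcr.microsoft.com/windows/servercore:ltsc2019

and you'd rebuild from the ltsc2022 tag for Server 2022 hosts.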
In my experience, I tend to apply the first workaround for most of my container images, although when recently adding a Server 2022 VM to our Docker cluster, I turned on Hyper-V isolation by default[5] so that it works with all the existing 2019-targeted images out of the box. It just adds a bit of overhead to starting up each container, which is manageable for our workloads but might not be acceptable for yours.
The other missing capability that I come up against regularly is persistent storage for containers. This is one area where Windows containers lag far behind Linux. You might have heard about the ephemeral nature of containers, which is one of their strengths in my opinion, but there are definite situations where a container needs some storage that persists between container instantiations[6][7]. In these cases, you basically have one option (not quite true for k8s, as you'll see in a future article), and that is bind mounts (https://docs.docker.com/storage/bind-mounts/), which is a pain when you want to have multiple Windows hosts and allow those containers to run on any of them. There's no magic Docker way of dealing with it (unlike the volume plugins for Docker on Linux); you need to configure some shared storage on those Windows hosts and bind mount folders within that shared storage for your containers. In our case, we've been using DFS replication (https://learn.microsoft.com/en-us/windows-server/storage/dfs-replication/dfsr-overview) and that has been working well for our needs. It's just one more thing to set up when we provision a new host.
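As a sketch of what that ends up looking like (the DFS folder, container path and image name here are all made up for illustration), each host bind mounts the same replicated folder into the container:

docker run -d --name webapp -v C:\DfsShare\webapp-data:C:\App\Data mycompany/webapp:2019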
Another niche area to be aware of is the use of Windows Server features within Windows containers. There are limits on which server features you can use in a container. For example, you can't create and run an Active Directory domain from within a container[8], whereas you could do this on Linux by running Samba in a container. Adding extra features to IIS, on the other hand, like turning on Classic ASP[9] or the URL rewriting module, is possible and works well.
So, what are the good bits then, if I keep listing all these things you can't do? Well, once you deal with compatibility issues and persistent storage, with recent versions of Windows (Windows 10, Server 2019), you've got pretty much everything else you need to containerise your workloads, and Microsoft keep improving the experience with each new version of Windows. Networking just works, including overlay networking; scheduling of containers just works; and, as you'll see in the next article, clustering hosts using Docker Swarm Mode just works out of the box.
We might not have a load of pre-built application images targeting Windows, but Microsoft provide updated Windows base images (https://hub.docker.com/_/microsoft-windows-base-os-images), with a choice of starting points targeting different needs, at different sizes.
The 'nanoserver' image is the smallest, with the least extra stuff pre-installed. I do find this one a bit too barebones for a lot of our use cases, but it has its uses. 'servercore' was the basis for most of my Windows images until Windows Server 2022 came along with the 'server' image, which I've been using ever since. Microsoft also provide ready-to-go .net SDK and runtime images (both .net Framework and .net Core), which have been great for easily deploying our .net apps.
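If you want to see the size differences for yourself, pull the variants and compare – the tags here are the Server 2022 LTSC ones:

docker pull mcr.microsoft.com/windows/nanoserver:ltsc2022
docker pull mcr.microsoft.com/windows/servercore:ltsc2022
docker pull mcr.microsoft.com/windows/server:ltsc2022
docker images "mcr.microsoft.com/windows/*"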
From these images, we’ve been able to deploy Classic ASP sites, .net Framework sites, .net Core sites, and even containers that run PowerShell scripts on a schedule.
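To give a flavour, a .net Framework site needs little more than this – a minimal sketch where the ./site folder stands in for your published web app:

FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8
# Copy the published site into the default IIS site folder
COPY ./site/ /inetpub/wwwroot/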
What’s next? Well, once you’ve got your containers running, you need to be thinking about the delight that is Container Orchestration and what options you have there. That’s what we’ll cover in the next article.
[1] https://docs.docker.com/desktop/install/windows-install/ Use the WSL2 backend if at all possible. It just makes things easier.
[2] I’ll talk about the pros and cons of this in a later article
[3] Just stop the Docker service, replace the binaries with the newer version, after reading the release notes of course, and start the service again. Nice and easy.
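In PowerShell terms, the upgrade amounts to something like this, assuming the same install location as before (docker-new.zip stands in for whatever release you've downloaded):

Stop-Service docker
Expand-Archive docker-new.zip -DestinationPath $Env:ProgramFiles -Force
Start-Service docker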
I've said 'easy' quite a lot in the last couple of sentences. It's odd to read, I know, but given how difficult or annoying other aspects of running Docker on Windows are, I really do want to highlight the easy stuff.
[4] Just be aware that Linux containers aren't completely immune to similar kernel compatibility issues; it's just that, due to the way most application software interacts with the kernel, you don't come across them in many use cases.
[5] This is done via the daemon.json file, with the following line:
"exec-opts": ["isolation=hyperv"]
You can enable Hyper-V isolation on a per-container basis, but for us that would have meant updating all the config files to include this setting, and those containers would then have run in Hyper-V mode on 2019 hosts too, which wasn't ideal.
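For completeness, the per-container version is just a flag on docker run; this example prints the Windows version from inside a 2019-based container:

docker run --rm --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2019 cmd /c ver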
[6] This is also really important when running containers on a cluster of hosts, where a container could be scheduled on any of the hosts – yes, you can limit which hosts a container runs on, but you shouldn't do that just to work around issues like this.
[7] One of the common situations where I'd want a container with persistent storage was configuration files that I'd like to change. It's also worth highlighting that there are several options for managing this, from environment variables, to secrets, to config settings, which should cover a wide array of these scenarios. So, do look for the appropriate mechanism before assuming that you need persistent storage.
[8] Note that while you can't run an AD domain from a container, you can run containers that are part of an AD domain. Microsoft has thorough documentation here: https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/manage-serviceaccounts
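The linked docs cover the gMSA setup in full, but the end result is a credential spec file (looked up under C:\ProgramData\Docker\CredentialSpecs) that you pass at run time – the spec file name and image here are placeholders:

docker run --security-opt "credentialspec=file://webapp01.json" --hostname webapp01 mycompany/webapp:2019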
[9] Note that you do this in your Dockerfile when building the container image, running the following:
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
RUN Install-WindowsFeature -Name Web-ASP; Install-WindowsFeature -Name Web-ISAPI-Ext