Written on 9th November 2023 - 4 minutes

Container Orchestration

Jon Stace, Head of Technology, shares his expertise on container orchestration: managing container lifecycles, scaling, and more.

In the last couple of articles, I’ve talked about why you should be interested in containers and the pros and cons of running Windows workloads on containers. So, you’re all set, right? I’ve convinced you that you should be running your applications in containers, including your Windows applications. You’ve finely crafted your Dockerfile, and tested everything locally. Perhaps you’ve deployed this onto a production server and you’re happy. What now? Well, my friend, the fun is only just starting[1].

You now need to be thinking about container orchestration[2]. What’s that all about? Well, it’s really just a fancy way of talking about managing your containers: their lifecycle, scaling, networking, and stuff like that.

Out of the box, Docker provides a couple of options for orchestration and then there’s always Kubernetes[3]. But, you don’t necessarily have to immediately jump to Kubernetes. K8s has a lot of moving parts and can be quite intimidating when you first look at it. The chances of misconfiguring something are higher if you go straight down that route. It’s my view that if you understand your other options you’ll be in a better position to make the right choice for you, and learning about these other options works well as a stepping stone to learning about k8s.

Where do we start then? Well, with the Docker command line of course. Out of the box, Docker provides network and port management, handling volumes for persistent data, CPU and memory limits, and environment variables for configuring the application. That covers quite a lot of what you’d want to do when managing your containers. There’s another feature that you get from the command line and that’s the restart policy flag (--restart). This allows you to control when and why a container might restart, e.g. if it crashes. Couple this with the healthcheck features (https://docs.docker.com/engine/reference/run/#healthcheck) and you’ve got some great control over your containers and their lifecycle.
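To make that concrete, here’s a minimal sketch of a single ‘docker run’ command pulling those flags together. The container name, image, volume, and health endpoint are all invented purely for illustration:

    # Hypothetical image and health endpoint, just to show the flags in one place
    docker run -d --name my-api \
        -p 8080:80 \
        -v appdata:/var/appdata \
        -e APP_ENV=Production \
        --memory 512m --cpus 1.5 \
        --restart unless-stopped \
        --health-cmd "curl --fail http://localhost:80/health || exit 1" \
        --health-interval 30s \
        my-registry/my-api:1.0

With that one command, Docker publishes the port, persists data in a named volume, caps the resources, restarts the container if it crashes, and flags it as unhealthy if the health check starts failing.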

In your rush to get straight to Kubernetes, did you realise that all that power was available just from the Docker command line?

So that’s all well and good, but really in that scenario we’re only talking about single containers, and managing them from the command line isn’t exactly ideal – you’d probably end up creating lots of scripts to run your containers, which isn’t a good long-term solution. This is where Docker Compose comes in[4]. Compose files let you declare all of your container settings in one place, and if your system needs multiple containers, you can configure them all from that same single file. And you can version control that file. You can use a consistent, documented approach for all your containers and banish those ad hoc scripts. All the orchestration benefits of the Docker command line, in a much more manageable form.
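As a rough sketch (the service names, images, ports and password below are invented purely for illustration), a Compose file for a two-container system might look something like this:

    services:
      web:
        image: my-registry/my-api:1.0
        ports:
          - "8080:80"
        environment:
          - APP_ENV=Production
        volumes:
          - appdata:/var/appdata
        restart: unless-stopped
        depends_on:
          - db
      db:
        image: postgres:16
        environment:
          - POSTGRES_PASSWORD=change-me
        volumes:
          - dbdata:/var/lib/postgresql/data

    volumes:
      appdata:
      dbdata:

A single ‘docker compose up -d’ then brings the whole system up, and ‘docker compose down’ tears it down again, with the file itself living in version control alongside your code.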

But that’s not really adding anything ‘orchestration’, is it? Sure, the convenience of Compose files is great, but we need to take it up a notch.

This is where Swarm Mode comes in[5]. With Swarm Mode, you get all the existing Docker features and then some. You can specify ‘config’ settings for your containers and ‘secrets’ that are managed across a cluster of hosts. That’s the key thing there, the big upgrade from compose files: multi-host clusters, all managed centrally.
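As a sketch of what that looks like in practice (the config, secret and stack names below are made up), enabling Swarm Mode and pushing configuration into the cluster is only a handful of commands:

    # Turn this Docker host into a (single-node, for now) swarm manager
    docker swarm init

    # Store configuration and sensitive values centrally in the cluster
    echo "log_level=info" | docker config create app-settings -
    echo "s3cr3t-value" | docker secret create db-password -

    # Deploy a compose-format file as a stack across the cluster
    docker stack deploy -c compose.yaml mystack

Services in the stack file can then reference ‘app-settings’ and ‘db-password’ under their configs and secrets sections, and Swarm delivers them to whichever node the containers land on.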

Now, Swarm Mode doesn’t come without its downsides (see my next article for the specifics), and Docker themselves haven’t exactly helped by sending out mixed signals about the longevity of Swarm Mode, but there’s a LOT to like. From a learning curve perspective, Swarm Mode uses the same compose file format, so if you’re familiar with that, you’re in a really strong place. It uses a consensus algorithm (Raft), which means on small clusters you don’t need a dedicated control node (yes, you can do this on k8s too). If you’ve got Docker installed, you don’t need to install anything else to enable Swarm Mode: it’s right there. And (and), AND, you can run it purely on Windows if your workload is made up of Windows containers only. No need to maintain a Linux control node like k8s does. These last couple of points, to me, are killer features for moving along your journey of container orchestration.
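To illustrate that ‘nothing else to install’ point, growing the cluster is just more Docker commands. The token placeholder and address below are illustrative only:

    # On the manager: print the join command (and token) for worker nodes
    docker swarm join-token worker

    # On each extra host (Linux or Windows), run the command it printed, e.g.
    docker swarm join --token <worker-token> 10.0.0.5:2377

    # Back on the manager: list every node in the cluster
    docker node ls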

So, Swarm Mode is pretty useful, right? Why is everyone talking about k8s all the time? What’s all the fuss about? Well, next time, we’re going to talk about the features Swarm Mode could really do with, but doesn’t have, to make it a truly useful orchestrator.

 

[1] for a very specific, limited amount of fun, of course.

[2] cue loud fanfare

[3] There are other orchestration options too, but here I’m really going to focus on the mainstream ones that have decent documentation.

[4] Originally docker-compose was a standalone command, built in Python. You installed it separately from the ‘docker’ command line. In the last couple of years, ‘compose’ has been reimplemented and extended as a subcommand of the ‘docker’ command. So instead of ‘docker-compose’ you can now use ‘docker compose’ in the same style as the other Docker subcommands.

[5] Just be aware that originally there was a container orchestrator called ‘Docker Swarm’, which was replaced in 2016 with ‘Swarm Mode’ (see here for more details: https://dockerlabs.collabnix.com/intermediate/swarm/difference-between-docker-swarm-vs-swarm-mode-vs-swarmkit.html). Keep that in mind when searching for documentation, tutorials, etc. Fortunately, the documentation for Swarm Mode is all in the same location as the base Docker documentation, which makes it easier to navigate.
