Written on 12th December 2023 - 7 minutes

Why you might not actually need Kubernetes

We delve into the question of why you might not actually need Kubernetes, exploring the complexities, risks, and considerations before jumping into the world of container orchestration.

Let’s start with that provocative title, shall we? This is a common theme of discussion on the internet. Let’s be honest: k8s is complicated, and with that complication comes risk. The risk of accidentally leaving some service open to abuse, of not configuring something correctly and getting poor performance, or of running up excess hosting costs. I’ve seen it posited that k8s was designed for enterprises at Google’s scale, and most people aren’t running at that scale, so do you actually need Kubernetes?

There’s a grain of truth to that view. K8s includes a lot of capability that you might never take advantage of. This capability that you don’t need might get in your way and slow you down. Yes, perhaps, but out of the box it also comes with pretty much everything that you need in a production container cluster. You don’t have to go setting up some custom solution that needs extra investigation, documentation, and training. There’s a lot to be said for walking the well-trodden path. Things are more likely to work first time, whereas I’ve had all sorts of fun finding that configuration that works ‘just so’ for Swarm Mode. These decisions are always a trade-off, based on your priorities and needs for your situation. You might find that Swarm Mode gives you everything you need. You might find that, with a couple of custom add-ons, you’re good to go and the whole system and all its moving parts can be understood. Good for you. That’s a completely valid situation.

Another thing to consider is the absolute need for a Linux node for the control plane; you just can’t get away from that. A production k8s environment is much harder to set up than Swarm Mode, but there are distributions[1] of k8s that make this easier, and all the major cloud vendors have their own managed k8s offerings, so you do have plenty of options. Just don’t go down the road of installing all the components yourself – a lot of maintenance headaches would be ahead.

Now let’s talk about k8s and Windows for a brief moment. Kubernetes has supported Windows workloads, including adding Windows hosts to the cluster, since v1.14, released back in 2019. So it’s stable, usable, and a supported use case. That’s the good part. Setting up a development environment for k8s that will run Windows workloads is more effort, sadly[2]. I’ve tried a few options and your best bet is to install Microk8s onto an Ubuntu virtual machine, add a Windows Server virtual machine to that network, and follow these instructions: https://microk8s.io/docs/add-a-windows-worker-node-to-microk8s. You’re limited in the choice of Storage Classes on the Windows host, but for development that’s fine. When it comes to production, I feel you’re better off looking at a managed solution from one of the major cloud vendors. For Windows support, no surprises, Microsoft have this covered very well: they make it very easy to deploy k8s on Azure with Windows nodes. Amazon also support Windows nodes in EKS, but there’s more effort required on your part (go to https://docs.aws.amazon.com/eks/latest/userguide/windows-support.html to enable Windows support, then you can add a managed node group for Windows). Google’s GKE also supports Windows nodes (https://cloud.google.com/kubernetes-engine/docs/concepts/windows-server-gke); I’ve not had any experience with that, so I can’t comment on how well it works.

Now let’s dig a bit deeper into k8s and understand more about how it directly compares to Swarm Mode. For me, this has been the easiest route into k8s: rather than going straight to k8s, I found building up that container knowledge step-by-step a lot easier.

Swarm Mode | Kubernetes | Comments
Swarm Mode: Nodes | Kubernetes: Nodes

Pretty much the same concept here. These are the hosts that make up your cluster.
Swarm Mode: Managers, leader and Workers | Kubernetes: Control Plane and Workers

In Swarm Mode, the hosts that manage the cluster are all ‘manager’ nodes, with one of them elected as the ‘leader’. Workers are part of the cluster and are allocated containers. In k8s, the equivalent to the managers is the control plane, a set of hosts that manage the cluster. Worker nodes work in the same way.
Swarm Mode: Stacks | Kubernetes: Deployments

The highest level of workload in Swarm Mode is the ‘stack’. This entity groups the services, networks, volumes, secrets, and configs together into a system.

In k8s, the Deployment object is probably the most analogous to a Swarm Mode service. The Deployment defines the update strategy and self-healing settings.
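For illustration, a minimal Deployment definition might look something like this (the name, labels, and image are just placeholders):

```yaml
# A minimal Deployment: three replicas of one container image,
# with the update strategy defined alongside the pod template
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```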

Swarm Mode: Services | Kubernetes: ReplicaSets / StatefulSets / DaemonSets

Services in Swarm Mode are where all the details of the individual parts of your application are defined. This includes the container image, number of instances, constraint and placement preference settings around where the container(s) are hosted, update strategy settings (which in k8s are covered by the Deployment object), and a few other settings about the actual containers run by the orchestrator.

In k8s, these settings are spread around other objects depending upon what you’re trying to achieve. The closest aligned objects are the ReplicaSet (used for managing a set of pod replicas), StatefulSet (used for deploying your app’s stateful components), and DaemonSet (used to ensure your nodes each run a copy of a pod). The settings for each of these are, in essence, combined into the one set of settings for the Service in Swarm Mode. For example, if you want a container to run on each host, you’d use a DaemonSet in k8s and set the Service’s ‘deploy.mode’ setting to ‘global’ in Swarm Mode. If you just want a scalable set of containers, you’d use a ReplicaSet in k8s and the ‘deploy.mode’ and ‘deploy.replicas’ settings on the Service in Swarm Mode.
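As a sketch of that ‘one copy per host’ example (the service name and image are hypothetical), the same intent expressed both ways:

```yaml
# Swarm Mode compose file: run one copy of the service on every node
services:
  monitor:
    image: example/node-monitor:latest   # placeholder image
    deploy:
      mode: global
---
# Roughly equivalent k8s object: a DaemonSet
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitor
spec:
  selector:
    matchLabels:
      app: monitor
  template:
    metadata:
      labels:
        app: monitor
    spec:
      containers:
        - name: monitor
          image: example/node-monitor:latest
```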

This structure in k8s, while more confusing at first, allows for greater extensibility. If there were a new style of pod deployment for k8s, a new object type and controller could be created without impacting the core k8s system. In Swarm Mode, by contrast, the Service object would need to be modified, the compose file format updated, and associated tools updated to support those changes.

Swarm Mode: Tasks | Kubernetes: Pods

You don’t define Tasks directly in Swarm Mode; they get created for each instance of the Service required by the orchestrator.

In k8s, Pods are the lowest-level object that you directly interact with, although a Pod can contain multiple containers that are always deployed together, whereas a Task in Swarm Mode represents a single container.
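A multi-container Pod might look like this (the sidecar name and images are placeholders):

```yaml
# A single Pod running two containers that are always scheduled
# together on the same node and share the Pod's network namespace
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
    - name: web
      image: nginx:1.25
    - name: log-shipper                  # hypothetical sidecar container
      image: example/log-shipper:latest
```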

Swarm Mode: Networks | Kubernetes: Networks and Network Policies

Networks in k8s can be quite complicated. They start out as a flat network space containing all pods, but you can do quite a lot more should you need it. Networking in k8s follows a plugin model, and there are multiple network providers that you can use depending upon your needs. When you look at cloud-hosted k8s clusters, you’ve also got options to link into virtual networks.

To manage and control communications between pods, you need to deploy NetworkPolicy definitions.
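As a sketch (the labels are hypothetical, and note that a NetworkPolicy is only enforced if your network plugin supports it), a policy restricting who can reach your database pods might look like this:

```yaml
# Allow traffic into pods labelled app=db only from pods labelled app=api
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-api
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api
```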


In Swarm Mode, Stacks have their own isolated network by default. You can specify multiple networks, and link between stacks when you need to, but the default situation starts with isolation.

Swarm Mode: Ingress network | Kubernetes: Ingress, LoadBalancer, Endpoints, and Services

The ingress feature of Swarm Mode is quite easy to use. It doesn’t exist as a separate object in Swarm Mode, exactly, but as a feature of the overall orchestrator. When you initialise Swarm Mode, an overlay[3] network called ‘ingress’ is created that allows incoming communication to the cluster to connect to exposed ports from one or more containers.

This is the extent of out-of-the-box ingress control that you get with Swarm Mode.

Ingress is a first-class object within k8s, covering the kind of reverse proxy traffic management I was talking about in my previous article, while a Service of type LoadBalancer handles direct access to ports exposed from containers.
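For illustration (the hostname and service names are placeholders, and an Ingress only does anything once an ingress controller is installed in the cluster):

```yaml
# Route HTTP traffic for a hostname to a backing Service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
    - host: example.com              # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web            # assumes a Service named 'web' exists
                port:
                  number: 80
---
# Direct port exposure via a LoadBalancer-type Service
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```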

Swarm Mode: Secrets and Configs | Kubernetes: Secrets and ConfigMaps

The similarities between Swarm Mode and k8s are pretty clear here. The syntax is different but the mechanisms for use work in a very similar way.
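On the k8s side, the definitions are short (the names, keys, and values here are placeholders); both can then be surfaced to pods as environment variables or mounted files:

```yaml
# Non-sensitive configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: info
---
# Sensitive values; stringData lets you supply them unencoded
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
stringData:
  DB_PASSWORD: change-me             # placeholder value
```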
Swarm Mode: Volumes | Kubernetes: StorageClasses, PersistentVolumes and PersistentVolumeClaims

When it comes to persisting the data of your containers, Swarm Mode appears quite simple: you’ve got the Volume definition associated with a service. It does get a bit more complicated, as there are now Volume plugins that you can install to access different storage devices and mechanisms in a consistent way, except that Swarm Mode on Windows doesn’t support plugins, so any Windows workloads pretty much have to use bind mounts and then some underlying operating system mechanism to replicate the data between hosts in the cluster.

K8s storage initially looks more complicated with StorageClasses, PersistentVolumes and PersistentVolumeClaims. This is probably where k8s is a bit more complicated than it needs to be, unless you are at Google’s scale, but really you’re just defining a PersistentVolume and then a PersistentVolumeClaim to link up storage to a pod rather than the one Volume definition in Swarm Mode. It’s not that bad, really.
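A sketch of that linkage (the StorageClass name, image, and mount path are assumptions; with a StorageClass in place, the PersistentVolume is typically provisioned for you):

```yaml
# Claim some storage from a StorageClass...
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard         # assumes a 'standard' StorageClass exists
  resources:
    requests:
      storage: 1Gi
---
# ...then mount the claim into a pod
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
    - name: db
      image: postgres:16             # placeholder image
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data
```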

Swarm Mode: [no equivalent] | Kubernetes: Role-Based Access Control

The default security setup for Swarm Mode is pretty basic: if you have access to it, you’ve got access to everything. There’s no out-of-the-box Role-Based Access Control (RBAC).

K8s has RBAC out of the box, as standard. The cloud-based versions can also integrate with the native security mechanisms of the cloud, so you have that option too.
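As a sketch (the user name is a placeholder), a Role grants permissions within a namespace and a RoleBinding hands them to a subject:

```yaml
# Grant read-only access to pods in the 'default' namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
# Bind that role to a user
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: User
    name: jane                       # placeholder user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```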

Swarm Mode: [no equivalent] | Kubernetes: Jobs and CronJobs

There’s no built-in support for timed jobs in Swarm Mode, as I’ve discussed previously. You can use a third-party container image to run timed jobs, so it’s not difficult to achieve.

Again, k8s has full support out of the box: the Job object handles run-to-completion workloads, and the CronJob object runs them on a schedule.
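For illustration (the job name and image are hypothetical), a nightly scheduled job might look like this:

```yaml
# Run a cleanup container every day at 02:00
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-cleanup              # hypothetical job
spec:
  schedule: "0 2 * * *"              # standard cron syntax
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: cleanup
              image: example/cleanup:latest
```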

Hopefully, you can now see the parallels between the two orchestrators. There are a lot of similar concepts that they both share. As I’ve said, I do see this as a different way to gain proper understanding of k8s, by learning Docker, Docker Compose, and Swarm Mode first.

Next time, in the final article, I’m going to summarise the key points of everything I’ve discussed, briefly talk about alternative tools to the Docker command line, and look at some recommendations for what to do in different situations.

[1]  Ok, so there isn’t really one complete system that you can absolutely call, definitively, ‘Kubernetes’. Rather, it is a collection of standards and multiple implementations of those standards. These implementations are bundled together into ‘distributions’ of Kubernetes. Often, these distributions are certified to be compliant with the collection of standards so you know that your Deployment definitions etc will work on them without issue.

This can be really confusing to the newcomer to k8s. I’d generally advise you to not worry about, for example, which CNI implementation is most suitable for your needs until you’ve got some experience deploying software to k8s. Start with a dev environment using Minikube or with a managed installation in your preferred cloud.

[2] If you just want to run a Linux-based k8s environment on Windows, then you’ve got lots of options. Docker Desktop includes k8s. Minikube (https://minikube.sigs.k8s.io/docs/start/) has supported Windows installations for ages. You can install Microk8s (https://microk8s.io/) in Windows. You can run Kind (https://kind.sigs.k8s.io/) in WSL2 quite happily. Just be aware that all of these options include some form of virtualisation though, so your dev environment must support that.

[3] An overlay network exists on all hosts in the cluster and allows for the communication between containers irrespective of where within the cluster that container is hosted.
