I have been an aggressive Kubernetes evangelist over the last few years. It has been the hammer with which I have approached almost all my deployments, the one tool I have mentioned (read: shoved down clients' throats) in almost all my initial communications with clients, and my go-to choice when I was mocking up my first startup (saharacluster.com).
A few weeks ago Docker 1.13 was released, and I was tasked with replicating a client's Kubernetes deployment on Swarm, more specifically with testing running Compose on Swarm.
And it was a dream!
All our apps were already dockerised, and all I had to do was make a few modifications to an existing compose file that I had used for testing prior to the said deployment on Kubernetes.
And, with the ease with which I was able to expose our endpoints, manage volumes, handle networking, and deploy and tear down the setup, I in all honesty see no reason not to use Swarm. There is no mission-critical feature, or even incredibly-nice-to-have feature, in Kubernetes that I'm going to miss; except perhaps the Kubernetes Dashboard and Heapster, but even those have ready replacements: Weave Scope in place of the Dashboard (admittedly not as pretty), and I could easily set up my own ELK stack to monitor my containers.
The moment it dawned on me how simple Swarm is was when I realised that all I had to do to expose an nginx service publicly was to publish the ports in my compose file. It hit me again when I attempted to create a number of replicas for the nginx service, fully expecting to run into the familiar Kubernetes error Pod Deploy - Failed to fit in any node - PodFitsHostPorts that has frustrated me before. But no. It worked! It just worked! And to boot, Docker intelligently load-balanced the requests from all my nodes (nginx was accessible from every node IP in the Swarm on ports 80/443) to the various containers in the service.
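For illustration, this is roughly what that looks like in a v3 compose file (a minimal sketch; the image and replica count are placeholders):

version: '3'
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
    deploy:
      replicas: 3   # Swarm's routing mesh load-balances port 80 across all replicas, reachable from every node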
Anyone who has used Kubernetes on any long-term, large-scale project knows what a pain this is. If you're not on AWS or GCE and can't create a LoadBalancer service, where Kubernetes will provision an external IP address for your service (which you have to pay for), and you're not okay with having to access your service on a random port in the default NodePort range of 30000-32767, then you have to deal with the fickle beast that is Kubernetes Ingresses. To come close to replicating what I had achieved on Swarm with three lines in my compose file, you'd have to do the following on Kubernetes:
- Create the Ingress controller
- Work on your Kubernetes YAML spec and define an Ingress resource, jumping around between the various documentation sources online (a sketch of such a resource follows below)
- Go through a bit of trial and error to get to where you can create your Ingress without errors
- Realise that to use paths, i.e. example.com/path, you have to create a "path" directory in the friggin /usr/share/nginx/html in the nginx container
Full docs here: Kubernetes Ingresses
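To give a flavour of the Ingress resource itself, a minimal one from that era looks something like this (a sketch only; the host, service name, and port are made up, and it assumes an nginx Ingress controller is already running):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /path
        backend:
          serviceName: nginx    # an existing Service to route to
          servicePort: 80

And that is on top of the controller and the Service it routes to, each of which is its own YAML file.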
In short, exposing services to the outside world in Kubernetes is a pain! With Docker, however, all it takes is:
services:
  nginx:
    ports:
      - "80:80"
      - "443:443"
...
And it works! It just works!
How to use volumes in Swarm
version: '3'
volumes:
  poc:
services:
  redis:
    volumes:
      - poc:/redis
How to use volumes in Kubernetes :(
- Create the PV (sample)
- Create the PVC (sample)
- Create the Deployment specifying your PVC, sadness (sample); a sketch of the first two steps follows below
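For comparison with the seven-line compose file above, here is roughly what the first two steps look like (a minimal sketch using a hostPath volume; the names, size, and path are invented for illustration):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: poc-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/poc    # node-local path, fine for a demo, not for production
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: poc-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

And the Deployment still has to reference poc-pvc in its volumes section and mount it with volumeMounts before the container sees anything at /redis.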
Configs
I am incredibly appreciative of how easily I can eyeball my entire deployment in Swarm: ports, volumes, services, dependencies, images and so on, as all the config is in one docker-compose.yml file of reasonable length, as opposed to the countless files covering everything from PVs, PVCs, Deployments, StatefulSets and more in Kubernetes. You can define everything in one file in Kubernetes, but it won't do much for readability.
[Think how many lines you need to pore through to get to the container image you're using]
Deploying and cleaning up
You have to run kubectl create -f more than once. You know it. I know it. (Unless of course you put everything into one file and trade your readability for convenience.)
With Compose on Swarm, however, all you have to do is:
docker stack deploy --compose-file=docker-compose.yml <stack-name>
You could delete your app's entire namespace in Kubernetes to clean up, but what if that's not what you want? Or what if you didn't have the foresight to deploy your app in a separate namespace? You'd again have to run a number of kubectl delete -f commands to delete everything. You could run kubectl delete -f on a single directory with all your app's Kubernetes config files in there. You could do that.
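That workflow looks roughly like this (a sketch; the k8s/ directory name is made up):

kubectl create -f k8s/    # create every resource defined in the directory's manifests
kubectl delete -f k8s/    # ...and later tear them all down again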
Or you could use Swarm and simply run:
docker stack rm <stack-name>
And have a life.
BONUS: Pure Docker goodness
It is beyond satisfying to use Docker and only Docker. To spin up a fresh VM, install Docker and only Docker, and be able to do everything you need. I have spent countless hours writing Salt files and Ansible playbooks to automate installing Kubernetes. You could use kargo, or kops, but all I have to do to start a Swarm cluster is install Docker and run docker swarm init. What more could anyone want!
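The whole cluster bootstrap, end to end, is roughly this (a sketch; the IP address is a placeholder, and the worker token comes from the init output):

# on the first node
docker swarm init --advertise-addr 192.168.1.10
# on every other node, using the token printed by init
docker swarm join --token <worker-token> 192.168.1.10:2377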
What Kubernetes could do:
Humans should not have to write/read config files. If there were a way I could easily deploy a Kubernetes cluster (hint: make up-to-date repositories for your software available from the distros' repos) and not have to write any configs, that would be great.
Most of what you mention is on the operations side of things. If you use managed Kubernetes, and an Ingress controller and Storage Classes have been set up for you, all that is left is managing the admittedly huge amount of YAML and all the boilerplate that comes with it. There's kompose (mentioned above), which has been worked on more actively recently, and that basically does what you want above. I know there are also other efforts by various people in the community to make deployment definitions easier.
Generally, though, I agree that the UX is bad in Kubernetes; there's a general lack of "making it easy for the user", and also lots of confusion around where to find the fitting documentation (e.g. with an NGINX Ingress controller you have to look in three different places, at the right commits for your deployed release, to find the right flags you can set). It seems there never was a focus on this. It's something we as a community need to work on more.
What I fear with Docker Swarm is, on one side, the lock-in, and with that being helplessly exposed to any decision Docker makes in the future, whether that's what you need or not. On the other side, Kubernetes has a reason for its complexity: the power it gives you with things like all kinds of other service types (e.g. headless services) makes it able to cope with lots of strange use cases that are out of the ordinary.