It was as much personal "taste" as anything, and I would describe the choice as similar to preferring JSON over XML.
For whatever reason, kubernetes just irritates me. I find it unpleasant to use. And I don't think I'm unique in that regard.
I feel the same. I feel like it's a me problem. I was able to build and run massive systems at scale and never used kubernetes. Then, all of a sudden, around 2020, any time I wanted to build or run anything at scale, everyone said I should just use kubernetes. And then when I wanted to do anything with docker in production, not even at scale, everyone said I should just use kubernetes.
Then there was a brief period around 2021 where everyone - even kubernetes fans - realised it was being used everywhere, even when it didn't need to be. "You don't need k8s" became a meme.
And now, here we are, again, lots of people saying "just use k8s for everything".
I've learned it enough to know how to use it and what I can do with it. I still prefer to use literally anything else apart from k8s when building, and the only time I've ever felt k8s has been really needed to solve a problem is when the business has said "we're using k8s, deal with it".
It's like the JavaScript or WordPress of the infrastructure engineering world - it became the lazy answer, IMO. Or, taking the me-problem angle: I'm just an aged engineer moaning at having to learn new solutions to old problems.
I also re-investigated containerization - weighing Docker Swarm vs K3s - and settled on Docker Swarm.
I’ve hated it ever since. Swarm is a PITA to use and has all kinds of failure modes that are different from regular old Docker Compose.
I’ve considered migrating again - either to Kubernetes, or just back to plain Docker - but haven’t done it. Maybe I should look at Uncloud?
With k8s you write a bunch of manifests that are 70% repetitive boilerplate. But then there's something you need that can't be achieved with pure manifests, so you reach for Kustomize. But Kustomize doesn't actually do what you want, so you need to convert the entire thing to Helm.
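To illustrate the escalation, here's a minimal sketch of the Kustomize step (all names and values hypothetical):

    # base/kustomization.yaml
    resources:
      - deployment.yaml
      - service.yaml

    # overlays/prod/kustomization.yaml - patch the base per environment
    resources:
      - ../../base
    patches:
      - target:
          kind: Deployment
          name: myapp        # hypothetical app name
        patch: |-
          - op: replace
            path: /spec/replicas
            value: 5

And the moment you need real conditionals or loops across environments, that's when the bail-out to Helm happens.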
You also still need to spin up your k8s cluster, which itself consists of half a dozen pods just so you have something where you can run your service. Oh, you wanted your service to be accessible from outside the cluster? Well, you need to install an ingress controller in your cluster. Oh BTW, the nginx ingress controller is now deprecated, so you have to choose from a handful of alternatives, all of which have certain advantages and disadvantages, and none of which are ideal for all situations. Have fun choosing.
That’s not bad, but I want to spend more time trying new things or enjoying the results of my efforts than maintaining the underlying substrates. For that purpose, K8s is consistently too complicated for my own ends - and Uncloud looks to do exactly what I want.
And if you want to use more than one machine then you run `docker swarm init`, and you can keep using the Compose file you already have, almost unchanged.
It's not a K8s replacement, but I'm guessing for some people it would be enough and less effort than a full migration to Kubernetes (e.g. hobby projects).
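For reference, the jump is roughly this (stack name and filename are placeholders):

    # one-time: turn your Docker engine into a single-node swarm
    docker swarm init

    # deploy the Compose file you already have as a stack
    docker stack deploy -c docker-compose.yml mystack

    # later: scale a service across whatever nodes have joined
    docker service scale mystack_web=3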
If you have a service with a simple compose file, you can have a simple k8s manifest to do the same thing. Plenty of tools convert directly between the two (including kompose, which k8s literally hands you: https://kubernetes.io/docs/tasks/configure-pod-container/tra...)
Frankly, you're messing up by including kustomize or helm at all in 80% of cases. Just write the yaml (agreed, it's tedious boilerplate - the manifest format is not my cup of tea) and be done with the problem.
And no - you don't need an ingress. Just spin up a nodeport service, and you have the literal identical experience to exposing ports with compose - it's just a port on the machines running the cluster (any of them - magic!).
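A rough sketch of that equivalence, with hypothetical names and ports:

    # deployment.yaml - roughly "image:" plus "restart: always" from compose
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: nginx:1.27
              ports:
                - containerPort: 80
    ---
    # service.yaml - roughly "ports: - 30080:80" from compose
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: NodePort
      selector:
        app: web
      ports:
        - port: 80
          targetPort: 80
          nodePort: 30080   # must sit in the cluster's NodePort range by default

kubectl apply -f on those two and you're in the same place as docker compose up with a ports: mapping.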
You don't need to touch an ingress until you actually want external traffic using a specific hostname (and optionally tls), which is... the same as compose. And frankly - at that point you probably SHOULD be thinking about the actual tooling you're using to expose that, in the same way you would if you ran it manually in compose. And sure - arguably you could move to gateways now, but in no way is the ingress api deprecated. They very clearly state...
> "The Ingress API is generally available, and is subject to the stability guarantees for generally available APIs. The Kubernetes project has no plans to remove Ingress from Kubernetes."
https://kubernetes.io/docs/concepts/services-networking/ingr...
---
Plenty of valid complaints for K8s (yaml config boilerplate being a solid pick) but most of the rest of your comment is basically just FUD. The complexity scale for K8s CAN get a lot higher than docker. Some organizations convince themselves it should and make it very complex (debatably for sane reasons). For personal needs... Just run k3s (or minikube, or microk8s, or k3d, or etc...) and write some yaml. It's at exactly the same complexity as docker compose, with a slightly more verbose syntax.
Honestly, it's not even as complex as configuring VMs in vSphere or Citrix.
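Concretely, the k3s path is about this short (the install one-liner is from the k3s docs; app.yaml stands in for your manifests):

    # install k3s - single-node cluster, kubectl included
    curl -sfL https://get.k3s.io | sh -

    # same rhythm as 'docker compose up -d'
    sudo k3s kubectl apply -f app.yaml
    sudo k3s kubectl get pods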
https://kubernetes.io/docs/concepts/services-networking/serv...
You might need to redefine the default NodePort range of 30000-32767. In fact, if you want to avoid the ingress abstraction and instead run a regular web server container of your choice to act as one (maybe you just prefer a config file, maybe that's what your legacy software is built around, maybe you need/prefer Apache2, go figure), you'd probably want to be able to run it on 80 and 443. Or 3000 or 8080 for some other software, out of convenience and simplicity.
Depending on which K8s distro you use, that's thankfully not insanely hard to change: https://docs.k3s.io/cli/server#networking But again, it's kind of going against the grain.
As for grabbing 443 or 80, most distros support specifying the port in the service spec directly, and I don't think it needs to be in the range of the reserved nodeports (I've done this on k3s, worked fine last I checked, which is admittedly a few years ago now).
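On k3s, for example, that looks something like this (the widened range and the port numbers are illustrative, not a recommendation):

    # widen the NodePort range at server start so low ports are allowed
    k3s server --service-node-port-range=80-32767

    # then a Service can claim 443 on every node directly
    apiVersion: v1
    kind: Service
    metadata:
      name: web-tls
    spec:
      type: NodePort
      selector:
        app: web
      ports:
        - port: 443
          targetPort: 8443   # hypothetical container port
          nodePort: 443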
As you grow to more than a small number of exposed services, I think an ingress generally does make sense, just because you want to be able to give things persistent names. But you can run a LONG way on just nodeports.
And even after going with an ingress - the tooling here is pretty straightforward. MetalLB (load balancer) and nginx (ingress, reverse proxy) don't take a ton of time or configuration.
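As a sketch of how little config that is - assuming MetalLB's v1beta1 CRDs and an address range on your LAN that you control:

    # metallb-pool.yaml - hand MetalLB some LAN IPs to assign
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: default-pool
      namespace: metallb-system
    spec:
      addresses:
        - 192.168.1.240-192.168.1.250
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: default
      namespace: metallb-system
    spec:
      ipAddressPools:
        - default-pool
    ---
    # ingress.yaml - give a Service a hostname via the nginx ingress
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web
    spec:
      ingressClassName: nginx
      rules:
        - host: web.example.com     # hypothetical hostname
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: web
                    port:
                      number: 80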
As someone who was around when something like a LAMP stack wasn't "legacy", I think it's genuinely less complicated to set up than those old configurations. Especially because once you get it right in the yaml once, recreating it is very, very easy.
The network is complicated by the overlay network, so "normal" troubleshooting tools aren't super helpful. Storage is complicated by k8s wanting to fling pods around so you need networked storage (or to pin the pods, which removes almost all of k8s' value). Databases are annoying on k8s without networked storage, so you usually run them outside the cluster and now you have to manage bare metal and k8s resources.
The manifests are largely fine, outside of some of the more abnormal resources like setting up the nginx ingress with certs.
> "Especially in-house on bare metal."

was what I was responding to. It's not the app management that becomes a pain, it's the cluster management, lifecycle, platform API deprecations, etc.
You would not be able to operate hundreds or thousands of nodes without hitting operational complexity, and k8s helps you a lot here.
I have struggled to get things like this stood up, and I hit many footguns along the way.
The clear target of this project is a k8s-like experience for people who are already familiar with Docker and docker compose but don't want to spend the energy to learn a whole new thing for low stakes deployments.
A normal person wouldn't think 'hey, let's use k8s for the low stakes deployment over here'.
I'm afraid I have to disappoint you.
K3s is just a repackaged, simplified k8s distro. You get the same behavior and the same tools as you have any time you operate an on-premises k8s cluster, and these, in my experience, are somewhere between good and excellent. So I can't imagine what you have in mind here.
"It's still essentially adding few hundred thousand lines of code into your infrastructure"
Sure. And they're all there for a reason: it's what one needs to orchestrate containers via an API, as revealed by a vast horde of users and years of refinement.
...the fact that it's still k8s, which is a mountain of complexity compared to nearly anything else out there?
It seems that way, but in reality "resource" is a generic concept in k8s. K8s is a management/collaboration platform for "resources", and everything is a resource. You can define your own resource types too. And who knows, maybe in the future these won't be containers or even Linux processes? It would still work given this model.
But now, what if you really just want to run a bunch of containers across a few machines?
My point is, it's overcomplicated and abstracts too heavily. Too smart, even... I don't want my co-workers defining our own resource types; we're not a google-scale company.
Since k8s is very effective at running a bunch of containers across a few machines, it would appear to be exactly the correct thing to reach for. At this point, running a small k8s operation, with k3s or similar, has become so easy that I can't find a rational reason to look elsewhere for container "orchestration".