With single instances topping out at 20+ TB of RAM and hundreds of cores, I think this is likely a very under-explored option.
Even more so if you combine this with a cell-based architecture, splitting on users/tenants instead of splitting the service itself.
* Very flexible but rigid deployments (build anywhere, deploy from anywhere, and roll out deployments safely with zero downtime)
* Images don't randomly disappear (ran into this all the time with dokku and caprover)
* If something goes wrong, it heals itself as best it can
* Structured observability (i.e. logs, metrics, etc. are easy to capture, unify, and ship to places)
* Very easy to set up replicas to reduce load on services or provide safe failover
* Custom resource usage (I can give some pods higher or lower CPU/memory limits depending on scale and priority)
* Easy to self-host FOSS services (queues, dbs, observability, apps, etc.)
* Total flexibility when customizing ingress/routing. I can keep private services private and only expose public services
* Certbot can issue SSL certs instantly (I always ran into issues with this on other self-hosting platforms)
* Tailscale Operator makes accessing services a breeze (can opt-in services one by one)
* Everything is yaml, so easy to manipulate
* Adding new services is a cakewalk: as easy as creating a new yaml file (see the sketch below), building an image, and pushing it. I'm no longer disincentivized to spin up a new codebase for something small but worthwhile, because it's easy to ship.
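To make that concrete, here's roughly what one of those yaml files looks like. This is a generic sketch, not my actual manifest: the name myapp, the registry URL, and the ports are all hypothetical.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
            - name: myapp
              image: registry.example.com/myapp:latest  # hypothetical image
              ports:
                - containerPort: 8080
              resources:  # the per-pod limits mentioned above
                requests:
                  cpu: 100m
                  memory: 128Mi
                limits:
                  cpu: 500m
                  memory: 256Mi
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
    spec:
      selector:
        app: myapp
      ports:
        - port: 80
          targetPort: 8080

One kubectl apply -f myapp.yaml later and it's live.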
All in all, I spent many years trying "lightweight" deployment solutions (dokku, elastic beanstalk, caprover, coolify, etc.) that all came with the promise of "simple" but ended up being infinitely more of a headache to manage when things went wrong. Even something like heroku falls short because it's harder to just spin up "anything", like a stateful service or a random FOSS application. Dokku was probably the best, but it always felt somewhat brittle. Caprover was okay. And coolify never got off the ground for me. Don't even get me started on elastic beanstalk.

I would say the biggest downside is that managing databases is less rigid than using something like RDS, but the flip side is that my DB is far more performant and far cheaper (I own the CPU cycles! no noisy neighbors), and I still run daily backups to external object storage.
Once you get k8s running, it kind of just works. And when I want to do something funky or experimental (like splitting AI bots to separate pods), I can go ahead and do that with ease.
I run two separate k8s "clusters" (both single node) and I kind of love it. k9s (obs. tool) is amazing. I built my own logging platform because I hated all the other ones, might release that into its own product one day (email in my profile if you're interested).
If you need to deploy it elsewhere, you just install k3s/k8s or whatever and apply the yamls (except for stateful things like db).
It also handles name resolution via service names, restarts, etc.
It's amazing.
Configure the init flags to disable all controllers and other doodads, deploy them yourself with Helm. Helm sucks to work with but someone has already gone through the pain for you.
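If the cluster is k3s (mentioned upthread), the equivalent of those init flags can live in a config file. A sketch, assuming k3s's bundled component names:

    # /etc/rancher/k3s/config.yaml - keys here mirror the k3s CLI flags
    disable:
      - traefik        # bundled ingress controller
      - servicelb      # bundled load balancer
      - metrics-server

Then install your own versions, e.g. helm repo add traefik https://traefik.github.io/charts followed by helm install traefik traefik/traefik -n kube-system, so upgrades stay under your control.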
AI is GREAT at K8s, since K8s has GREAT docs that it has been trained on.
A good mental model helps: it's an API with a bunch of control loops.
Then you are off to the races. You can add more nodes, etc. later to give it a try.
Use k9s (not a misspelling) and Headlamp to observe your cluster if you need a GUI.
It was a Rails app, so it was easy to fall into N+1s, but also somewhat easy to fix them.
    def test_homepage_queries(django_assert_max_num_queries, client):
        with django_assert_max_num_queries(10):
            assert client.get("/").status_code == 200
Or django_assert_num_queries to assert an exact number.

Probably some of the most valuable code I've ever written on a per-LOC basis lol.
Anyhow, merging that into a new project was always a fun day. But on the other side of the cleanup, the app stops falling down due to memory leaks.
    stories = get_stories(query)
which results in a SQL query like SELECT id FROM stories WHERE author = ?
with the '?' being bound to some concrete value like "Jim".

Then the framework will be used to do something like this:
    for id in stories:
        story = get_story_by_id(id)
        # do something with story
which results in N SQL queries with SELECT title, author, date, content FROM stories WHERE id = ?
and there's your N+1.

It's usually a big problem for database performance because each query carries additional overhead for the network round trip to the database server.
SQLite queries are effectively a C function call accessing data on local disk so this is much less of an issue - there's an article about that in the SQLite docs here: https://www.sqlite.org/np1queryprob.html
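The usual fix is to batch the N lookups into a single query. A minimal sketch with Python's sqlite3, reusing the stories table from the example above (it assumes the database and table already exist):

    import sqlite3

    conn = sqlite3.connect("stories.db")

    # The "1": fetch the matching ids.
    ids = [row[0] for row in
           conn.execute("SELECT id FROM stories WHERE author = ?", ("Jim",))]

    # Instead of N queries, one query with an IN list
    # (guard against an empty id list, which would be a syntax error).
    if ids:
        placeholders = ", ".join("?" for _ in ids)
        stories = conn.execute(
            "SELECT id, title, author, date, content"
            f" FROM stories WHERE id IN ({placeholders})",
            ids,
        ).fetchall()

(Here you could of course just select all the columns in the first query; real N+1s happen because the two steps live in different layers of the framework.)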
We ran into this while building, funnily enough, a database management app called DB Pro (https://dbpro.app). At first we were doing exactly that: query for all schemas, then for each schema query its tables, and then for each table query its columns. On a database with hundreds of tables it took ~3.8s.
We fixed it by flipping the approach: query all the schemas, then all the tables, then all the columns in one go, and join them in memory. That dropped the load time to ~180ms.
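Not our actual code, but the shape of it is something like this (assuming a Postgres-style information_schema; fetch_all is a hypothetical query helper returning rows as tuples):

    from collections import defaultdict

    # Three queries total, no matter how many tables the database has.
    schemas = fetch_all("SELECT schema_name FROM information_schema.schemata")
    tables = fetch_all("SELECT table_schema, table_name"
                       " FROM information_schema.tables")
    columns = fetch_all("SELECT table_schema, table_name, column_name"
                        " FROM information_schema.columns")

    # Join in memory: columns grouped per table, tables grouped per schema.
    cols_by_table = defaultdict(list)
    for schema, table, column in columns:
        cols_by_table[(schema, table)].append(column)

    tables_by_schema = defaultdict(list)
    for schema, table in tables:
        tables_by_schema[schema].append(
            {"name": table, "columns": cols_by_table[(schema, table)]})

    tree = {schema: tables_by_schema[schema] for (schema,) in schemas}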
N+1 is one of those things you only really “get” when you hit it in practice.
A naive ORM setup will often do one query to get the list of objects it needs, and then perform N queries, one per object, usually fetching each object individually by ID or key.
So for example, if you wanted to see "all TVs by Samsung" on a consumer site, it would do one query to figure out the set of items that match, and then, if say 200 items matched, it would do 200 queries to fetch those individual items.
ORMs are better at avoiding it these days, depending on the ORM or language, but it still can happen.
It used to be a very common pitfall - and often not at all obvious. You’d grab a collection of objects from the ORM, process them in a loop, and everything looked fine because the objects were already rehydrated in memory.
Then later, someone would access a property on a child object inside that loop. What looked like a simple property access would silently trigger a database query. The kicker was that this could be far removed from any obvious database access, so the person causing the issue often had no idea they were generating dozens (or hundreds) of extra queries.
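In Django terms (with hypothetical Post/Author models), the trap and the fix look like:

    # Each post.author on a fresh Post lazily fires its own SELECT:
    # 1 query for the posts, then N hidden queries inside the loop.
    for post in Post.objects.all():
        print(post.author.name)

    # Fix: tell the ORM up front that the related rows are needed.
    # select_related JOINs foreign keys into the same query;
    # prefetch_related runs one extra batched query for to-many relations.
    for post in Post.objects.select_related("author"):
        print(post.author.name)  # no extra queries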
I really wish there were a way to compose SQL so you could actually write the dumb/obvious thing and have it run a single query. I talked with a dev once who seemed to have the beginnings of a system that could do this. It leveraged async, put composable query-ish objects into a queue, kept track of which callers needed which results, merged and executed the single query, and then returned the results. Obviously far from generalizable for arbitrary queries, but it did seem to work.
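That's essentially the "dataloader" pattern popularized by GraphQL servers. A toy asyncio sketch of the queue-and-merge idea (fetch_stories is a stand-in for the single batched query):

    import asyncio

    class BatchLoader:
        def __init__(self, batch_fn):
            self.batch_fn = batch_fn  # async: list of keys -> {key: value}
            self.pending = {}         # key -> Future awaited by callers

        def load(self, key):
            # Callers write the dumb per-key lookup; we hand back a future
            # and merge everything requested before the next await point.
            if not self.pending:
                asyncio.ensure_future(self._flush())
            if key not in self.pending:
                self.pending[key] = asyncio.get_running_loop().create_future()
            return self.pending[key]

        async def _flush(self):
            batch, self.pending = self.pending, {}
            results = await self.batch_fn(list(batch))
            for key, fut in batch.items():
                fut.set_result(results.get(key))

    async def fetch_stories(ids):
        # Stand-in for: SELECT ... FROM stories WHERE id IN (...)
        return {i: f"story {i}" for i in ids}

    async def main():
        loader = BatchLoader(fetch_stories)
        # Three independent-looking lookups, one actual batched query.
        print(await asyncio.gather(loader.load(1), loader.load(2), loader.load(3)))

    asyncio.run(main())

Real implementations (e.g. the JS dataloader library) add per-tick scheduling, caching, and error handling, but the core trick is exactly that queue.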
e.g. for ActiveRecord there's ar_lazy_preloader[0] or goldiloader[1], which fix many N+1s by keeping track of a context: you load a set of Users in one go, and when you do user.posts it will do a single query for all of them, and when you then access post.likes it will load all likes for those, and so on. Or, if you get the records some other way, you add them to a shared context and then it works.
Doesn't solve everything, but helps quite a bit.
I was once responsible for migrating a legacy business app to Azure; the app had a local MSSQL server co-running alongside it (the same pattern Litestream uses).
As has been mentioned below, the app had been developed assuming local access (and thus <1ms latency), so it had a ton of N+1s everywhere.
This made it almost impossible to migrate/transition to another configuration.
So, if this style of app hosting doesn't take off and you're at all worried about it becoming a dead-end storage choice once you reach a certain scale, I'd recommend not doing this; otherwise your options will be very limited.
Then again - I bet you could get very, very far on a single box, so maybe it'd be a non-factor! :)