- From the provided question by WHR I can definitely see how Scandinavian countries rank so high. Being Danish myself, my answer would immediately go to long-term thinking and whether I would have a better life elsewhere, and to me the answer is a clear no; not financially, socially or politically. So yes, Denmark scores really high, but is it really measuring happiness? I don't know. That said, I don't think measuring how often we laugh, among other things, is a better metric. I can be perfectly "happy" without being outwardly joyous; maybe contentment is a better word. Or well-being. But it isn't as catchy, I guess.
- After self-hosting, our builds ended up so fast that what we were actually waiting for was GitHub scheduling our agents, rather than the job running. It sucked a bit, because we'd optimized it so much, but at the 90th percentile we saw that it took 20-30 seconds for GitHub to schedule the jobs, measured from when the commit hit the branch to the webhook being sent.
- We chose GitHub Actions because it was tied directly to GitHub, providing the best pull-request experience etc. We didn't really use GitHub Actions templating, as we had our own stuff for that, so the only thing GitHub Actions actually had to do was start, run a few light jobs (the CI was technically run elsewhere) and then report the final status.
When you've got many hundreds of services, managing these in Actions YAML itself is no bueno. As you mentioned, having the option to actually run the CI/CD yourself is a must. Having to wait 5 minutes plus many commits just to test an action drains you very fast.
Granted, we did end up making the CI so fast (~1 minute with dependency cache, ~4 minutes without) that we saw devs running their setup less and less on their personal workstations for development. Except when GitHub Actions went down... ;) We used self-hosted Jenkins before, and it was far more stable, but a pain to maintain and understand.
- It is xD On the outside it feels like a product held together with duct tape, wood glue and prayers.
- We've self-hosted GitHub Actions in the past, and self-hosting doesn't help all that much with the fragile part. For GitHub, the fragility is just as much in triggering the actions as in running them. ;) I hope the product gets some investment, because it has been unstable for so long that on the inside it must just be business as usual by now. GitHub has by far the worst uptime of any SaaS tool we use at the moment, and it isn't even close.
> Actions is down again, call Brent so he can fix it again...
- Sounds good Solomon, I look forward to seeing how it goes in about a year when I tackle our CI again ;)
Best of luck, and thx for taking my harsh feedback in stride!
- As someone who has used Dagger a lot (a former daggernaut / ambassador; I dropped off after the LLM features were announced and I was changing jobs at the time; implemented it at a previous company across 95% of services; built the Rust SDK), the approach was and is amazing for building complex build chains.
It serves a place where a dockerfile is not enough, and CI workflows are too difficult to debug or reason about.
I do have some current problems with it though:
1. I don't care at all about the LLM agent workflows. I get that it is possible, but the people that chose Dagger for what it was are not the same audience that runs agents like that. I can't choose Dagger currently, because I don't know if they align with my interests as an engineer solving specific problems where I work (delivering software, not running agents).
2. I advocated for modules before they were a thing, but I never implemented them. It is too much magic; I want to write code, not a DSL that looks like code. Dagger is already special in that regard, and modules take it a step too far. You can't find it in their docs anymore, but Dagger can be written with just a .go, .py or .rs file: simply take in Dagger as a dependency and build your workflow (see the sketch at the end of this comment).
3. Too complex to operate. Dagger doesn't have runners currently, and it is difficult to run a production CI setup yourself without running it inside the actions themselves, which can be disastrous for build times; Dagger often leads you into using quite a few images, so having a cache is a must.
Dagger needs to choose and execute; not having runners, even when we were willing to throw money at them, was a mistake IMO. Love the tool, the team, the vision, but it is too distracted, magical and impatient to pick up at the moment.
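To illustrate point 2, here is roughly what that plain-dependency style looks like with the Rust SDK. A minimal sketch, assuming the `dagger-sdk`, `tokio` and `eyre` crates and a local container runtime; the base image and commands are placeholders, not a definitive recipe:

```rust
// Drive a containerized build from an ordinary Rust binary that simply
// depends on dagger-sdk -- no modules, no DSL, just code.
#[tokio::main]
async fn main() -> eyre::Result<()> {
    dagger_sdk::connect(|client| async move {
        // Mount the current repository into a build container.
        let src = client.host().directory(".");

        let out = client
            .container()
            .from("rust:1.80") // placeholder base image
            .with_mounted_directory("/app", src)
            .with_workdir("/app")
            .with_exec(vec!["cargo", "test"]) // placeholder build step
            .stdout()
            .await?;

        println!("{out}");
        Ok(())
    })
    .await?;

    Ok(())
}
```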
- It did happen, and Cloudflare should learn from it, but not just the technical reasons.
Instead of focusing on the technical why, they should answer how such a change bubbled out to cause such a massive impact.
Why: The proxy failed requests.
Why: Handlers crashed because of OOM.
Why: ClickHouse returned too much data.
Why: A change was introduced that doubled the amount of data.
Why: A central change was rolled out immediately to all clusters (single point of failure).
Why: There is no exemption or standard operating procedure (gate) for releasing changes to the hot path of Cloudflare's network infra.
While the ClickHouse change is important, I personally think it is crucial that Cloudflare tackles the process, and possibly gates/controls rollouts for hot-path systems, no matter what kind of change it is; at their scale that should be possible. But that is probably enough backseat driving. To me it seems like a process issue more than a technical one.
- It depends on the infrastructure you're running on. There was a post yesterday going fairly deep into how you do such calculations: https://authress.io/knowledge-base/articles/2025/11/01/how-w...
You probably cannot achieve this with a single node, so you'll at least need to replicate it a few times to get past the normal 2-3 9s you get from a single node. But then you've got load balancers and DNS, which can also be single points of failure, as seen with Cloudflare.
Depending on the database type and choice, it varies. If you've got a single node of Postgres, you can likely never achieve more than 2-3 9s (AWS guarantees 3 9s for multi-AZ RDS). But with multi-master CockroachDB or the like, you can maybe achieve 5 9s just on the database layer, or by using Spanner. You'll basically need 5 9s in every layer going to and from your app and data, which means quite a bit of redundancy. The database and DNS are the most difficult.
A reliable DNS provider with 5 9s of uptime guarantees -> multi-master load balancers, each with 3 9s -> each load balancer serving 3 or more apps, each with 3 9s of availability, going to a database (or databases) with 5 9s.
This page from Google shows their uptime guarantees for Bigtable: 3 9s for a single region with one cluster, 4 9s for multi-cluster, and 5 9s for multi-region:
https://docs.cloud.google.com/architecture/infra-reliability...
In general it doesn't really matter what you're running; it is all about redundancy, whether that is instances, cloud vendors, regions, zones, etc.
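As a back-of-the-envelope sketch of how those 9s compose (my own illustration, not from the linked posts): serial dependencies multiply availabilities, while N redundant replicas fail only when all N fail at once, assuming independent failures (which is generous in practice):

```rust
/// Availability of a serial chain where every component must be up.
fn serial(components: &[f64]) -> f64 {
    components.iter().product()
}

/// Availability of n independent replicas, each with availability `a`;
/// the tier is only down when every replica is down simultaneously.
fn redundant(a: f64, n: u32) -> f64 {
    1.0 - (1.0 - a).powi(n as i32)
}

fn main() {
    // Three 3-nines app nodes together reach roughly nine 9s...
    let apps = redundant(0.999, 3);
    // ...but the full serial chain drags the total back down:
    // DNS (5 9s) -> load balancer pair -> apps -> database (5 9s).
    let lb = redundant(0.999, 2);
    let total = serial(&[0.99999, lb, apps, 0.99999]);
    // Prints roughly 0.999979, i.e. ~99.998%: still short of five 9s.
    println!("apps: {apps:.9}, total: {total:.6}");
}
```

The two 5-9s serial hops cap the whole chain well below five 9s, which is exactly why every layer needs its own redundancy.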
- I had a brief look at GreptimeDB, and I'd like to give a little bit of feedback on your funnel. It is clear that your product marketing is targeting business folks rather than developers. That 3-minute video on the frontpage was next to useless for me. Also very clearly AI-generated.
Having stats is nice, but I am not choosing your product because of stats. I actually think GreptimeDB is exactly what I am looking for, i.e. a Humio / Falcon LogScale alternative, but I had to do some digging to actually infer that.
Your material doesn't highlight what sets you apart from the competition, if you want to target developers, that is. Which you might not; I don't know.
I want to debug issues using free-text search, and I want to be able to aggregate the stats I care about on demand.
- YouTube even has a feature now where, if you skip, it will jump over sections that other people skip. Which in practice does the same thing as SponsorBlock, except that you have to press skip ;)
- There is always a "better" thing. I do think it is fine to have a bit of stability in the frontend space. Should React stay the default forever? Probably not, but it is fine if it stays that way for a while.
React is a good-enough choice for a lot of problems; heck, going without a framework is often a good-enough choice. We don't always have to choose the "best" option, because what we value in it might not actually matter much next to other metrics. Signals might have performance, Elm elegance and purity, etc. But for 95% of problems, and teams, React is just fine.
A bonus is that I can come back to my project in a year, and not have to rewrite it because everything changed since then.
In Danish we say
> Stop mens legen er god
Stop while you're still going strong; roughly, quit while you're ahead. React is plenty equipped to solve a lot of problems; it doesn't need to solve all of them.
- Chezmoi has been a blessing to use. It is one of the only tools I've used that has been able to survive me neglecting it for months and then getting back to it. I'd love a more interactive diff for when my dotfiles have drifted too much, but otherwise it is perfect for my needs.
- I was working at a small farm shop at some point where we sold turmeric and ginger smoothies. We had to label them clearly and restrict sales to pregnant women, young kids and the elderly, because large doses can be dangerous. As far as I recall, both are natural blood thinners.
Edit: in Europe
- I felt the same when implementing OpenID Connect flows according to spec. It uses the browser in creative ways ;) Especially the device flow: absolutely insane complexity for what it is.
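For a taste of that, here is a heavily reduced sketch of the device-flow polling dance (RFC 8628) in Rust, assuming `reqwest` (with the `json` feature), `serde_json` and `tokio`. The endpoints and client id are hypothetical placeholders, and it only handles `authorization_pending`; a real implementation also has to handle `slow_down`, code expiry, the server-supplied interval, and token validation:

```rust
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let http = reqwest::Client::new();

    // 1. Ask the authorization server for a device code + user code.
    let device: serde_json::Value = http
        .post("https://issuer.example/oauth/device/code") // placeholder
        .form(&[("client_id", "my-client"), ("scope", "openid")])
        .send()
        .await?
        .json()
        .await?;

    // 2. The user completes login on a *different* device by visiting
    //    the verification URI and typing in the user code.
    println!(
        "Visit {} and enter code {}",
        device["verification_uri"].as_str().unwrap_or_default(),
        device["user_code"].as_str().unwrap_or_default()
    );

    // 3. Meanwhile we poll the token endpoint until the user approves.
    loop {
        tokio::time::sleep(Duration::from_secs(5)).await;

        let token: serde_json::Value = http
            .post("https://issuer.example/oauth/token") // placeholder
            .form(&[
                ("grant_type", "urn:ietf:params:oauth:grant-type:device_code"),
                ("device_code", device["device_code"].as_str().unwrap_or_default()),
                ("client_id", "my-client"),
            ])
            .send()
            .await?
            .json()
            .await?;

        if token["error"] == "authorization_pending" {
            continue; // user hasn't approved yet, keep polling
        }
        println!("id_token: {}", token["id_token"]);
        break;
    }
    Ok(())
}
```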
- I actually wanted to ask you about this at our last meetup (Rust Aarhus), so it's nice to see it on Hacker News. It did seem you switched away from Flutter ;)
How is shipping egui apps vs Flutter? I'd imagine that shipping a Rust integration with Flutter in particular might be a bit of a pain.
- It is weird, this is their main website: https://hypr.land/
It has a demo; the site submitted here seems to only be for the account / payments.
- I've got absolutely no problem with them taking donations / having a premium option. But it is very difficult to comment when there is no indication of what the premium experience brings.
- You can rent bigger runners from GitHub. They're still not as fast as third-party ones, but they take 5 minutes to set up and are still pay-as-you-go. I just see a lot of people using the default ones, which are very small.
- Could Rust be faster? Yes. But honestly, for our use case (shipping tools, services, libraries and what have you in production) it is plenty fast. That said, Rust definitely falls off a cliff once you get to a very large workspace (I'd say past 100k lines of code it begins to snowball), but you can design yourself out of that unless you build truly massive apps.
Incremental builds don't disrupt my feedback loop much, except when paired with building for multiple targets at once, e.g. Leptos, where both a wasm and a native build run. Incremental builds do, however, eat up a lot of space; a comical amount, even. I had a 28GB target/ folder yesterday from working a few hours on a Leptos app.
One recommendation: definitely upgrade your CI workers. Rust benefits from larger workers than, for example, the default GitHub Actions runners.
Compiling a fairly simple app, though one including DuckDB, which needs to be compiled, took 28 minutes on the default runners; on a 32-core machine we're down to around 3 minutes, which is fast enough that it doesn't disrupt our feedback loop.