- > 3.4 Lazy fsync by Default
Why? Why do some databases do that? For better numbers in benchmarks? It would only be OK with a safer default, or at least with prominent documentation about it. But especially when you run stuff in a small cluster, you get bitten by things like that.
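As a minimal sketch of what "lazy fsync" trades away (plain Python file I/O, not any particular database's code):

```python
import os

def durable_append(path: str, record: bytes) -> None:
    # Write and force the data to stable storage before acknowledging.
    with open(path, "ab") as f:
        f.write(record)
        f.flush()             # flush the userspace buffer
        os.fsync(f.fileno())  # ask the kernel to persist to disk

def lazy_append(path: str, record: bytes) -> None:
    # "Lazy fsync": acknowledge after the write() call only. The kernel
    # may sit on the data in the page cache for seconds; a power loss in
    # that window silently drops writes the client believed were durable.
    with open(path, "ab") as f:
        f.write(record)
```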
- Sadly, I had hoped they would add:
> Off-site replication of guests for manual recovery in case of datacenter failure.
which would have been an actual killer feature.
- I don’t think that HTTP/3 is easier to implement than HTTP/1.1, especially since H3 is stateful where HTTP/1.1 is not. Especially not when everything should work correctly and securely, because the spec does not always cover these things. And multiplexing is quite hard to do, especially when you are also dealing with a state machine and each of your clients can be malicious.
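To illustrate the gap: a toy HTTP/1.1 request-head parser (a sketch that ignores pipelining, chunked bodies, and many edge cases). There is no connection-wide state to track here; an H3 endpoint additionally has to manage QUIC connection state, per-stream state, flow control, and QPACK dynamic tables for every possibly-malicious client.

```python
def parse_request_head(data: bytes):
    # An HTTP/1.1 request head is self-describing text: one request line,
    # then header lines, terminated by an empty line.
    head, _, _ = data.partition(b"\r\n\r\n")
    lines = head.split(b"\r\n")
    method, target, version = lines[0].split(b" ", 2)
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(b":")
        headers[name.strip().lower()] = value.strip()
    return method, target, version, headers
```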
- As far as I know, <b> and <i> are not deprecated at all. They are just not recommended for 95% of the use cases.
- You are joking, right? Writing C with LLMs… what could go wrong?!
- ingress-nginx is older than 5-7 years, though. In that time frame you would also have needed to update your Linux system, which most often gets hairy as well. The sad thing is just that the replacement is not there yet, and the Gateway API has a lot of drawbacks that might get fixed in the next release (e.g. working with cert-manager).
- Oof, does that also mean the business API is affected? To pull invoices and such. https://developer-docs.amazon.com/amazon-business/docs/downl... Not sure if that will be paid as well.
- GCP still can’t change our street address because of the D-U-N-S validation (of course the D-U-N-S record itself already uses our new address… and all other vendors are fine with it). How bad must their service be that they can’t change a fucking address? Oh, and the free billing support is horrible: always the same response, like ‘a special team is working on it’… yeah, sure, and they can’t fix an address for about a month. It’s worse since all our invoices use the old address, which in Germany is a fucking problem. Time to make a migration plan.
- I don’t think the language is unprofessional; it’s direct, and it states his opinion.
The one demanding it is the maintainer of KeePassXC. It would have been better to just close the issue, noting that this is a Debian-only problem and that he should install it that way.
- Just reading the blog post makes me wonder if they really saved 500k. How long did it take them to build the solution? How many people built it? How much does the new service cost while it’s running? How many ops hours does it need? The blog goes into so many details that I doubt they built it in less than a month. So maybe they will save money over the years, but they probably lost money while building the new solution.
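A rough back-of-envelope of that concern, with entirely made-up numbers, just to show what the post leaves out:

```python
# All figures below are hypothetical assumptions, not from the blog post.
engineers = 3
months_building = 2
cost_per_engineer_month = 20_000          # loaded cost, assumed

build_cost = engineers * months_building * cost_per_engineer_month
claimed_annual_savings = 500_000          # the headline number
new_service_annual_cost = 50_000          # hosting + ops hours, assumed

net_first_year = claimed_annual_savings - new_service_annual_cost - build_cost
print(f"build: ${build_cost:,}, net first year: ${net_first_year:,}")
# build: $120,000, net first year: $330,000 -- still positive with these
# assumptions, but the headline number should be netted against such costs.
```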
- Some hyperscalers even have managed services for that, which make cross-cluster ingress possible, among other things, including multi-cluster ingress across different regions that somewhat works together.
- Most camping places in France/Spain that do have CEE or Type E/F sockets might limit you to 6 A/10 A, FYI. It’s mostly to limit power draw, since you pay a flat fee.
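For reference, the math behind such a limit, assuming the usual 230 V European mains:

```python
voltage = 230  # V, typical European mains
for amps in (6, 10, 16):
    print(f"{amps} A breaker -> about {voltage * amps / 1000:.1f} kW continuous")
# 6 A -> ~1.4 kW, 10 A -> ~2.3 kW, 16 A -> ~3.7 kW
```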
- The California is basically the camping variant of the earlier T3 and of the present-day Multivan (T7). It has a long history and used different platforms (T4-T5) in the past. But it’s not a US thing; the first version was really close to a US-only model, though.
- The VSCode extension does not contain any affected packages; it’s just misleading.
- I do not like this kind of coverage. They always write about the VSCode extension, which has basically nothing to do with the bug.
It only ran the affected programs, of course, but it’s so stupid to even talk about VSCode in that case: if you used the affected Nx versions, you were affected no matter whether you used VSCode, WebStorm, or whatever IDE you like; if you used an unaffected Nx version, nothing happened, no matter which VSCode version you used.
- Most of the time the "far more complex setup" is actually easier than reimplementing Kubernetes with Ansible.
- Many people also underestimate how complex it is to satisfy uptime requirements, or to scale out local infrastructure once storage beyond 10/50/100 TB is involved (yes, a single disk can handle that, but what about bit rot, RAID setups, etc.).
It gets worse when you need more servers: your OCR process of course needs X CPUs, so on a beefy machine you can handle maybe 50 high-page-count documents, but then how do the machines talk to each other, etc.
Also, humans cost way more money than cloud services. If the cloud setup can be managed in about one day per month, you don’t need a dedicated person; with real hardware that day is not enough, and you soon need a dedicated person keeping everything up to date, etc.
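The staffing math behind that last point, again with hypothetical rates:

```python
# Hypothetical figures, only to illustrate the comparison.
hourly_rate = 100             # loaded cost of an ops engineer, assumed
cloud_hours_per_month = 8     # roughly the one day/month above
onprem_hours_per_month = 80   # patching, RAID, backups, hardware (assumed)

cloud_yearly = cloud_hours_per_month * hourly_rate * 12
onprem_yearly = onprem_hours_per_month * hourly_rate * 12
print(f"cloud: ${cloud_yearly:,}/yr vs on-prem: ${onprem_yearly:,}/yr in people time")
# cloud: $9,600/yr vs on-prem: $96,000/yr in people time alone
```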
- How can you build your containers in parallel over multiple machines? I’m not sure that a sh script can do that with GitHub.
I mean, yes, you can build it with native interop and AOT. But then you would lose the .NET benefits as well.