- I just use Podman's Kubernetes YAML as a compose substitute when running everything locally. This way it is fairly similar to production. Docker Compose seems very niche to me now.
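To illustrate, a minimal sketch (the file name, pod name, and image are placeholders, not from the original comment) of the kind of Kubernetes YAML Podman can run directly:

```yaml
# pod.yaml - run locally with:  podman kube play pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: local-stack
spec:
  containers:
    - name: web
      image: nginx:alpine        # placeholder image
      ports:
        - containerPort: 80
          hostPort: 8080         # published on the host by podman kube play
```

`podman kube play pod.yaml` brings the pod up, and recent Podman versions tear it down with `podman kube down pod.yaml`. Because it is ordinary Pod-spec YAML, roughly the same file can be applied to a real cluster, which is what keeps local and production similar.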
- Often you would study this type of material in Electrical or Computer Engineering.
- Sure! Feel free to check out Nthesis / download the CLI and give it a spin if you like. https://nthesis.ai. Happy to discuss what it does / how it works.
- https://nthesis.ai/public/hn-working-on
A tool for searching, filtering and chatting with the "What are you working on?" posts. Also has a visual map (UMAP) that clusters similar things together. Useful if you want to find specific things or better understand themes.
- I've had one situation in which I was able to get a 20X productivity boost. I needed to do a large number of similarish things. I carefully hand-coded one, created an AGENTS.md that explained everything in a lot of detail, and then went to town on generation.
Everything went well and I knew what to expect so reviewing the code was quick.
The experience was great. I was a 20X AI-boosted engineer for a bit, but I haven't had that situation since.
Anyway, I would say, try to find areas of the code that fit this model if you can. AI is phenomenal for this use case.
- https://nats.io
Not a drop-in replacement, but worth looking at.
- I've always found this sound, rational, ROI-driven approach to product management a little off the mark. Software isn't like real estate or investing in T-bills: you don't put $X into development and collect a nice 10% annualized return over the next 10 years, however seductive that thinking can be.
It is largely a "hits" business where 1% of the activities you do produce 99% of the revenue. The returns are non-linear, so there should be almost no focus on input estimation. If your feature only makes economic sense when it can be done in 3 months and not if it takes > 6 months, delete the feature.
- It will be entirely about trust. I don't think fakery is worth it for any company with a > $1B market cap, as trust is such a valuable commodity. It isn't like we are just going to end up with a single state broadcaster (at least, I hope not). However, it is going to favour larger, more established sources, which is unfortunate as well.
- From 1950 to 2005(ish) there were a small number of sources due to the enormous moat required to become a broadcaster. From 2005 to 2021, you could mostly trust video because the cost of casual fakery was prohibitive. Now that the cost of producing fake videos is near zero, I suspect we will return to a much smaller number of sources (though not as small as in the pre-YouTube era).
- Thanks! Yes, if you open the side panel, there is a tags area, you can filter by remote-<region> or onsite-<region>. The LLM riffs a little with these. If there is something specific that you would like, happy to make a tighter instruction.
- He mentioned in an interview that HashiCorp was just a corporate entity he had used as a teenager to do some contracting here and there. He and the other founder weren't that keen on using it, but the name stuck.
- At least Go didn't take the dark path of async/await keywords. In C# that is a real nightmare: you are forced into sync-over-async anti-patterns unless you are willing to rewrite everything. I'm glad Zig took this "colorless" approach.
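To illustrate the contrast (a sketch of my own, not from the original comment): in Go the very same blocking-style function can be called synchronously or launched concurrently, so there is no "colored" async variant splitting the codebase in two.

```go
package main

import (
	"fmt"
	"sort"
	"sync"
	"time"
)

// fetch is ordinary blocking code: no async keyword, no separate awaitable twin.
func fetch(id int) string {
	time.Sleep(10 * time.Millisecond) // stand-in for I/O
	return fmt.Sprintf("result-%d", id)
}

func main() {
	// Synchronous call: just call it.
	fmt.Println(fetch(0))

	// Concurrent calls: the same function, run in goroutines.
	var (
		mu      sync.Mutex
		wg      sync.WaitGroup
		results []string
	)
	for i := 1; i <= 3; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			r := fetch(i)
			mu.Lock()
			results = append(results, r)
			mu.Unlock()
		}(i)
	}
	wg.Wait()
	sort.Strings(results)
	fmt.Println(results) // [result-0's siblings, sorted: result-1 result-2 result-3]
}
```

In C#, by contrast, a blocking caller of an `async Task` method has to choose between `.Result`/`.Wait()` (the sync-over-async trap) or making itself async too, which propagates all the way up the call chain.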
- In addition to the search tools mentioned above, feel free to use https://nthesis.ai/public/hn-who-is-hiring. It has search (text / semantic), chat and extracts data from alternate viewpoints (e.g. business / role) and allows you to visualize a semantic map of those things. I hope it helps!
- The worst usage of AI is “content dilution” where you take a few bullet points and generate 5 paragraphs of nauseating slop. These days, I would gladly take badly written content from humans filled with grammatical errors and spelling mistakes over that.
- I suspect the text alone would be a lot smaller. Embeddings add a lot: 4 KB or more per entry regardless of the size of the text (e.g. 1,024 float32 dimensions is already 4 KB).
- At first, I was thinking the same but then realized this is over a full page of code. It isn't an insane rule of thumb at all.
At least we aren't talking about "clean code" levels of absurdity here: 5-20 lines with 0-2 parameters.
- The objective rate of improvement in programming languages has been slower than virtually every other field. Computers have gotten millions of times faster since the advent of C, but programming languages have arguably gotten maybe 10% better in that span.