- Honest question. Why is a Rust rewrite of coreutils getting traction? Nobody thought it was a good idea to rewrite coreutils in Go, Java, Python, C++, etc. It can’t just be memory safety.
- Given the site where this is posted and the screenshot, is the author an engineer turned fiction writer? Kudos if true. Posting these must take a lot of courage.
- As a former employee, the engineering culture at Google gives me old-school hacker vibes, so users are very much expected to “figure it out” and that’s somewhat accepted (and I say this with fond memories). It’s no surprise the company struggles with good UX.
- LLMs are good at in-distribution programming, so inventing a new language just for them probably won’t work much better than languages they were already trained on.
If you could invent a language that is somehow tailored for vibe coding _and then_ produce a sufficiently high-quality corpus of it to train models on, that would be something.
- This is painfully clear once you’ve worked in a non-first-world country. Everybody is just working a job.
- > The guy is a coder through and through.
I’d be proud if someone said that about me one day. I hope Mitchell shares the sentiment.
- Some juniors do figure it out, but my experience has been that the bar for such juniors is a lot higher than pre-AI junior positions, so there is less opportunity for junior engineers overall.
- It’s possible to build this around protobuf. Google has a rich internal protobuf ecosystem that does this and supports querying large amounts of protobuf data without specifying schemas. They are only selectively open sourced. Have a look at riegeli if you are interested.
- I don’t understand this argument. It seems to originate from capnp’s marketing. Capnp is great, but the fact that protobuf can’t do zero copy is more of an academic issue than a practical one. Applications that want to use a schema always need their own native types that serialize to and deserialize from binary formats. For protobuf you either bring your own or use the generated type. For capnp you have to bring your own. So a fair comparison of serialization cost would compare:
native > pb binary > native
vs
native > capnp binary > native
If you benchmark this, the two formats are very close. Exact perf depends on payload. Additionally, one could write their own protobuf serializer without protoc if they really need to.
- I like this (paid) blog for its technical dives and HFT expertise, but the programming language opinions are a little click-baity and not worth the time.
- Not giving up the address space feels like an anti-feature. Among other things, it means that accessing the DONTNEED’ed memory no longer segfaults but silently returns garbage values instead, which is not ideal.
- Agree with the general point. I’d maybe add that a lot of the time it’s the scale of the profit that makes something a net negative for humanity, not the percentage-based margin. A lot of big tech companies started small and created a ton of positive value in their early stages, sometimes with a respectable margin, but once they reach billions in market capitalization and start chasing profits for investors, the positive societal value erodes.
- And another party for people who sign their emails with 3-letter usernames? :)
- I think the reason music went that way is that the industry is already hyper-competitive as is, so the “selection” process for talent shifts earlier. Perhaps CS will go that way eventually.
Anyway, when I made the comment, I was thinking it should be an elective and intended for people who either aren’t that familiar with Linux or want to become even more comfortable with it. There are certainly plenty of such students in my experience, myself included when I was in college.
Also just to be clear, this shouldn’t be just “being able to run Linux at home” level of material, but things like writing non-trivial applications using Linux subsystems and being able to troubleshoot them.
- There should be a course on Linux. Not your typical operating systems course where you write a toy OS and teach a bunch of theory, but rather a deep dive into various Linux subsystems, syscalls, tooling, etc.
- I probably should’ve qualified the “best medium” with something more specific. But I’ll submit two reasons why email is best for me and maybe some others:
- Email is the one thing that isn’t tied to any platform and ~always works, so it’s worth it to put in some effort into managing subscriptions / filters / labels / etc knowing that they will pay off indefinitely.
- It’s nice to consume content in the original format intended by the author, so I prefer receiving an article link in the email with a preview, and clicking through to read it. A dedicated reader invariably has problems rendering non-text content and doesn’t have all the features of a browser.
- Not a user (yet) but just want to say I concur that email is the best medium for RSS feeds, so kudos.
- I think the article glossed over a bit about how to interpret the table and the formula. The formula is only correct if you take into account the memory hierarchy, and think of N as the working set size of an algorithm. So if your working set fits into L1 cache, then you get L1 cache latency, if your working set is very large and spills into RAM, then you get RAM latency, etc.
I particularly like the “what to do for flat profiles” and “protobuf tips” sections. Similar advice distilled to this level is difficult to find elsewhere.