- You’re severely overestimating the mental capacity of a large section of the population.
In fact, almost anyone is likely to make mistakes following your proposed scheme after little sleep or on a bad day.
- Look forward to this post being flagged
- Yes, Your Honor, I did convince this teenager to kill herself - but 150 people a year die from coconuts!
- Okay, but if you returned a wrapped error it’d at least be easier to debug.
- An unwrap like that in production code on the critical path is very surprising to me.
I haven’t worked in Rust codebases, but I have never worked in a Go codebase where a `panic` in such a location would make it through code review.
Is this normal in Rust?
- vlang is pretty much this
- It returns the zero value for all types, including slices (whose zero value is nil).
A nil slice behaves the same as an empty slice, which is why the rest of the code works as it does.
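A small Go snippet demonstrating the behaviour described above:

```go
package main

import "fmt"

func main() {
	// The zero value of a slice is nil...
	var s []int
	fmt.Println(s == nil) // true
	fmt.Println(len(s))   // 0

	// ...but it behaves exactly like an empty slice:
	for range s {
		// never executes
	}
	s = append(s, 1) // append works fine on a nil slice
	fmt.Println(len(s)) // 1
}
```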
- > But it's not easier at all, and learning curve just moves to another place.
Hard disagree. Go has its sharp corners, but they don’t even approach the complexity of Rust’s borrow checker alone, let alone all the other complexity of the Rust ecosystem.
- The fact that certain data centres are being proposed or built in areas with water issues may be bad, but it does not imply that all AI data centres are water-guzzling drain holes killing the planet, which is the point you were (semi-implicitly) making in the article.
- The value prop here is for existing projects in C or C++, as is made abundantly clear in the linked article
- > I suspect that LLMs are better at classifying novel vs junk papers than they are at creating novel papers themselves.
Doubt
LLMs are experts in generating junk. And generally terrible at anything novel. Classifying novel vs junk is a much harder problem.
- Most phishing emails are so bad, it’s quite terrifying when you see a convincing one like this.
Email is such an utter shitfest. Even tech-savvy people fall for phishing emails; what hope do normal people have?
I recommend people save URLs in their password managers, and get in the habit of auto-filling. That way, you’ll at least notice if you’re trying to log into a malicious site. Unfortunately, it’s not foolproof, because plenty of sites ask you to randomly sign into different URLs. Sigh…
- Stopped reading after realising this is written by ChatGPT
- Being a founder is a completely different situation which the article is explicitly not talking about.
Although, frankly, even as a founder, 100-hour 7-day weeks aren’t right for the vast majority of people. Clearly it worked for you, which is great, but 99% of people do not have that level of energy, and furthermore are mentally unable to withstand the sacrifices such a schedule imposes on other aspects of life.
- What’s the point in having a house if you only spend one day in it, which realistically you will spend doing chores and sleeping?
- Maybe it’s just me having low energy levels, but I can’t fathom working 996 while continuing to do focused, deep work consistently.
At the moment I work 9–5 with a few meetings per day, so maybe 5–6 hours of focused work, and I’m mentally exhausted by the end.
- “I'm 3x more creative at home than in the office”
I think you mean “productive” (and even that would be arguable).
- > What’s going to happen in the next 5 years? Will my skills be relevant? How do I truly add value with AI getting smarter in every way? How does it change life for me and my family?
Here is how I approach this - this might be a coping mechanism, but it’s certainly helped me personally.
LLMs are hugely impressive, no-one is denying that. But they have already been out for nearly 3 years at this point, and there are still massive gaps in their functionality that mean, in their current state, they are nowhere near being able to take over highly skilled work (e.g. software engineering at senior+ levels) from humans. They can handle grunt work well, but are unable to go beyond that: they operate at the level of a (poor) junior.
LLMs remind me to some extent of journalists and the Gell-Mann Amnesia effect. When I ask an LLM a question in an area I am not an expert in, I’m impressed. On the other hand, when I ask it something I am an expert in, I am almost always disappointed. I don’t bother using them for complex work any more, because they lie and hallucinate too much.
Furthermore, the rate of progress appears to be slowing.
All of this gives me some hope that my own life will not be completely ruined as a result of this technology.
- Into my heart, an air that kills,
from yon far country blows.
What are those blue remembered hills?
What spires, what farms are those?
That is the land of lost content.
I see it shining plain.
Those happy highways, where I went,
and cannot go again.
- pping already exists too. It’s a bit of a crowded market…
- I feel like Rust just isn’t stable or mature enough as a language for moving Linux towards it to make sense.
Currently feels like you need a PhD in programming languages to use it effectively. Feels like Haskell in many ways.
- You are the one instigating the nerdfight
- Current-gen AI can write obvious code well, but fails at anything that involves complexity or subtlety in my experience
- This. Great, AI can produce code. But it produces code without inducing understanding of the code in the person who wrote (or rather supervised the production of) it, which is half the point.
At some point AI will probably be good enough that this won’t matter. But it feels like we’re still a long way off that.
- Seek professional help
I never said otherwise. I said that many are incapable, not most