- As somebody that "learned" C++ (Borland C++... the aggressively blue memories...) first at a very young age, I heartily agree.
Rust just feels natural now. Possibly because I was exposed to this harsh universe of problems early. Most of the stupid traps that I fell into are clearly marked and easy to avoid.
It's just so easy to write C++ that seems like it works until it doesn't...
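To make that concrete, here's a tiny sketch of the kind of trap I mean (a made-up example, not from any real codebase). It compiles cleanly and will usually appear to work:

```cpp
#include <iostream>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> names = {"alice"};

    // Grab a reference to the first element...
    const std::string& first = names.front();

    // ...then grow the vector. If this triggers a reallocation,
    // `first` silently becomes a dangling reference.
    names.push_back("bob");

    // Undefined behavior: this can "work" for years, then crash or
    // print garbage after an unrelated change shifts the allocation.
    std::cout << first << '\n';
}
```

Rust's borrow checker rejects the equivalent program at compile time; in C++ it's on you to remember which operations invalidate which references.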
- > the options are to build more software or to hire fewer engineers.
To be cheeky, there are at least three possibilities you are writing off here: we build _less_ software, we hire _more_ engineers, or things just kinda stay the same.
More on all of these later.
> I am not convinced that software has a growing market
Analysis of market dynamics in response to major technological shocks is reading tea leaves. These are chaotic systems with significant nonlinearities.
The rise of the ATM is a classic example. An obvious but naive predicted result would be fewer employed bank tellers. After all, they're automated _teller_ machines.
However, the opposite happened. ATMs drastically reduced the cost of running a bank branch (which previously required manually counting lots of cash). More branches, fewer tellers per branch... but the net result was _more_ tellers employed thirty years later. [1]
They are, of course, now doing very different things.
Let's now spitball some of those other scenarios above:
- Less "software" gets written. LLMs fundamentally change how people interact with computers. More people just create bespoke programs to do what they want instead of turning to traditional software vendors.
- More engineers get hired. The business of writing software by hand is mostly automated. Engineers shift focus to quality or other newly prioritized business goals, possibly enabled by LLM automation instead of, e.g., traditional end-to-end tests.
- Things stay mostly the same, employment- and software-wise. If software engineers are still ultimately needed to check the output of these things, the net effect could just be that they spend a bit less time typing raw code. They might work a bit less; attempts to turn everyone into an "LLM tech lead" managing multiple concurrent LLMs could go poorly. Engineers might mostly take the efficiency gains for themselves as recovered free-ish time (HN / Reddit, for example).
Or, let's be real, the technology could just mostly be a bust. The odds of that are not zero.
And finally, let's consider the scenario you dismiss ("more software"). It's entirely possible that making something cheaper drastically increases the demand for it. The bar for "quality software" could dramatically rise due to competition between increasingly LLM-enhanced firms.
I won't represent any of these scenarios as _likely_, but they all seem plausible to me. There are too many moving parts in the software economy to make any serious prediction on how this will all pan out.
1. https://www.economist.com/democracy-in-america/2011/06/15/ar... (while researching this, I noticed a recent twist to this classic story. Teller employment actually _has_ been declining in the 2020s, as has the total number of ATMs. I can't find any research into this, but a likely culprit is yet another technological shock: the rise of mobile banking and payment apps)
- The most critical skill in the coming era, assuming that AI follows its current trajectory and there are no research breakthroughs in e.g. continual learning, is going to be delegation.
The art of knowing what work to keep, what work to toss to the bot, and how to verify it has actually completed the task to a satisfactory level.
It'll be different from delegating to a human; as the technology currently sits, there is no point giving out "learning tasks". I also imagine it'll be a good idea to keep enough tasks for yourself to keep your own skills sharp, so if anything it's kinda the reverse.
- > Sometimes after a night’s sleep, we wake up with an insight on a topic or a solution to a problem we encountered the day before.
The current crop of models does not "sleep" in any way. The associated limitations on long-term task adaptation are obvious barriers to their general utility.
> When conversing with LLMs, I never get the feeling that they have a solid grasp on the conversation. When you dig into topics, there is always a little too much vagueness, a slight but clear lack of coherence, continuity and awareness, a prevalence of cookie-cutter verbiage. It feels like a mind that isn’t fully “there” — and maybe not at all.
One of the key functions of REM sleep seems to be the ability to generalize concepts and make connections between "distant" ideas in latent space [1].
I would argue that the current crop of LLMs is overfit on recall ability, particularly recall of their training corpus. The inherent trade-off is that they are underfit on "conceptual" intelligence: the ability to make connections between those distant ideas.
As a result, you often get "thinking-shaped objects", to paraphrase Janelle Shane [2]. It does feel like the primordial ooze of intelligence, but it is clear we are still several transformer-shaped breakthroughs away from actual (human-comparable) intelligence.
1. https://en.wikipedia.org/wiki/Why_We_Sleep 2. https://www.aiweirdness.com/
- Not really, no. The founders were not omniscient, but many of them publicly wrote about the problematic rise of political "factions" contrary to the general interest: https://en.wikipedia.org/wiki/Federalist_No._10
- > One thing that's been really off putting about the technology industry is how fake-it-till-you-make-it has become so pervasive.
It feels accidental, but it's definitely amusing that the models themselves are aping this ethos.
- The grid actually already has a fair number of (non-software) circular dependencies. This is why they have black start [1] procedures and run drills of those procedures. Or should, at least; there have been high profile outages recently that have exposed holes in these plans [2].
1. https://en.wikipedia.org/wiki/Black_start 2. https://en.wikipedia.org/wiki/2025_Iberian_Peninsula_blackou...
- Sure. Create a diamond-shaped polygon and revolve it around an axis.
Blender has methods and tools to _approximate_ doing this. It has a revolve tool... where the key parameter is the number of steps.
This is not a revolution; it's an approximation of a revolution built from a bunch of planar faces.
BREP as I understand it allows you to describe the surfaces of this operation precisely and operate further on them (e.g. add a fillet to the top edge).
Ditto for things like circular holes in objects. With Blender, you're fundamentally operating on a bunch of triangles, so fundamental and important solid operations must be approximated within that model (there's a concrete sketch at the end of this comment).
BREP has a much richer set of primitives. This dramatically increases complexity, but allows it to precisely model a much larger universe of solids.
(You can kinda rebuild the functionality that geometric kernels have using geometry nodes in Blender now. It's a lot of work, and the result is not a great user interface compared to CAD programs.)
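To put that approximation-versus-exact point in concrete terms, here's a rough sketch (my own toy code, not how Blender or any real geometric kernel is implemented) of the two ways a circular edge can be represented:

```cpp
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

const double PI = std::acos(-1.0);

// Mesh-style: a circle is approximated by `steps` straight segments.
// Pick too few and every downstream operation (fillets, offsets,
// toolpaths) inherits the error forever.
std::vector<std::pair<double, double>> approximate_circle(double r, int steps) {
    std::vector<std::pair<double, double>> verts;
    for (int i = 0; i < steps; ++i) {
        double a = 2.0 * PI * i / steps;
        verts.push_back({r * std::cos(a), r * std::sin(a)});
    }
    return verts;
}

// BREP-style: the edge *is* the analytic circle, so later operations
// can evaluate it exactly at any parameter value.
struct CircularEdge {
    double cx, cy, radius;
    std::pair<double, double> evaluate(double t) const {  // t in [0, 1)
        double a = 2.0 * PI * t;
        return {cx + radius * std::cos(a), cy + radius * std::sin(a)};
    }
};

int main() {
    auto mesh = approximate_circle(1.0, 16);  // 16 planar facets, forever
    CircularEdge exact{0.0, 0.0, 1.0};        // one exact primitive
    std::printf("mesh vertices: %zu\n", mesh.size());
    auto p = exact.evaluate(0.125);           // exact point at 45 degrees
    std::printf("exact point: (%.6f, %.6f)\n", p.first, p.second);
}
```

The real data structures are far richer than this, but the asymmetry is the point: the mesh threw away the exact geometry at creation time, while the BREP edge still knows it's a circle.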
- An analogy is the difference between vector and bitmap graphics.
CAD programs aren't just a different set of operations on the same data; they use an entirely different representation (b-rep [1] vs Blender's vertices, edges, and polygons).
These representations are much more powerful but also much more complex to work with. You typically need a geometric kernel [2] to perform useful operations and even get renderable solids out of them.
So sure, I suppose you could build all of that into Blender. But it's the equivalent of building an entire new complex program into an existing one. It also raises major interoperation issues. These two representations do not easily convert back and forth.
So at that point, you basically have two very different programs in a trenchcoat. So far the ecosystem has evolved towards instead building two different tools that are masters of their respective domains. Perhaps because of the very different complexities inherent in each, perhaps because it makes the handover / conversion from one domain to the other explicit.
1. https://en.m.wikipedia.org/wiki/Boundary_representation
2. https://en.m.wikipedia.org/wiki/Geometric_modeling_kernel
- This doesn't seem right to me. From the article I believe you are referencing ("What if AI made the world’s economic growth explode?"):
> If investors thought all this was likely, asset prices would already be shifting accordingly. Yet, despite the sky-high valuations of tech firms, markets are very far from pricing in explosive growth. “Markets are not forecasting it with high probability,” says Basil Halperin of Stanford, one of Mr Chow’s co-authors. A draft paper released on July 15th by Isaiah Andrews and Maryam Farboodi of MIT finds that bond yields have on average declined around the release of new AI models by the likes of OpenAI and DeepSeek, rather than rising.
It absolutely (beyond being clearly titled "what if") presented real counterarguments to its core premise.
There are plenty of other scenarios that they have explored since then, including the totally contrary "What if the AI stock market blows up?" article.
This is pretty typical for them IME. They definitely have a bias, but they do try to explore multiple sides of the same idea in earnest.
- > Understanding twos complement representation is an essential programming skill
The field of programming has become so broad that I would argue the opposite: the vast majority of developers will never need to think about, let alone understand, two's complement as a numerical representation.
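For anyone who has happily avoided it so far, the entire trick fits in a few lines. A quick illustrative sketch (and about the only place it tends to surface in day-to-day code is at signed/unsigned boundaries like this):

```cpp
#include <cstdint>
#include <cstdio>

// Two's complement in a nutshell: the same bit pattern reads as a
// different number depending on signedness, and negation is
// "flip the bits, then add one".
int main() {
    int8_t x = -5;
    uint8_t bits = static_cast<uint8_t>(x);  // same 8 bits, unsigned view
    std::printf("-5 is stored as 0x%02X (%u when read unsigned)\n",
                static_cast<unsigned>(bits), static_cast<unsigned>(bits));
    // Prints: -5 is stored as 0xFB (251 when read unsigned)

    uint8_t negated = static_cast<uint8_t>(~bits + 1);  // invert, add one
    std::printf("flip-and-add-one gives 0x%02X, i.e. %d\n",
                static_cast<unsigned>(negated),
                static_cast<int>(static_cast<int8_t>(negated)));
    // Prints: flip-and-add-one gives 0x05, i.e. 5
}
```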
- What if the obstacle is not a person? What if something falls off a truck in front of the vehicle? What if wildlife spontaneously decides to cross the road (a common occurrence where I live)?
I don't think these problems can just be assumed away.
- This is an interesting question where I do not know the answer.
I will not pretend to be an expert. I would suggest that "human understanding convenience" is pretty important in safety domains. The famous Brian Kernighan quote comes to mind:
> Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?
When it comes to obscure corner cases, it seems to me that simpler is better. But Waymo does seem to have chosen a different path! They employ a lot of smart folk, and appear to be the state of the art for autonomous driving. I wouldn't bet against them.
- > Cars can stop in quite a short distance.
"Quite a short distance" is doing a lot of lifting. It's been a while since I've been to driver's school, but I remember them making a point of how long it could take to stop, and how your senses could trick you to the contrary. Especially at highway speeds.
I can personally recall a couple (fortunately low stakes) situations where I had to change lanes to avoid an obstacle that I was pretty certain I would hit if I had to stop.
- > They don't work by merely taking a straw poll. They effectively build the joint probability distribution, which improves accuracy with any number of sensors, including two.
Lots of safety-critical systems actually do operate by "voting". The space shuttle control computers are one famous example [1], but there are plenty of others in aerospace. I have personally worked on a few such systems.
It's the simplest thing that can obviously work. Simplicity is a virtue when safety is involved.
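As a toy sketch of what I mean by "simplest thing" (my own illustration, not code from any of those systems): with three redundant channels, take the median and flag anything that strays too far from it.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Toy 2-out-of-3 voter for triple-redundant sensors: the median masks
// any single faulty channel, and a channel that drifts too far from
// the voted value gets flagged for maintenance.
struct VoteResult {
    double value;
    bool suspect[3];
};

VoteResult vote(const double reading[3], double tolerance) {
    double a = reading[0], b = reading[1], c = reading[2];
    // Median of three: a single wild value can never win the vote.
    double median = std::max(std::min(a, b), std::min(std::max(a, b), c));
    VoteResult r{median, {false, false, false}};
    for (int i = 0; i < 3; ++i) {
        r.suspect[i] = std::fabs(reading[i] - median) > tolerance;
    }
    return r;
}

int main() {
    // Channel 2 has failed high; the vote still returns a sane value.
    double readings[3] = {101.2, 100.9, 250.0};
    VoteResult r = vote(readings, 5.0);
    std::printf("voted value: %.1f\n", r.value);  // 101.2
    for (int i = 0; i < 3; ++i) {
        if (r.suspect[i]) {
            std::printf("channel %d flagged as suspect\n", i);
        }
    }
}
```

Note this only works because there are three channels: with two, you can detect a disagreement but you can't resolve it.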
You can of course do sensor fusion and other more complicated things, but the core problem I outlined remains.
> If you are so worried, override the AI in the moment.
This is sneakily inserting a third set of sensors (your own). It can be a valid solution to the problem, but Waymo famously does not have a steering wheel you can just hop behind.
This might seem like an edge case, but edge cases matter when failure might kill somebody.
1. https://space.stackexchange.com/questions/9827/if-the-space-...
- > With single modality sensors, you have no way of truly detecting failures in that modality, other than hacks like time-series normalizing (aka expected scenarios).
"A man with a watch always knows what time it is. If he gains another, he is never sure"
Most safety-critical systems actually need at least three redundant sensors. Two is kinda useless: if they disagree, which one is right?
EDIT:
> If multiple sensor modalities disagree, even without sensor fusion, you can at least assume something might be awry and drop into a maximum safety operation mode.
This is not always possible. You're on a two lane road. Your vision system tells you there's a pedestrian in your lane. Your LIDAR says the pedestrian is actually in the other lane. There's enough time for a lane change, but not to stop.
What do you do?
- The reality is also that nobody (aside from Mark "I Want To Buy a State of the Art AI Research Lab" Zuckerberg) is even offering millions in cold hard cash.
Instead, they're offering something worse: the _chance_ to cash out equity that _might_ be worth that at _some_ point in the future.
Versus spending time with my kid right now. Or any of the hundreds of other more enjoyable things I can do with my time.
They're dangling a lottery ticket in front of us. I've seen the end of that movie several times myself now; enough to know the odds are long.
So yeah: no thanks.
- The advice I've seen with delegation is the exact opposite. Specifically: you can't delegate what you can't do.
Partially because, if all else fails, you'll need to step in and do the thing. Partially because if you can't do it, you can't evaluate whether it's being done properly.
That's not to say you need to be _as good_ at the task as the delegee, but you need to be competent.
For example, take this HBR article [1]. Pervasive in all advice about delegation is the assumption that you can do the task being delegated, but that you shouldn't.
> Just that it's not an expectation, e.g., you don't expect a CEO to be able to do the CTO's job.
I think the CEO role is actually the outlier here.
I can only speak to engineering, but my understanding has always been that VPs need to be able to manage individual teams, and engineering managers need to be somewhat competent if there's some dev work that needs to be done.
This only happens as necessary, and it obviously should be rare. But you get in trouble real quickly if you try to delegate things you cannot accomplish yourself.
- > No, they are deliberately designed to mimic human communication via language, not human thought.
My opinion is that language is communicated thought. Thus, to mimic language, at least really well, you have to mimic thought. At some level.
I want to be clear here, as I do see a distinction: I don't think we can say these things are "thinking", despite marketing pushes to the contrary. But I do think that they are powerful enough to "fake it" at a rudimentary level. And I think that the way we train them forces them to develop this thought-mimicry ability.
If you look hard enough, the illusion of course vanishes, because it is (relatively poor) mimicry, not the real thing. I'd bet we are still a research breakthrough or two away from being able to simulate "human thought" well.
- For this to be a "classic motte and bailey", you will need to point us to instances where _the original poster_ actually suggested the "bailey" (which you characterize as "Rust eliminates all bugs").
It instead appears that you are attributing _other comments_ to the OP. This is not a fair argumentation technique, and could easily be turned against you to make any of your comments into a "classic motte and bailey".