
I could see it being the case that driving is a fairly general problem, and thus models intentionally designed to be general end up doing better than models designed with the misconception that you need a very particular set of driving-specific capabilities.

shakna
Driving is not a general problem, though. It's a contextual landscape of fast-paced reactions and predictions. Both are required, and done regularly by the human element. The exact nature of every reaction, and every prediction, changes vastly within the context window.

You need image processing just as much as you need scenario management, and they're orthogonal to each other, as one example.
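
One way to picture that orthogonality is as two modules with independent interfaces, where either can be swapped out without touching the other. A hypothetical sketch (none of these names come from a real stack):

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Detection:
    label: str        # e.g. "pedestrian", "stop_sign"
    distance_m: float

class Perception(Protocol):
    def detect(self, frame: bytes) -> list[Detection]: ...

class ScenarioManager(Protocol):
    def plan(self, detections: list[Detection]) -> str: ...

class NaivePlanner:
    def plan(self, detections: list[Detection]) -> str:
        # Scenario logic never touches pixels; it consumes symbols only.
        if any(d.label == "pedestrian" and d.distance_m < 20 for d in detections):
            return "brake"
        return "cruise"
```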

If you want a general transport system... We do have that. It's called rail. (And it can be, and has been, automated.)

TeMPOraL
It partially is. You have the specialized part: maneuvering a fast-moving vehicle in the physical world, trying to keep it under control at all times and never colliding with anything. Then you have the general part: navigating the human environment. That's lanes and traffic signs and road works and school buses; that's kids on the road and badly parked trailers.

The current breed of autonomous driving systems has problems with exceptional situations - but based on everything I've read so far, those are exactly the kind that would benefit from a general system able to understand the situation it's in.
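
One toy way to express that split, assuming the narrow planner reports some confidence score (all names here are made up for illustration, not any real autonomy API):

```python
def drive_step(scene, narrow_planner, general_model):
    """One decision step: fast specialized path, general fallback.

    `scene`, `narrow_planner`, and `general_model` are placeholders,
    not any real autonomy API.
    """
    action, confidence = narrow_planner(scene)
    if confidence < 0.8:  # exceptional situation: school bus, road works...
        # The general model is slower, but it can actually understand
        # the scene; the narrow controller keeps the car stable meanwhile.
        action = general_model.interpret(scene)
    return action
```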

tshaddox OP
Yes, that’s exactly what I meant. I’d go even further and say the hard parts of driving are the parts where you are likely better off with a general model. And it’s not just signs, construction, police stopping traffic, etc. Even just basic navigation amongst traffic seems to require a general model of the other nearby drivers. It’s important to be able to model drivers’ intentions, and also to drive your own car in a predictable manner.
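
A minimal sketch of what modeling nearby drivers might look like, assuming the crudest possible prior, constant velocity (real systems use far richer intent models; all names here are hypothetical):

```python
import numpy as np

def predict_positions(pos, vel, horizon_s=3.0, dt=0.5):
    """Constant-velocity rollout of a nearby car's future positions.

    pos, vel: 2D numpy arrays (x, y) in metres and metres/second.
    Even this toy shows that merging into traffic forces you to keep
    *some* forward model of every driver around you.
    """
    steps = int(horizon_s / dt)
    return np.array([pos + vel * dt * (k + 1) for k in range(steps)])

# e.g. a car 10 m ahead, drifting toward our lane at 0.5 m/s laterally:
future = predict_positions(np.array([10.0, 0.0]), np.array([8.0, 0.5]))
```
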
melvinmelih
> Driving is not a general problem, though.

But what's driving a car? A generalist human brain that has been trained for ~30 hours to drive a car.

shakna
Human brains aren't generalist!

We have multiple parts of the brain that interact in vastly different ways! Your cerebellum won't be performing the role of the pons.

Most parts of the brain cannot take over for others. Self-healing is the exception, not the rule. Yes, we have a degree of neuroplasticity, but there are many limits.

(Sidenote: Driver's license here is 240 hours.)

> We have multiple parts of the brain that interact in vastly different ways!

Yes, and thanks to that, human brains are generalist.

shakna
Only if it were a singular system, which it is not. [0]

For example... The nerve cells in your gut may speak to the brain, and interact with it in complex ways we are only just beginning to understand, but they are separate systems that each exert control over the nervous system, and over other systems. [1]

General Intelligence, the psychological theory, and General Modelling, whilst sharing words, share little else.

[0] https://doi.org/10.1016/j.neuroimage.2022.119673

[1] https://doi.org/10.1126/science.aau9973

yusina
240 hours sounds excessive. Where is "here"?

> Human brains aren't generalist!

What? Human intelligence is literally how AGI is defined. The brain's physical configuration is irrelevant.

shakna
A human brain is not a general model. We have multiple overlapping systems. The physical configuration is extremely relevant to that.

AGI is defined in terms of "General Intelligence", a theory to which general modelling is irrelevant.

anythingworks
Exactly! I think that was Tesla's vision with self-driving to begin with... they tried to frame it as a problem general enough that solving it would also answer questions of more general intelligence ('AGI'), i.e. cars should use vision just like humans do.

But in hindsight, it looks like this slowed them down quite a bit despite their being early to the space...

mannicken
Speed and Moore's law. You don't just need to make a decision without hallucinations; you need to make it fast enough for it to propagate to the power electronics and hit the gas/brake/turn the wheel/whatever. Over and over and over again, on thousands of different tests.
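
The latency point can be made concrete with a hard-deadline control loop (a sketch with made-up numbers; a real ECU would do this in C on an RTOS, not in Python):

```python
import time

CYCLE_S = 0.01  # 10 ms control period: miss it and the actuators act on stale data

def control_loop(decide, actuate, safe_stop):
    while True:
        start = time.monotonic()
        command = decide()   # must complete in bounded time, every cycle
        actuate(command)
        elapsed = time.monotonic() - start
        if elapsed > CYCLE_S:
            # A deadline miss is a fault, not a slowdown: fail safe.
            safe_stop()
            break
        time.sleep(CYCLE_S - elapsed)
```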

A big problem I am noticing is that the IT culture over the last 70 years has existed in a state of "hardware gonna get faster soon". And over the last ten years we've had a "hardware can't get faster bc physics, sorry" problem.

The way we've been making software in the '90s and '00s just isn't going to happen anymore. We are used to throwing more abstraction layers (C -> C++ -> Java -> vibe coding, etc.) at the problem and waiting for the guys in the fab to hurry up and make their hardware faster so our new abstraction layers can work.

Well, you can fire the guys in the fab all you want, but no matter how much they yell at nature, it doesn't seem to care. They told us, the embedded C++ monkeys, to spread the message. Sorry, Moore's law is over, boys and girls. I think we all need to take a second to take that in and realize its significance.

[1] The "guys in the fab" are fictional characters and any similarity to the real world is a coincidence.

[2] No C++ monkeys were harmed in the making of this comment.
