bcrosby95
I might be wrong, but it seems like some people are misinterpreting what is being said here.

Software 3.0 isn't about using AI to write code. It's about using AI instead of code.

So not Human -> AI -> Create Code -> Compile Code -> Code Runs -> The Magic Happens. Instead, it's Human -> AI -> The Magic Happens.
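
To make the distinction concrete, here's a minimal sketch (assuming the OpenAI Python SDK and a "gpt-4o" model, both illustrative choices, not anything from the talk): there is no generated program and no compile step; the prompt is the program.

    # Software 1.0/2.0: ask the model to WRITE a sentiment function, then run it.
    # "Software 3.0": the model IS the function; the prompt replaces the code.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    def classify_sentiment(text: str) -> str:
        # No parser, no rules, no compiled artifact: the model performs
        # the task itself, per request.
        resp = client.chat.completions.create(
            model="gpt-4o",  # illustrative model name
            messages=[
                {"role": "system", "content":
                 "Reply with exactly one word: positive, negative, or neutral."},
                {"role": "user", "content": text},
            ],
        )
        return resp.choices[0].message.content.strip().lower()

    print(classify_sentiment("The update broke everything I relied on."))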


imiric
So... Who builds the AI?

This is why I think the AI industry is mostly smoke and mirrors. If these tools are really as revolutionary as they claim they are, then they should be able to build better versions of themselves, and we should be seeing exponential improvements of their capabilities. Yet in the last year or so we've seen marginal improvements based mainly on increasing the scale and quality of the data they're trained on, and the scale of deployments, with some clever engineering work thrown in.

TeMPOraL
> If these tools are really as revolutionary as they claim they are, then they should be able to build better versions of themselves, and we should be seeing exponential improvements of their capabilities.

Recursive self-improvement is literally the endgame scenario - hard takeoff, singularity, the works. Are you really saying you're dissatisfied with the progress of those tools because they didn't manage to end the world as we know it just yet?

no_wizard
I don’t believe the technology will be sufficiently developed within the next 5 years for recursive self-improvement to work so well that it requires no human intervention. By that I mean it will hit a limit, and the technology will still be a tool, not a sentient (near-sentient?) thing.

I think this will eventually hit a wall, much like visual recognition did in the mid-2010s[0]. It will continue to improve, but not exponentially.

To be fair, I am bullish that it will make white-collar work fundamentally different, but smart companies will use it to accelerate their workforce's productivity, reliability, and delivery, not simply cut labor to the bone, despite that seemingly being every CEO's wet dream right now.

[0]: remember when everyone was making demos and apps that would identify objects and such, and all the facial augmentation stuff? My general understanding is that the tech is now in the incremental improvement stage. I think LLMs will hit the same stage in the near term and likely hover there for quite a while.

TeMPOraL
> I don’t believe the technology will be sufficiently developed within the next 5 years for recursive self-improvement to work so well that it requires no human intervention

I'm personally 50/50 on this prediction at this point. It doesn't feel like we have enough ingredients for end-to-end recursive self-improvement in the next 5 years, but the overall pace is such that I'm hesitant to say it's not likely either.

Still, my reply was to the person who seemed to say they won't be impressed until they see AIs "able to build better versions of themselves" and "exponential improvements of their capabilities" - to this I'm saying, if/when it happens, it'll be the last thing that they'll ever be impressed with.

> remember when everyone was making demos and apps that would identify objects and such, and all the facial augmentation stuff? My general understanding is that the tech is now in the incremental improvement stage.

I thought that a) this got boring, and b) all those advancements got completely blown away by multimodal LLMs and other related models.

My perspective is that we had a breakthrough across the board in this a couple of years ago, after the stuff you mentioned happened, and it isn't showing signs of slowing down.

imiric
No, that's not what I'm saying.

The progress has been adequate and expected, save for a few cases such as generative image and video, which have exceeded my expectations.

Before we reach the point where AI is self-improving on its own, we should go through stages where AI is being improved by humans using AI. That is, if these tools are capable of reasoning and are able to solve advanced logic, math, and programming challenges as shown in benchmarks, then surely they, with assistance from humans, must be better at understanding and improving their own codebases than humans are alone.

My point is that if this was being done, we should be seeing much greater progress than we've seen so far.

Either these tools are intelligent, or they're highly overrated. Which wouldn't mean that they can't be useful, just not to the extent that they're being marketed as.

Eisenstein
> if these tools are capable of reasoning and are able to solve advanced logic, math, and programming challenges as shown in benchmarks

The benchmarks are made of questions that humans created and can answer; they are not composed of anything which a human hasn't been able to answer.

> then surely they, with assistance from humans, must be better at understanding and improving their own codebases than humans are alone

I don't think that logic follows. The models have proven that they can have more breadth of knowledge than a single human, but not more capability.

Also, they have no particular insight into their own codebases. They only know what is in their training data -- they can use that to form patterns and solve new problems, but they still only have that, plus whatever information is given with the question, as base knowledge.

> My point is that if this was being done, we should be seeing much greater progress than we've seen so far.

The point is taken, but I think your reasoning is weak.

> Either these tools are intelligent, or they're highly overrated. Which wouldn't mean that they can't be useful, just not to the extent that they're being marketed as.

I may have missed the marketing you have seen, but I don't see the big AI companies claiming that these are anything but tools that can help humans do things or replace certain human tasks. They do not advertise superhuman capability in intelligence tasks.

I suspect you are seeing a lot of hype and unfounded expectations, and using that as a basis for a calculation. The formula might be right, but the variables are incorrect.

We have seen a LOT of progress with AI and language models in the last few years, but expecting them to go from 'can understand language and solve complicated novel problems' to 'making better versions of themselves using solutions that humans haven't been able to come up with yet' is a bit much.

I don't know if one would call them intelligent, but something can be intelligent and at the same time unable to make substantial leaps forward in emerging fields.

imiric
> The benchmarks are made of questions that humans created and can answer; they are not composed of anything which a human hasn't been able to answer.

Sure, but they do it at superhuman speeds, and if they truly can reason and come up with novel solutions as some AI proponents claim, then they would be able to come up with better answers as well.

So, yes, they do have more capability in certain respects than a human. If nothing else, they should be able to draw from their vast knowledge base in ways that a single human never could. So we should expect to see groundbreaking work in all fields of science. Not just in the pattern-matching applications we've already seen in some cases, but in tasks that require actual reasoning and intelligence, particularly programming.

> Also, they have no particular insight into their own codebases.

Why not? Aren't most programming languages in their training datasets, and isn't Python, the language most AI tools are written in, one of the easiest languages to generate? Furthermore, can't AI developers feed the model's own codebase into it via context, RAG, etc., in the same way that most other programmers do?
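
To sketch what I mean (everything here is an illustrative assumption - the repo path, the model name, the prompt - not anyone's actual setup):

    # Rough sketch of "feed the codebase in as context": concatenate source
    # files into the prompt and ask for improvements.
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()

    source = "\n\n".join(
        f"# --- {path} ---\n{path.read_text()}"
        for path in Path("inference_engine").rglob("*.py")  # hypothetical repo
    )

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content":
                   source + "\n\nSuggest one concrete performance improvement."}],
    )
    print(resp.choices[0].message.content)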

> I may have missed the marketing you have seen, but I don't see the big AI companies claiming that they are anything but tools that can help humans do things or replace certain human tasks. They do not advertise super human capability in intelligence tasks.

You are downplaying the claims being made by AI companies and their proponents.

According to Sam Altman just a few days ago[1]:

> We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence

> we have recently built systems that are smarter than people in many ways, and are able to significantly amplify the output of people using them

> We already hear from scientists that they are two or three times more productive than they were before AI.

If a human assisted by AI can be more productive than a human alone, then why isn't this productivity boost producing improvements at a faster rate than what the tech industry has been able to deliver so far? Why aren't AI companies dogfooding their products and delivering actual value to humanity beyond benchmark results and shiny demos?

Again, none of this requires actual superhuman levels of intelligence or reaching the singularity. But just based on what they're telling us their products are capable of, the improvements to their own capabilities should be exponential by now.

[1]: https://blog.samaltman.com/the-gentle-singularity

mlboss
Recursive self-improvement will never happen. We will hit physical limitations before that: energy, rare earth minerals, datacenters, etc. The only way we can have recursive self-improvement is if robots take over and start expanding to other planets/solar systems.

fellatio
Nope. Just that it either 1. is better than people, or 2. isn't better than people. Pick one!

If the former, then yes, singularity. The only hope is its "good will" (wouldn't bet on that) or an off switch.

If the latter, you still need more workers (programmers, or whatever they'll be called) due to increased demand for compute solutions.

TeMPOraL
> Nope. Just that it either 1. is better than people, or 2. isn't better than people. Pick one!

That's too coarse a choice. It's better than people at an increasingly large number of distinct tasks, but it's not good enough to recursively self-improve just yet - though it is doing so indirectly: it's useful enough to aid researchers and businesses in creating the next generation of models. So in a way, the recursion and the resulting exponent are already there; we're just in such early stages that it looks like linear progress.
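
(Concretely: e^{rt} = 1 + rt + O((rt)^2), so while rt is small an exponential is numerically indistinguishable from a straight line; the curvature only becomes obvious later.)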

fellatio
Thanks. Your nuanced version is better. In that version I can still ignore most of LinkedIn and Twitter and assume there will still be a need for people. Not just at OMGAD (OpenAI...) but at thousands of companies.

yusina
Moving goal posts? This was a response to the claim that AIs are the new code.

TeMPOraL
Not really. GP says they expect to see exponential improvements before they'll be impressed - seemingly without realizing what such an exponent will look like once it's actually happening and starts to look obviously exponential.

iLoveOncall
> If these tools are really as revolutionary as they claim they are, then they should be able to build better versions of themselves, and we should be seeing exponential improvements of their capabilities. Yet in the last year or so we've seen marginal improvements based mainly on increasing the scale and quality of the data they're trained on, and the scale of deployments, with some clever engineering work thrown in.

Yes, and we've actually been able to witness in public the dubious contributions that Copilot has made to public Microsoft repositories.

> Who builds the AI?

3 to 5 companies, instead of the hundreds of thousands that sell software now.

bmicraft
The AI isn't much easier when you consider that the "AI" step is actually: create dataset -> train model -> fine-tune model -> run the model to train a much smaller model -> ship the much smaller model to end devices.
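
The "run the model to train a much smaller model" step is knowledge distillation. A toy sketch in PyTorch (shapes and architectures made up, nothing from any real pipeline):

    # Toy distillation step: the big "teacher" labels data with soft
    # probabilities, and a small "student" is trained to match them.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
    student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    T = 2.0  # temperature: softens the teacher's distribution

    x = torch.randn(64, 32)          # stand-in batch of inputs
    with torch.no_grad():
        teacher_logits = teacher(x)  # the big model "labels" the batch

    # Classic distillation loss: KL divergence between softened distributions.
    loss = F.kl_div(
        F.log_softmax(student(x) / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    loss.backward()
    opt.step()
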
autobodie
I don't think people are misinterpreting. People just don't find it convincing or intriguing.

zie1ony
This is a great idea, until you have to build something.

layer8
Until you have to reliably automate something, I would say.

fellatio
Let alone productionize it! And god forbid maintain it. And have support that doesn't crap out.

obiefernandez
Self-plug: I wrote a whole bestselling book on this exact topic:

https://leanpub.com/patterns-of-application-development-usin...

adriand
It’s like what a friend of mine who has an AI company said to me: the future isn’t building a CRM with AI. The future is saying to the AI, "act like a CRM."
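
A minimal sketch of what that looks like (SDK, model name, and prompts are illustrative assumptions): the "database" is nothing but the conversation history.

    # "Act like a CRM": no schema, no app, just a standing instruction,
    # with the chat history as the only store of record.
    from openai import OpenAI

    client = OpenAI()
    history = [{"role": "system", "content":
                "Act as a CRM. Track the contacts, deals, and follow-ups the "
                "user mentions, and answer questions about them."}]

    def crm(message: str) -> str:
        history.append({"role": "user", "content": message})
        resp = client.chat.completions.create(model="gpt-4o", messages=history)
        reply = resp.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        return reply

    crm("Met Dana Kim at the expo; she wants a quote for 50 seats by Friday.")
    print(crm("What follow-ups do I have this week?"))
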
__loam
And it won't work as well as an actual CRM, because you've scrubbed all the domain knowledge of that software, and of how it ought to work, out of the organization.

agarren
That jibes with what Nadella said in an interview not too long ago. Essentially, SaaS apps disappear entirely as LLMs interface directly with the underlying data store. The unspoken implication is that software as we understand it goes away, as people interface with LLMs directly rather than with ~~computers~~ software at all.

I kind of expect that from someone heading a company that appears to have sold the farm in an AI gamble. It’s interesting to see a similar viewpoint here (all biases considered).

Vegenoid
> people interface with LLMs directly rather than software at all

What does this mean? An LLM is used via a software interface. I don’t understand how “take software out of the loop” makes any sense when we are using reprogrammable computers.

FridgeSeal
It’s just…the vibe of it man! It’s LLM’s! They’ll just…do things! And stuff…it’ll just happen!!! Don’t worry about the details!!!!

ethbr1
The steelman for it would be:

Our current computing paradigm is built on APIs stacked on APIs.

Those APIs exist to standardize communication between entities.

LLMs are pretty good at communicating between entities.

Why not replace APIs with some form of LLM?

The rebuttal would be around determinism and maintainability, but I don't think the steelman argument is weak enough to dismiss out of hand. Granted: these would likely be highly-tuned, more deterministic, specialized LLMs.
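
For concreteness, a minimal sketch of what "replace the API with an LLM" could mean (service name, model, and prompts are assumptions, not anyone's real design):

    # Instead of POST /v1/refunds {"order_id": ..., "amount": ...},
    # the service boundary is a sentence in, a sentence out.
    from openai import OpenAI

    client = OpenAI()

    def call_billing_service(request: str) -> str:
        # An "endpoint" whose contract is natural language.
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content":
                 "You are the billing service. Respond with the result of the request."},
                {"role": "user", "content": request},
            ],
        )
        return resp.choices[0].message.content

    print(call_billing_service("Refund order 1234 in full and confirm the amount."))

The determinism rebuttal is visible right in the sketch: nothing pins down the response format, so every caller becomes a parser of prose.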

Vegenoid
Maybe this is a failure of my imagination, but I still don’t understand how “replace an API with an LLM” makes sense. LLMs just generate text; the only way they have of interacting with software is by generating text that the software can parse - a.k.a. an API call.

Maybe I have some misconception here. I think seeing a program or system that is doing this “replace APIs with LLMs” thing would help me understand.

Eisenstein
I think it would take more the form of 'the LLM makes a backend solution using deterministic code, which it then uses to solve the problem'. Since LLMs are already extremely good at code, they could code the solution to the problem and use it internally to solve the problem at hand. They would manage the information exchange and the operations, but the results would come from connected pieces of bespoke software.
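
A minimal sketch of that flow (model name and prompt are assumptions, and exec'ing model output is obviously unsafe outside a sandbox):

    # One-time codegen: the model writes ordinary deterministic code, and
    # that code, not the model, handles every subsequent request.
    from openai import OpenAI

    client = OpenAI()

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content":
                   "Write a Python module defining "
                   "monthly_payment(principal, annual_rate, years) -> float "
                   "for a standard amortized loan. Code only, no prose, no fences."}],
    )

    namespace: dict = {}
    exec(resp.choices[0].message.content, namespace)  # unsafe outside a sandbox
    # From here on, the work is deterministic code, not model inference:
    print(namespace["monthly_payment"](250_000, 0.05, 30))
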
__loam
This industry is so tiring

mattgreenrocks
Definitely. And it gets more tiring the more experience you have, because you've seen countless hype cycles come and go with very little change. Each time, the same mantra is chanted: "but this time, it's different!" Except, it usually isn't.

Started learning metal guitar seriously to forget about industry as a whole. Highly recommended!
