Yes, telling a subordinate in natural language what you need is called being a product manager. The problem is that this subordinate has encyclopedic knowledge but is also extremely dumb in many respects.
I guess this is good for people who got into CS, hate the craft, and would rather do management, but in many cases you still need someone on your team with an IQ higher than room temperature to deliver a product. The only "fundamental" shift here is killing off the entry-level coder at the big corp tasked with menial and boilerplate work, now that you can hire a mechanical replacement from an AI company for a few hundred dollars a month.
People have said this every year since the 1950s.
No, it is not happening. LLMs won't help.
Writing code is easy; understanding the problem domain is what's hard. LLMs won't help you understand the problem domain in a formal manner. (In fact, they might make it even more difficult.)
It’s in his financial self-interest to overinflate LLMs beyond their “cool math bar trick” level. They are a lossy text-compression technique built on stolen text sources.
All this “magic” is just function calls behind the scenes doing web/database/math/etc. for the LLM; see the sketch below.
Anyone who claims LLMs have a soul either truly doesn’t understand how they work (association rules++) or has hitched their financial wagon to this grift. It’s the crypto bros looking for their next score.
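A minimal sketch of what "function calls behind the scenes" means, assuming a hypothetical tool-dispatch loop; lookup_weather, the TOOLS registry, and the JSON call format are all illustrative, not any real vendor's API:

    import json

    def lookup_weather(city: str) -> dict:
        # Stand-in for the real web/database/math work done outside the model.
        return {"city": city, "temp_c": 21}

    TOOLS = {"lookup_weather": lookup_weather}  # hypothetical tool registry

    def handle_model_output(output: str) -> str:
        # If the model emits a tool call like {"tool": ..., "args": {...}},
        # ordinary deterministic code executes it and returns the result.
        try:
            call = json.loads(output)
            return json.dumps(TOOLS[call["tool"]](**call["args"]))
        except (json.JSONDecodeError, KeyError, TypeError):
            return output  # plain text, no tool call involved

    print(handle_model_output('{"tool": "lookup_weather", "args": {"city": "Oslo"}}'))

The model only produces the JSON request; everything that looks like capability is the ordinary code around it.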
We already have natural languages for human systems, and the only way they work is through shared metaphors and rewards and punishments: everyone is incentivized to do a good job.
Using a formal language is a feature, not a bug. It is a cornerstone of all human engineering and scientific activity and is the _reason_ why these disciplines are successful.
What you are describing (i.e. ditching formal languages in favor of natural language) is moving humanity back towards magical thinking, shamanism and witchcraft.
Dijkstra made a similar argument in a brief essay [1], which is timely for this debate about why "English is the new programming language" is not well-founded.
I quote a brief snippet here:
"The virtue of formal texts is that their manipulations, in order to be legitimate, need to satisfy only a few simple rules; they are, when you come to think of it, an amazingly effective tool for ruling out all sorts of nonsense that, when we use our native tongues, are almost impossible to avoid."
[1] https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...
"Any sufficiently advanced technology is indistinguishable from magic."
Would you say that ML isn't a successful discipline? ML is basically a balance between "formal language" (papers/algorithms) and "non-deterministic outcomes" (weights/inference), yet it seems useful in a wide range of applications, even if you don't think about LLMs at all.
> towards magical thinking, shamanism and witchcraft.
I kind of feel like if you want to make a point about how something is bullshit, you probably don't want to call it "magical thinking, shamanism and witchcraft": no matter how good your point is, if you end up basically re-inventing the witch hunt, how is what you're saying not bullshit too, just in the other direction?
If we take object detection in computer vision, the detection by itself is not accurate, but it helps with resource management: instead of expensive continuous monitoring, we now have something cheaper that makes the expensive part discrete.
But something deterministic would always be preferable, because you only need to do verification once.
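To make the resource-management point concrete, here is a minimal sketch; the detector, the threshold, and the per-frame loop are stand-ins, not any particular system:

    import random

    def cheap_detector(frame) -> float:
        # Stand-in for a fast, imperfect model (e.g. a small object detector);
        # it returns a confidence score, not a guaranteed answer.
        return random.random()

    def expensive_analysis(frame) -> str:
        # Stand-in for the costly step: a big model, or a human reviewer.
        return f"full analysis of frame {frame}"

    THRESHOLD = 0.95  # tuned for an acceptable false-positive rate

    for frame in range(1000):                   # the "continuous" stream
        if cheap_detector(frame) >= THRESHOLD:  # cheap check on every frame
            print(expensive_analysis(frame))    # expensive work only when triggered

The trade-off is exactly the non-determinism above: a missed detection means the expensive step never runs, which is why you still have to verify.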
Not yet it isn't; all I am seeing are tools to replace programmers and artists :-/
Where are the tools that take in 400 recipes and spit all of them out in a formal structure (a poster upthread literally gave up on trying to get an LLM to do this)? Where are the tools that can replace the 90% of office staff who aren't programmers?
Maybe it's a successful low-code industry right now, but it's not really a successful AI industry.
You're missing a huge part of the ecosystem; ML is so much more than just "generative AI", which seems to be the extent of your experience so far.
Weather prediction, computer vision, speech recognition, medical research, and more are already improved by various machine-learning techniques, and were even before the current LLM/generative-AI wave. Wikipedia has a list of ~50 areas where ML is already being used, in production, today ( https://en.wikipedia.org/wiki/Machine_learning#Applications ) if you're feeling curious about exploring the ecosystem more.
I'm not missing anything; I'm saying the current boom is being fueled by claims of "replacing workers", but the only class of AI being funded to do that is LLMs, and the only classes of worker that might get replaced are programmers and artists.
Karpathy's video, and this thread, are not about the un-hyped ML stuff that has been employed in various disciplines since 2010 and has not been proposed as a replacement for workers.
The usefulness of LLMs has yet to be proven. So far there is more marketing in them than actual, real-world results, especially compared to civil and mechanical engineering, maths, electrical engineering, and the plethora of other disciplines and methods that deliver real-world results.
What about ML (machine learning) as a whole? I deliberately wrote ML instead of LLMs just to avoid this specific tangent. Are your feelings about that field the same?
Even if improvements level off and start plateauing, things will still get better, and for careful, guided, educated use, LLMs have already become a great accelerator in many ways. Stack Overflow is basically dead now, which in itself is a fundamental shift from just 3-4 years ago.