I'm still waiting for the future where a robot maid does my dishes, hangs my clothes, tidies and vacuums my apartment while I'm working on my piano skills. But at least I now have a robot vacuum with a camera that avoids whatever toys I forgot to pick up.
There have also been plenty of times I've had to intervene, but at this point I think we're closer than we are far.
So I think your take here is a bit outdated. It was good a couple years ago, though.
Now I get a lot of calls from the team asking for help fixing code they got from an AI. Overall it's improving the group's code quality; I no longer have to walk people through the basics of setting up their approach/solution. I'll admit there's a little difficulty dealing with pushback on my guidance, e.g. “well, ChatGPT said I should use this library” when the core SDK already supports something more recent than what the AI was trained on.
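To sketch the kind of thing I mean (my example, in Python; the same pattern shows up in any stack): models trained on older code keep reaching for pytz, even though the standard library has shipped zoneinfo since Python 3.9.

    # What the AI, trained on older code, tends to suggest:
    import pytz
    from datetime import datetime
    berlin = pytz.timezone("Europe/Berlin")
    dt = berlin.localize(datetime(2024, 6, 1, 12, 0))  # third-party dependency, quirky localize() API

    # What the standard library has supported natively since Python 3.9:
    from zoneinfo import ZoneInfo
    dt = datetime(2024, 6, 1, 12, 0, tzinfo=ZoneInfo("Europe/Berlin"))

Neither is wrong, exactly, but the first drags in a dependency for something the platform now does natively.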
There was a similar divide in the 2000s when Google Search got ubiquitous and writing code got easier than ever. I know a lot of people who quit to become 'managers' because they didn't want to fix code that, most of the time, had been copy-pasted from the internet. Similar arguments about correctness, verbosity, and even future maintainability were made. Everybody knows how that went.
Millennials are just gradually turning into boomers as they enter their 40s.
The fact that LLMs currently do not really understand the answers they're giving you is a pretty significant limitation. It doesn't make them useless, but it means they're not as useful at a lot of tasks as people think. And that limitation is fundamental to how LLMs work. Can it be overcome? Maybe. There's certainly a ton of money behind it and a lot of smart people are working on it. But is it guaranteed?
Perhaps I'm wrong and we already know that it's simply a matter of time. I'd love to read a technical explanation for why that is, but mostly I see people rolling their eyes at us mere mortals who don't see how this will obviously change everything, as if we're too small-minded to understand what's going on.
To be extra clear, I'm not saying LLMs won't be a technological innovation as seismic as the invention of the car. My confusion is why for some there doesn't seem to be room for doubt.
LLMs piece together language based on other language they've seen. They're not intelligent; they're language tools. Currently we have no idea what will happen once there are no more human inputs to train the LLMs on. We might end up wishing we hadn't built our whole lives around LLMs.
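Here's that "piecing together" in toy form, as a bigram model in Python. Real LLMs are enormously more capable, but the training objective is the same idea: predict the next token from text already seen.

    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat the dog sat on the rug".split()

    # Record which word follows which in the training text.
    following = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        following[a].append(b)

    # Generate by repeatedly sampling an observed continuation.
    word, out = "the", ["the"]
    for _ in range(8):
        nxt = following.get(word)
        if not nxt:  # no observed continuation; the toy model dead-ends here
            break
        word = random.choice(nxt)
        out.append(word)
    print(" ".join(out))

It can only ever recombine what it was fed, which is exactly the worry about running out of human inputs.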
Cars produce toxic fumes, air pollution, noise pollution with their engine noises and horns, light pollution with their headlights pointed directly into my fucking eye; they consume incredible amounts of resources to function and a fuckton more for road maintenance; they waste millions of man-hours in soul-crushing traffic jams; all that for them to be slower than me on my fucking bike inside the city.
Yeah the horse and buggy manufacturers were right, cars were a mistake. We just doubled down on that mistake.
When it comes to personal transport the current best invention is the safety bicycle. It's truly a marvel and can never be celebrated enough. A tubular frame, ball bearings, cable-actuated brakes and gears, spoke-tensioned wheels and pneumatic tyres provide a stiff yet lightweight machine that needs very little maintenance and no more energy than walking.
But unfortunately the car used all of that technology in a hilariously inefficient way, and unbridled use of fossil fuels made it attractive to drive a vehicle that needs about a million joules just to get itself moving, empty (rough numbers below). If we weren't so greedy, and instead considered each gain carefully, we might never have ended up with cars.
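Back-of-the-envelope, using made-up but typical numbers (a ~1500 kg car at 120 km/h versus ~85 kg of rider plus bike at 20 km/h), kinetic energy E = ½mv²:

    # Energy just to get the mass up to speed, ignoring friction and drag.
    car_kg, car_ms = 1500, 120 / 3.6    # ~33.3 m/s
    bike_kg, bike_ms = 85, 20 / 3.6     # ~5.6 m/s

    print(0.5 * car_kg * car_ms**2)     # ~833,000 J: close to a million joules
    print(0.5 * bike_kg * bike_ms**2)   # ~1,300 J: over 600x less

And most of that energy goes into moving the car itself, not you.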
But, alas, we're no better than a dog who got access to the food cupboard and made itself sick.
https://en.wikipedia.org/wiki/1919_Motor_Transport_Corps_con...
If an LLM takes my job, then we have reached the singularity. Jobs won't matter anymore at that point.
What is your similar plan for LLMs?
Analogies always end somewhere, I’m just curious where yours does.
Don’t we already see a shift in this direction?
The C-suite is being sold the story that this new tech will let them fire x% of their workforce. The workforce says the tech is not capable of replacing people.
The C-suite doesn't have the expertise to understand why and how exactly the tech is not ready, but it does understand people, and suspects the workforce's warnings are just a self-preservation impulse.
The C-suite also gets huge bonuses if it reduces costs.
So they are very strongly encouraged to believe the story, and the ones actually doing the work and knowing the difference are left to watch the company's products get destroyed.
It's true that the limitations won't magically go away. Some may _never_ go away. I have a suspicion that neuroticism and hallucination are intrinsic qualities of intelligence, artificial or otherwise.
Many of the criticisms leveled could readily be applied to a fellow human. It seems what the naysayers really don't like are _other people_, especially imperfect ones.
And yet, I don't see much evidence that software quality is improving, if anything it seems in rapid decline.
Does it matter? Ever since FORTRAN and COBOL made programming easier for the unwashed masses, people have argued that all the 'noobs' entering the field would lead to declining software quality. I'm seeing novice developers in all kinds of fields happily solving complex real-world problems and making themselves more productive using these tools. They're solving problems that only a few years ago would have required an expensive team of developers and ML experts to pull off. Is the software a great feat of high-quality software engineering? Of course not. But it exists and it works. The alternative to them kludging something together with LLMs and OpenAI API calls isn't high-quality software written by a team of experienced software engineers; it's not having the software at all.
Software quality, for the most part, is a cost center, and as such will always be kept to the minimum bearable.
As the civil engineering saying goes, any fool can make a bridge that stands, it takes an engineer to build a bridge that barely stands.
And anyway, all of those concerns are orthogonal to the tooling used, in this case LLMs.
[0] Things we now take for granted, such as automated testing, safer languages, and CI/CD, make for far better software than when we used to roll our own crypto in C.
This replaces the most human occupation of all: thinking. So young people go ahead and steal the whole open source corpus that they did not write. And are smug about it.
If your projections of progress are true, at least 90% of the people here who praise the code laundering machines will be made redundant.
1. They will scare the horses. A funky 'automobile' is no match for a good team of horses.
2. How will they be able to deal with our muddy, messy roads?
3. Their engines are unreliable and prone to breaking down, stranding you in the middle of nowhere and leaving you to fix it yourself.
4. Their drivers can't handle the speed; too many miles driven means unsafe driving. We should stick to horses, they are manageable.
Meanwhile I'm watching a community of mostly young people building and using tools like Copilot, Cursor, Replit, JACoB, etc., and wiring up LLMs into increasingly complex workflows.
This is a snapshot of the current state, not a reflection of the future. Give it 10 years.