> I think the real risk is that dumping out loads of boilerplate becomes so cheap and reliable that people who can actually fluently design coherent abstractions are no longer as needed.
Cough front-end cough web cough development. Admittedly, original patterns can still be invented, but many (most?) of us don't need that level of creativity in our projects.
AI can write you an entire CRUD app in minutes, and with some back-and-forth you can have an actually-good CRUD app in a few hours.
But AI is not very good (anecdotally, based on my experience) at writing fintech-type code. It's also not very good at intricate security work like heap overflows. I've never tried, but based on my experience with the former two topics, I would certainly never trust it to write cryptography correctly.
All of the above is "coding", but AI is only good at a subset of it.
The issue is and always has been maintenance and evolution. Early missteps cause limitations, customer volume creates momentum, and suddenly real engineering is needed.
I’d be a lot more worried about our jobs if these systems were explaining to people how to solve all their problems with a little Emacs scripting. As is they’re like hyper aggressive tech sales people, happy just to see entanglements, not thinking about the whole business cycle.
But I don’t think I’ve seen pure CRUD on anything other than prototype. Add an Identity and Access Management subsystem and the complexity of requirements will explode. Then you add integration to external services and legacy systems, and that’s where the bulk of the work is. And there’s the scalability issue that is always looming.
Creating a CRUD app is barely a step above starting a new project with the IDE wizard.
For you, maybe. But for a non-programmer who's starting a business or just needs a website, it's the difference between hiring some web dev firm and doing it themselves.
> it's the difference between hiring some web dev firm and doing it themselves.
Anecdote, but I've had a lot of acquaintances who started at both "hiring some web dev firm" and "doing it themselves", with the results largely being the same: "help me fix this unmaintainable mess and I will pay you X"... jmo, but I suspect LLMs will let the latter go further before the "help me" phase, though I feel like that ain't going away completely...
My wife's sister and her husband run a small retail shop in $large_city. My sister-in-law taught herself how to set up and modify a website with a shopify storefront largely with LLM help. Now they take online orders. I've looked at the code she wrote and it's not pretty but it generally works. There will probably never be a "help me fix this unmaintainable mess and I will pay you" moment in the life of that business.
The crux of my point is this: In 2015 she would have had to hire somebody to do that work.
This segment of the software industry is where the "LLMs will take our jerbs" argument is coming from.
The people who say "AI is junk and it can't do anything right" are simply operating in a different part of the industry.
Perhaps the debate is on what constitutes "actually-good". Depends where the bar is I suppose.
Definitely this. When I use AIs for web development they do an ok job most of the time. Definitely on par with a junior dev.
For anything outside of that they're still pretty bad. Not useless by any stretch, but it's still a fantasy to think you could replace even a good junior dev with AI in most domains.
I am slightly worried for my job... but only because AI will keep improving and there is a chance it will be as good as me one day. Today it's not a threat at all.
If you think LLMs are “better programmers than you,” well, I have some disappointing news for you that might take you a while to accept.
This is a common take, but it hasn't been my experience. LLMs produce results that vary from expert-level all the way down to slightly better than Markov chains. The average result might equal a junior developer's, and the worst case doesn't happen often, but the fact that it happens from time to time makes them completely unreliable for a lot of tasks.
Junior developers are much more consistent. Sure, you'll find the occasional developer who would delete the test file rather than fix the tests, but either they'll learn their lesson after seeing your wth face, or you can fire them. You can't do that with LLMs.
We'd get somewhere if every one of these anecdotes came tagged with:
- Language
- Total LOC
- Subject matter expertise required
- Total dependency chain
- Subjective score (audited randomly)
And we can start doing some analysis. Otherwise we're pissing into ten kinds of winds.
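As a minimal sketch of what that analysis could look like (the field names, scales, and sample values here are my own invention, not from the thread), each anecdote might be logged as a small record and then aggregated:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class TaskReport:
    """One anecdote about an LLM coding task, tagged for later aggregation."""
    language: str            # e.g. "rust", "html/css"
    total_loc: int           # rough size of the change or project
    domain_expertise: int    # 1-5: subject matter expertise required
    dependency_chain: int    # transitive dependencies touched
    subjective_score: float  # 0-10, audited randomly

# a couple of made-up sample entries
reports = [
    TaskReport("html/css", 800, 1, 12, 9.0),
    TaskReport("rust", 15_000, 4, 40, 3.5),
]

# trivial analysis: average subjective score per language
by_lang = defaultdict(list)
for r in reports:
    by_lang[r.language].append(r.subjective_score)
averages = {lang: sum(s) / len(s) for lang, s in by_lang.items()}
print(averages)
```

Even a toy schema like this would let the "earth-shattering at webapps, lost in a large Rust system" pattern show up in data rather than anecdotes.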
My own subjective experience: earth-shattering at webapps in HTML and CSS (because I'm terrible and slow at them), annoyingly good but usually a bit wrong at planning and optimization in Rust, and horribly lost at systems design or debugging a reasonably large Rust system.
Except for one point: junior developers can learn from their egregious mistakes; LLMs can't, no matter how strongly worded your system prompt is.
In a functional work environment, you build trust with your coworkers little by little. The pale equivalent with LLMs is improving system prompts and writing more and more AI directives that may or may not be followed.
Or rather, it's more like a contractor. If I don't like the job they did, I don't give them the next job.
But why would you do that? Wouldn't you just have your own library of code eventually that you just sell and sell again with little tweaks? Same money for far less work.
Besides, not all programming work can be abstracted into a library and reused across projects: not because it's technically infeasible, but because the client doesn't want it, it can't be done for legal reasons, or the development process at the client's organization simply doesn't support that workflow. Those are just the reasons off the top of my head that I've encountered before, and I'm sure there are more.
> cannot for legal reasons or ...
Sure, you can't copy trade secrets, but that's also not the boilerplate part. Copying e.g. a class hierarchy and renaming all the names and replacing the class contents that represent the domain, won't be a legal problem, because this is not original in the first place.
Some absolutely do. I know programmers who entered web development at the same time as me, and now after decades they're still creating typical CRUD applications for whatever their client today is, using the same frameworks and languages. If it works, makes enough money and you're happy, why change?
> Copying e.g. a class hierarchy and renaming all the names and replacing the class contents that represent the domain, won't be a legal problem, because this is not original in the first place.
Some code you produce for others definitively falls under their control, though it obviously depends on the contracts and the laws of the country you're in. But I've written code for others that I couldn't just "abstract into a FOSS library and use in this project", even when it wasn't trade secrets or whatnot, just some utility for reducing boilerplate.
That is not what I meant. My idea was more like: copy ten lines from this project, a few lines from that project, the class from over here, but rework every line before the commit...
I shouldn't have used the word "library", as I did not mean output from the linker, but rather the colloquial meaning of a loose collection of snippets.
People shouldn't be doing this in the first place. Existing abstractions are sufficient for building any software you want.
Software that doesn't need new abstractions also already exists. Everything you would need already exists and can be bought much more cheaply than you could build it yourself. Accounting software exists, Unreal Engine exists and many games use it, so why would you ever write anything new?
This isn't true, because the number of ways you can compose existing abstractions grows exponentially. The chance that existing software covers a specific permutation is small.
But if there is something off the shelf that you can use for the task at hand? Great! The stakeholders want it to do these other 3000 things before next summer.
Or rather: abstractions in your project form a dependency tree, where the nodes near the root are universal, e.g. C, Postgres, JSON, while the leaf nodes are abstractions peculiar to just your own project.
So some people are panicking and they are probably right, and some other people are rolling their eyes and they are probably right too. I think the real risk is that dumping out loads of boilerplate becomes so cheap and reliable that people who can actually fluently design coherent abstractions are no longer as needed. I am skeptical this will happen though, as there doesn't seem to be a way around the problem of the giant indigestible hairball (i.e., as you have more and more boilerplate it becomes harder to remain coherent).