Now you don't really have to be precise at any level to get something 'working'. You may not be familiar with the generated language or libraries, but the output can look good enough (just as the assembly would have looked good enough). So, sure, if you are very familiar with the generated language and libraries, and you inspect every line of generated code, then maybe you will be OK. But often the reason you are using an LLM in the first place is that you don't understand bash, or don't use it frequently enough, to get it to do what you want. Well, the LLM doesn't understand it either. So that weird bash construct it emitted: did you read the documentation for it? You might have if you had to write it yourself.
In the end there could be code in there that nothing (machine or human) understands. The less hard-won experience you have with the target, and the more time-pressed you are, the more likely this is to occur.
The output of an LLM is determined by the weights (the parameters of the artificial neural network, estimated during training) together with a pseudo-random number generator (unless the generator's influence, controlled by the "temperature" setting, is set to 0).
That means LLMs behave as "processes" rather than algorithms, unlike any code generated from them, which is algorithmic (unless instructed otherwise; you could also tell an LLM to generate an LLM).
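To make the temperature point concrete, here is a minimal, illustrative Python sketch of how a decoder might pick the next token from the model's raw scores (the logits). The function name and the toy logit values are my own for illustration, and real inference stacks layer extras such as top-k/top-p sampling on top of this:

```python
import math
import random

def sample_next_token(logits, temperature):
    """Pick the next token id from raw model scores (logits).

    temperature > 0: sample from the softmax of the scaled logits;
    the pseudo-random draw makes output vary from run to run.
    temperature == 0: take the argmax, which is fully deterministic.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [2.0, 1.0, 0.5]               # toy scores for a 3-token vocabulary
print(sample_next_token(logits, 0))    # always 0
print(sample_next_token(logits, 1.0))  # varies between runs
```

At temperature 0 the choice degenerates to an argmax and the pseudo-random generator drops out entirely, which is why the output becomes repeatable (up to floating-point and batching effects in real deployments).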
What used to be a project building a CMS backend is now spent configuring a SaaS product and, if we are lucky, writing a few containers/serverless functions for integrations.
There are already AI-based products that can automate those integrations if given enough data samples.
Many believe AI will keep using current programming languages as a translation step, just like those assembly developers thought that compiling via assembly text generation, fed into an assembler, would still be around.
I'm confused by what you mean. Is this not the case?
You can naturally revert to the old ways by asking for the assembly manually and calling the assembler yourself (with GCC, for example, gcc -S emits the assembly text, which you can then feed to as).