If I were tutoring a junior developer and he accidentally deleted the whole source tree or something equally egregious, that would be a milestone learning point in his career, and he would never do it again. But if the LLM does it accidentally, it will be apologetic, yet after the next context window clear it has the same chance of doing it again.
I think if you set an LLM off to do something and it makes an "egregious mistake" in the implementation, and then you adjust the system prompt to explicitly guard against that (or steer it toward a different implementation) and restart from scratch, yet it makes the exact same "egregious mistake", then you need to try a different model/tool than the one you've been using.
It's common with smaller models, or with bigger models that are heavily quantized, that they aren't great at following system/developer prompts, but that really shouldn't happen with the available SOTA models. I haven't had something ignored like that in years by now.
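To make "explicitly guard against that" concrete, here's a minimal sketch of what I mean, assuming an OpenAI-style chat API; the model name, the guard wording, and the user task are just placeholders, not a recommendation:

    # Minimal sketch: restart the task with an explicit guard in the
    # system prompt. Model name and guard text are illustrative only.
    from openai import OpenAI

    client = OpenAI()

    system_prompt = (
        "You are a coding assistant working inside an existing repository. "
        "Never delete or overwrite files you did not create in this session. "
        "If a change would remove existing code, stop and ask first."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever SOTA model you're testing
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "Refactor the build script as discussed."},
        ],
    )
    print(response.choices[0].message.content)

If a guard like that still gets ignored on a fresh start, that's the point where I'd switch models rather than keep piling on more instructions.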
But is this more like steel production or piloting (a few highly trained experts stay in the loop), or more like warehouse work (automation stripped out most of the skills, like driving or inventory work)?
Except for one point: junior developers can learn from their egregious mistakes; LLMs can't, no matter how strongly worded their system prompt is.
In a functional work environment, you build trust with your coworkers little by little. The pale equivalent with LLMs is improving system prompts and writing more and more AI directives that may or may not be followed.