> About the second point, I've been under the impression that because LLMs are trained on average code, they infer that the bugs and architectural flaws are desirable
This is really only true of base models that haven't undergone post-training. The big difference between ChatGPT and GPT-3 was OpenAI's instruct fine-tuning. Out of the box, language models behave the way you describe: ask them a question, and half the time they generate a list of more questions instead of an answer. The primary goal of post-training is to coerce the model into a state where it's more likely to output text as if it were a helpful assistant. The simplest version is text at the start of your context window like: "The following code was written by a meticulous senior engineer." After a prompt like that, the most likely next tokens are far less likely to be the model's imitation of sloppy code. Instruct fine-tuning does the same thing, but as a permanent modification to the model's weights.
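A minimal sketch of that prefix trick, assuming the Hugging Face `transformers` library and GPT-2 as a stand-in base model (no instruct tuning); the model name and prompts are just illustrative:

```python
from transformers import pipeline

# GPT-2 is a raw base model: it only continues text, it doesn't "answer".
generator = pipeline("text-generation", model="gpt2")

# Without a steering prefix, a question is often continued with more
# question-like text rather than an answer.
print(generator("How do I reverse a linked list?",
                max_new_tokens=40)[0]["generated_text"])

# Prepending a quality-anchoring prefix shifts the distribution of likely
# next tokens toward careful, senior-engineer-style code.
prefix = "The following code was written by a meticulous senior engineer:\n"
print(generator(prefix + "def reverse_linked_list(head):",
                max_new_tokens=40)[0]["generated_text"])
```

Instruct fine-tuning bakes this kind of conditioning into the weights, so you don't have to spend context-window tokens on it every time.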