Don't feel sorry for me; I've used vibe coding to build plenty of things, and I still know how to program, so I'll live if it goes away.
It's gotten so bad that I actively avoid talking about this in circles like Hacker News, because people get so aggressively discredited and ridiculed, as if they have no idea what they're doing or are shills for big AI companies.
I know what I'm doing, and I actively try to help friends and co-workers use LLMs in a sustainable way, understanding their limitations and the dangers of letting them loose without staying in the loop. It's sad that I can't talk about this without fear of being attacked, especially in communities like Hacker News that I previously valued as being professional and open compared to other modern social media.
It's not, but it does matter. LLMs, being next-word guessers, perform differently with different inputs. It's not hard to imagine a feedback loop where bad code generates worse code and good code generates more good code.
My ability to get good responses from LLMs has tracked directly with how well I write my code and docstrings, and with running everything through an autoformatter.
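As a rough illustration (a made-up example, not from any real project): a function with type hints, a docstring, and autoformatted layout gives the model far more to anchor on than a bare one-liner, and the completions it suggests for the rest of the module tend to follow suit.

    from datetime import datetime

    def parse_iso_timestamps(lines: list[str]) -> list[datetime]:
        """Parse ISO-8601 timestamps from raw log lines, skipping lines that fail to parse."""
        parsed = []
        for line in lines:
            try:
                parsed.append(datetime.fromisoformat(line.strip()))
            except ValueError:
                continue  # ignore malformed lines rather than raising
        return parsed

The same function written as an undocumented one-liner with cryptic names tends to pull noticeably sloppier suggestions out of the model.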
The only thing standing between your LLM and bad code is the quality of the prompt, including the context you supply and the vendor's hidden OEM prompt.
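To make that concrete, here's roughly what the model actually receives when you call it through an API. This is a minimal sketch assuming the OpenAI Python client; the model name, file path, and instructions are placeholders, not anyone's production setup. The system message plays the same role as the hidden vendor prompt in a chat UI or IDE plugin, and the pasted-in module is the "context" doing most of the steering.

    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # hypothetical file: the existing code you want the model's output to match
    context = Path("billing/invoice.py").read_text()

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            # this system message is the part you control; the vendor layers its own hidden prompt on top
            {"role": "system", "content": "You are a careful Python reviewer. Match the existing style and conventions."},
            {"role": "user", "content": f"Here is the current module:\n\n{context}\n\nAdd a function that prorates an invoice by days used."},
        ],
    )
    print(response.choices[0].message.content)

Everything the model "knows" about your codebase is whatever ends up in that messages list, which is why the quality of what you put there dominates the quality of what comes back.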