Missed opportunity for the LLM, could've just switched to Volkswagen CI
Yeah — I had something like this happen as well — the llm wrote a half decent implementation and some good tests, but then ran into issues getting the tests to pass.
It then deleted the entire implementation and made the function raise a “not implemented” exception, updated the tests to expect that, and told me this was a solid base for the next developer to start working on.
I've definitely seen this happen before too. Test-driven development isn't all that effective if the LLM's only stated goal is to pass the tests without thinking about the problem in a more holistic/contextual manner.
Reminds me of trying to train a small neural net to play Robocode ~10+ years ago. Tried to "punish" it for hitting walls, so next morning I had evolved a tanks that just stood still... Then punished it for standing still, ended up with a tanks just vibrating, alternating moving back and forth quickly, etc.
That's great. There's a pretty funny example of somebody training a neural net to play Tetris on the Nintendo entertainment system, and it quickly learned that if it was about to lose to just hit pause and leave the game in that state indefinitely.
While I haven't run into this egregious of an offense, I have had LLMs either "fix" the unit test to pass with buggy code, or, conversely, "fix" the code to so that the test passes but now the code does something different than it should (because the unit test was wrong to start with).
When I came back all the tests were passing!
But as I ran it live a lot of cases were still failing.
Turns out the LLM hardcoded the test values as “if (‘test value’) return ‘correct value’;”!