| Making up libs
This is an attack vector. Probe the models for commonly hallucinated libraries (on npm or github or wherever) and then go and create those libraries with malicious code.
> Providing a stub and asking you to fill in the code
This is a perennial issue in chatbot-style apps, but I've never had it happen in Claude Code.
If humans can say "The proof is left as an exercise for the reader", why can't LLMs :)
- Removing problematic tests altogether
- Making up libs
- Providing a stub and asking you to fill in the code