The solution is simple: understand what the tool actually does before declaring it impossible to use. But then again, reading documentation requires... what's the word... effort.
Anthropic's own documentation says Claude Code stores feedback transcripts for only 30 days and has "clear policies against using feedback for model training":
Privacy safeguards
We have implemented several safeguards to protect your data, including limited retention periods for sensitive information, restricted access to user session data, and clear policies against using feedback for model training.
What are you building that doesn't compete with Anthropic? (Using your brain competes with Anthropic.) That's a major legal risk.
How do we justify accepting the lack of privacy on Claude? Is it only viable for people doing FOSS? Are you really fine with them reading your business codebase to verify you aren't using your brain?
Given that it is logically impossible not to compete with general intelligence, and that I expect private GitHub repos to stay private, I feel forced to conclude that Claude Code is a nerd snipe / bad joke / toy.