On the face of it, no, it turns out it doesn't matter: plant cellulose is not toxic to humans, a certain amount of it is present in many processed foods, and that information isn't secret.
By the time it matters to people, it's at a level where you can tell it's happened: large, pointy chunks, e.g., or so much of it that the flavour or texture is ruined. Or toxic contaminants, albeit with the significant risk that one might only be able to tell at the point of suffering the consequences.
But if we modify the proposition a little, we get a statement about the possibility of a vegan's metaphorical sawdust being cut with ground beef. Now it's more likely to matter. By and large, dietary choices like that are based on some belief structure, so the presence of the unwanted ingredient could be considered an attack on the belief system.
When we move the metaphor back to AI-generated code, does this reveal a belief system at play? If the resulting program is not poor quality, but the use of AI is objectionable nevertheless, does that make a "no AI in software" stance a sort of veganism for code? (And can we coin a good term for that stance? I vote for hominism, because while I quite like anthropism, that leads to anthropic, which is obviously not going to work.)
Given there's a regulatory number for acceptable bug parts per million in confectionery, is there a hypothetical acceptable bytes per million for AI-generated code that can still be called hoministic?
I eat meat, but I'm one of those people who are ethically opposed to consuming AI content. An AI-vegan, you might say.
I've had a shouting match with someone who tried to spoon-feed an AI summary to me in a regular human conversation.
But. I know that people are going to sneak AI content into what I consume even if I do everything within my power to avoid it.
The question is straightforward to pose, if immensely complex to answer. Do I have a right not to be fed AI content? Is that even a practical goal? What if I can't tell?