I fully expect LLM results to start including ads, but because of the competition I hope/believe the incentives are much better than they are for, say, Google's search monopoly.
It could be more insidious, though.
We'll probably start sending prompts to multiple models and comparing the results with lower-power local models.
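To make that concrete, here is a minimal sketch of what such a cross-check could look like. Everything in it is hypothetical: the model ids and the query_hosted_model / query_local_model helpers are placeholders you'd swap for a real API client and a real local runtime.

```python
# Rough sketch (not a real tool): fan a prompt out to several hosted models,
# then ask a small local model where the answers materially disagree.
# Model names and the query_* helpers below are placeholders, not real APIs.

from typing import Callable

HOSTED_MODELS = ["model-a", "model-b", "model-c"]  # hypothetical model ids

def query_hosted_model(model: str, prompt: str) -> str:
    """Placeholder for a call to a hosted LLM API."""
    raise NotImplementedError("wire up your provider's client here")

def query_local_model(prompt: str) -> str:
    """Placeholder for a lower-power local model run on your own machine."""
    raise NotImplementedError("wire up your local runtime here")

def cross_check(prompt: str,
                hosted: Callable[[str, str], str] = query_hosted_model,
                local: Callable[[str], str] = query_local_model) -> dict:
    """Collect answers from several hosted models and have the local model
    flag claims or recommendations that appear in only one of them."""
    answers = {m: hosted(m, prompt) for m in HOSTED_MODELS}

    comparison_prompt = (
        "Compare these answers to the same question. "
        "List any claims or recommendations that appear in only one answer.\n\n"
        + "\n\n".join(f"--- {m} ---\n{a}" for m, a in answers.items())
    )
    return {"answers": answers, "divergences": local(comparison_prompt)}
```

The point of using a local model for the comparison step is that it's the one piece of the pipeline you control end to end.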
I would never describe the output I've seen from LLMs as "organic".
The real issue isn't that LLMs lie; it's that they emphasize certain truths over others, shaping perception without saying anything factually incorrect. That kind of manipulation is harder to detect than traditional ads or SEO spam.
Open-source LLMs and transparency in prompt+context will help a bit, but long-term, we probably need something like reputation scores for LLM output, tied to models, data sources, or even the prompt authors.
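Purely to illustrate the "reputation score" idea, here's a toy sketch. Every field name and the scoring rule are invented for the example; a real scheme would need verifiable provenance metadata rather than self-reported fields.

```python
# Toy sketch of a reputation record for LLM output, keyed by provenance.
# All names and the update rule are made up for illustration only.

from dataclasses import dataclass, field

@dataclass
class OutputProvenance:
    model_id: str            # which model produced the text
    data_sources: list[str]  # claimed sources / citations
    prompt_author: str       # who issued the prompt, if disclosed

@dataclass
class ReputationLedger:
    # naive running scores in [0, 1]; every model starts at a neutral 0.5
    model_scores: dict[str, float] = field(default_factory=dict)

    def update(self, prov: OutputProvenance, verified_ok: bool) -> None:
        """Nudge a model's score up or down after one of its claims is fact-checked."""
        score = self.model_scores.get(prov.model_id, 0.5)
        delta = 0.05 if verified_ok else -0.10  # penalize failed checks harder
        self.model_scores[prov.model_id] = min(1.0, max(0.0, score + delta))
```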
When LLM-generated content is pervasive, and the training data for new LLMs is coming from the output of prior LLMs, we're going to be in for some fun. Validation and curation of information are soon going to be more important than they've ever been.
But I don't think there'll be too much intentional manipulation of LLMs, given how decentralized the LLM landscape already is. It's going to be difficult enough getting consistency with valid info; manipulating the entire ecosystem with deliberately contrived info is going to be very challenging.
In the near future, companies will probably be able to pay a lot of money to have their products come out better in comparisons. LLMs are smart enough to make the result seem "organic": all verifiable information will be true and supported by references; the manipulation will purely be a matter of framing and emphasis.