If we expand to include all porn, then we can predict:
- The demand for real porn will fall; if an LLM can produce porn tailored to the individual, that is going to eat into demand for porn made with real people.
- The gap between porn and real sexual activity will keep widening. If most people are able to conjure their perfect sexual partner and perfect fantasy situation at will, then real life is going to be a bit of a let-down. And, of course, porn sex is already not much like real sex, so presumably the two will drift even further apart [0].
- Women and men will consume different porn. This already happens, with limited crossover, but if everyone gets their perfect porn, it'll be rare to find something that appeals to all sexualities. Again, the trend will be to widen the current gap.
- Opportunities for sex work will both dry up and get more extreme. OnlyFans will probably die off. Actual live sex work will be forced to cater to people who can't get their kicks from LLM-generated perfect fantasies, so it will drift towards the more extreme end of the spectrum. This may all be a good thing, depending on your attitude to sex work in the first place.
I think we end up in a situation where the default sexual experience is alone with an LLM, and actual real-life sex is both rarer and weirder.
I'll keep thinking on it. It's interesting.
[0] though there is the opportunity to make this an educational experience, of course. But I very much doubt any AI company will go down that road.
I think that since children and adults will seek out sexual education through other people and through media no matter what we do, there is low-hanging fruit in putting even a little effort into producing healthy, educational sexual content for the whole spectrum of age groups. And when we can do that without exploiting anyone new, it does make you think, doesn't it?
It could be viewed as criminalising behaviour that we find unacceptable, even if it harms no-one and is done in private. Where does that stop?
Of course this assumes we can definitely, 100%, tell AI-generated CSAM from real CSAM. That may not be true, or may not stay true for very long.