Just as an LLM may be good at spitting out code that looks plausible but fails to work, diffusion models are good at spitting out art that looks shiny but is lacking in any real creativity or artistic expression.
In my experience, artistic milieus now sometimes even explicitly admit that the difference is who created the art.
"Human that suffered and created something" => high quality art
"The exact same thing but by a machine" => soulless claptrap
It's not about the end result.
A lot could be written about this but it's completely socially unacceptable.
Whether an analogous thing will happen with beautiful mathematical proofs or physical theories remains to be seen. I for one am curious, but as far as art is concerned, in my view it's done.
This has nothing to do with whether a human or AI created the art, and I don't think it's controversial to say that AI-generated art is derivative; the models are literally trained to mimic existing artwork.
Your "creativity" is just "high temperature" novel art done by the right person/entity.
This was already obvious to anyone paying attention. Innovation from the "wrong people" was just "sophomoric", "derivative", or some other euphemism, but the same thing from the right person would be a work of genius.
My experience is that they spit out reasonable-looking solutions that then don't even parse/compile.
They're OK for creating small snippets of code and for completion.
Anything past that, they suck.
It's actually hilarious that AI "solved" bullshitting and artistic fields much better and faster than, say, reasoning fields like math or programming.
It's the supreme irony. Even five years ago the conventional wisdom was that artistic fields were completely safe from the AI apocalypse.