I've co-authored a book that a lot of the models seem to know about. The models consistently get the authors' names wrong and quote the material with errors. If the canonical representation of our work is now embedded within AI models, don't we deserve to have it quoted and represented correctly and fairly? If you asked a human who had read the book, there's a fair chance they would point you to the source material.
I do concede that the book is a distillation of material that is also available from other sources, but it also contains a lot of personal experience. That aspect seems to be lost in this new representation.
I am not saying that letting AI models read the material is wrong, but the hubris in the way models answer questions is annoying.