johnnyanmac
The nature of how they store data makes it not okay in my book. Massage the data enough and you can generate something that looks like copyright infringement.
For closed models the storage problem isn't really a problem: they can be judged by what they produce rather than how they store it, since you don't have access to the actual data. Open-weight LLMs, however, are probably screwed. If enough of a work remains in the weights that it can be extracted (even without ever talking to the LLM), then the weight file itself constitutes a copy of the work that's being distributed. So enjoy these competent run-at-home models while you can; they're on track for extinction.
Why doesn’t this apply to humans? If I memorize something such that it can be extracted, have I violated the law? It’s only if I choose to allow such extraction to occur that I’m in violation, right?
So if I, or an LLM, simply don’t allow said extraction to occur, then memorization and copying are not against the law.
I think an important distinction here is distribution... did you tell someone else what you memorized? Is downloading a model akin to distributing that same information?
What if I don't download the model and just communicate with it, sort of like chatting with another human? That's not a copyright issue, right? After all, that's how most LLMs are deployed today.
I wonder if https://en.wikipedia.org/wiki/Illegal_number comes into play here.