gchamonlive
Open weights alone are not enough; we also need control of the dataset and the training pipeline.

An average user like me couldn't run training pipelines without serious infrastructure, but it's still important to understand how the data is used and how the models are trained, so that we truly own the model and can assess its biases openly.
tsimionescu
Good luck understanding the biases in a petabyte of text and images and video, or whatever the training set is.
gchamonlive OP
Do you disagree it's important to have access to the data, ease of assessment notwithstanding?
tsimionescu
I view it as more or less irrelevant. LLMs are fundamentally black boxes. Whether you run the black box locally or use it remotely, whether you train it yourself or use a pretrained version, whether you have access to the training set or not, it's completely irrelevant to control. Using an LLM means giving up control and understanding of the process. Whether it's OpenAI or the training data-guided algorithm that controls the process, it's still not you.

Now, running local models instead of using them as a SaaS has a clear purpose: the price of your local model won't suddenly increase tenfold once you start depending on it, like the SaaS models might. Any level of control beyond that is illusory with LLMs.

gchamonlive OP
I, on the other hand, think it's irrelevant whether a technology is a black box or not. If it's supposed to fit the open-source/FOSS model of the original post, having access to the precursors is just as important as having access to the weights.

It's fine for models to have open weights and closed data, but IMHO that only barely fits the open-source model.

tsimionescu
The point of FOSS is control. You want to have access to the source, including build instructions and everything, in order to be able to meaningfully change the program, and understand what it actually does (or pay an expert to do this for you). You also want to make sure that the company that made this doesn't have a monopoly on fixing it for you, so that they can't ask you for exorbitant sums to address an issue you have.

An open-weight model addresses the second part of this, but not the first. However, even an open-weight model with all of the training data available doesn't fix the first problem. Even if you somehow got access to enough hardware to train your own GPT-5 based on the published data, you still couldn't meaningfully fix an issue you have with it, not even if you hired Ilya Sutskever and Yann LeCun to do it for you: these are black boxes that no one can actually understand at the level of a program or device.

visarga
> having access to precursors is just as important as having access to the weights

They probably can't give you the training set, as that would amount to publishing infringing content. Where would you store it, and what would you do with it anyway?

gchamonlive OP
If it's infringing content, it's not open and it's not FOSS. For a fully open stack for local LLMs you need open data too.

scottyah
It is an interesting question. Of course everyone should have equal access to the data in theory, but I also believe nobody should be forced to offer it for free to others, and I don't think I want to spend tax money having the government host and distribute that data.

I'm not sure how everyone can have access to the data without someone else taking on the burden of providing it.

gchamonlive OP
I think torrents are a very good way to redistribute this type of data. You can even selectively sync and redistribute parts of it.

I'm also not saying anyone should be forced to disclose training data. I'm only saying that an LLM that's only open-weight and not open data/pipeline barely fits the open-source model of the stack mentioned by the OP.
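The selective-sync point rests on how BitTorrent structures a file: it's split into fixed-size pieces, each with a published SHA-1 hash, so a peer can fetch and verify any subset of a dataset without downloading the whole thing. A minimal sketch of that piece-hashing idea (the tiny piece size and sample data are illustrative, not real torrent parameters):

```python
import hashlib

PIECE_SIZE = 4  # illustrative; real torrents use pieces of 256 KiB to 16 MiB

def piece_hashes(data: bytes, piece_size: int = PIECE_SIZE) -> list[str]:
    """SHA-1 of each fixed-size piece, as stored in a torrent's metainfo."""
    return [hashlib.sha1(data[i:i + piece_size]).hexdigest()
            for i in range(0, len(data), piece_size)]

def verify_piece(piece: bytes, index: int, hashes: list[str]) -> bool:
    """A peer can check any single piece against the published hash list,
    independently of the rest of the file."""
    return hashlib.sha1(piece).hexdigest() == hashes[index]

dataset = b"open training data, piecewise"
hashes = piece_hashes(dataset)

# Fetch only piece 2 and verify it in isolation:
piece = dataset[2 * PIECE_SIZE:3 * PIECE_SIZE]
print(verify_piece(piece, 2, hashes))  # True
```

This is why partial redistribution works: trust comes from the hash list, not from whoever served the bytes, so anyone holding a verified subset can re-seed it to others.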
