kennethallen parent
Running LLMs will be slow, and training them is basically out of the question. You can get a Framework Desktop with similar memory bandwidth for less than a third of the price of this thing (though that isn't NVIDIA).
> Running LLMs will be slow and training them is basically out of the question
I think it's the reverse: the use case for these boxes is basically training and fine-tuning, not inference.