kennethallen
Running LLMs will be slow and training them is basically out of the question. You can get a Framework Desktop with similar bandwidth for less than a third of the price of this thing (though that isn't NVIDIA).

embedding-shape
> Running LLMs will be slow and training them is basically out of the question

I think it's the reverse: the use case for these boxes is basically training and fine-tuning, not inference.

kennethallen OP
The use case for these boxes is a local NVIDIA development platform before you do your actual training run on your A100 cluster.