This model is heavily quantized and the quality isn't great, but that's necessary because, just like everyone else except Nvidia and AMD, they shat the bed: they went for extremely fast compute and not much memory, assuming that models would plateau at a few billion parameters.

Last year 70B parameters was considered huge, and a good size to standardize around.

Today we have 1T-parameter models, and we know capability still scales linearly with parameter count.

So next year we might have 10T-parameter LLMs, and these guys will still be playing catch-up.

All that matters for inference right now is how many HBM chips you can stack, and that's it.
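
Rough back-of-envelope math on why capacity dominates (a minimal sketch; weights-only memory, ignoring KV cache and activations, and the 36 GB per-stack figure is an assumed HBM3e capacity, not a vendor spec):

    # Weights-only memory needed to hold a model in HBM.
    # Assumption: 36 GB per HBM stack (illustrative HBM3e figure).

    def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
        """GB needed to store the weights alone (no KV cache, no activations)."""
        return params_billion * 1e9 * bits_per_weight / 8 / 1e9

    HBM_STACK_GB = 36

    for params_b, bits in [(70, 16), (1000, 16), (1000, 4), (10000, 4)]:
        gb = weight_memory_gb(params_b, bits)
        stacks = gb / HBM_STACK_GB
        print(f"{params_b:>6}B params @ {bits:>2}-bit: {gb:>6.0f} GB  (~{stacks:.0f} stacks)")

Under those assumptions a 70B model at 16-bit is about 140 GB, a 1T model is about 2 TB at 16-bit but only about 500 GB at 4-bit, which is roughly why aggressive quantization becomes attractive on capacity-starved hardware.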


Cerebras doesn't normally quantize the models. Do you have more information about this?
