Yes, I am also super interested in cutting the size of models.

However, in a few years today’s large models will run locally anyhow.

My home computer had 16 KB of RAM in 1983. My $20K research workstation had 192 MB of RAM in 1995. Now my $2K laptop has 32 GB.

There is still such incredible pressure on hardware development that you can be confident that today’s SOTA models will be running at home before too long, even without ML architecture breakthroughs. Hopefully we will get both the hardware gains and the breakthroughs.

Edit: the ’90s were exciting for compute-per-dollar improvements. That expensive Sun SPARC workstation I started my PhD with was obsolete three years later, crushed by a much faster $1K Intel Linux beige box. Linux installed from floppies…

