Again, I’m not in the “AI” industry, so I don’t fully understand the economics and don’t run open models locally.

What’s the cost to run this stuff locally, and what type of hardware is required? When you say “virtually nothing,” do you mean that’s because you already have a $2k laptop or GPU?

Again, I’m only asking because I don’t know. Would these local models run OK on my 2016 Intel Mac Pro, or do I need to upgrade to the latest M4 chip with 32GB of memory for them to work correctly?


The large open-weights models aren't really practical to run locally (even with current hardware), but multiple providers compete to run inference for you, so it's reasonable to assume there is, and will continue to be, a functioning marketplace.
Basically yes: the useful models need a modern-ish GPU to get inference running at a usable speed. You can get smaller-parameter models (3B/7B) running on older laptops; they just won’t produce output at a useful speed.
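
To make the "small model on old hardware" point concrete, here's a minimal sketch of what running a quantized 7B model on a CPU-only machine looks like. It assumes the llama-cpp-python bindings (pip install llama-cpp-python) and a quantized GGUF file you've downloaded separately; the model filename and the parameter values are placeholders, not anything the commenters specified.

```python
# Minimal sketch: run a small quantized model locally on CPU with llama-cpp-python.
# The model path below is a placeholder; you'd download a quantized 7B GGUF
# (typically ~4 GB for a Q4 quantization) from a model hub first.
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # placeholder filename
    n_ctx=2048,   # context window; larger values need more RAM
    n_threads=4,  # CPU threads; on an older Intel machine this is the bottleneck
)

out = llm(
    "Explain what quantization does to a language model, in one sentence.",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```

On CPU-only hardware like a 2016 Intel Mac you'd expect output on the order of a few tokens per second at best, which is exactly the "runs, but not at a useful speed" situation described above; a modern GPU (or Apple Silicon with enough unified memory) is what makes it feel responsive.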
