That's going to depend on how small the model can be made, and how much you are using it.

If we assume that running locally means running on a 500W consumer GPU, then the electricity to run it non-stop 8 hours a day, 20 days a month (i.e. "business hours") would cost around $10-20.
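For reference, a rough sketch of that arithmetic (assuming a residential electricity rate somewhere in the $0.12-0.25/kWh range, which varies a lot by region):

    # Rough electricity estimate for a 500W GPU run "business hours".
    # The $/kWh rates below are assumptions; plug in your own.
    watts = 500
    hours_per_day = 8
    days_per_month = 20
    kwh_per_month = watts / 1000 * hours_per_day * days_per_month  # 80 kWh
    for rate in (0.12, 0.25):
        print(f"${kwh_per_month * rate:.2f}/mo at ${rate}/kWh")
    # -> $9.60/mo at $0.12/kWh, $20.00/mo at $0.25/kWh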

This is about the same as OpenAI's or Anthropic's $20/mo plans, but for all-day coding you would want their $100 or $200/mo plans, and even those will throttle you and/or push you onto metered pricing when you hit plan limits.

Neither the $20 nor the $200 plan covers any API costs.

At $0.17 per million tokens, the smallest GPT model is still faster and more powerful than anything you can run locally, and the API cost comes in below the electricity it would take to run an equivalent model yourself, even if you could.
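To make that concrete, here is a back-of-envelope sketch; the 50 tokens/s local throughput and $0.15/kWh rate are assumptions, not measurements:

    # Compare API price vs local electricity for 1M generated tokens.
    api_price_per_mtok = 0.17        # $/1M tokens for the cheapest GPT tier
    local_tokens_per_s = 50          # assumed local generation speed
    watts, rate = 500, 0.15          # assumed GPU draw and $/kWh

    hours = 1_000_000 / local_tokens_per_s / 3600    # ~5.6 hours of generation
    electricity = watts / 1000 * hours * rate        # ~$0.42 of electricity
    print(f"API: ${api_price_per_mtok:.2f}  local electricity: ${electricity:.2f}")

Under these assumptions the metered API comes out cheaper on energy alone, before even counting the cost of the hardware.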
