what (hypothetically) happens when the cost to run the next giant llm exceeds the cost to hire a person for tasks like this?

Given current models can accomplish this task quite successfully and cheaply, I'd say that if/when that happens it would be a failure of the user (or the provider) for not routing the request to the smaller, cheaper model.

Similarly, it would be the failure of the user/provider if someone thought ordering food in was too expensive, but only because they were pricing a helicopter charter from the restaurant to their house.

Realtime LLM generation runs ~$15/million “words”. By comparison, a human writer at the beginning of a career typically earns ~$50k/million words, up to ~$1 million/million words for experienced writers. That’s roughly 3.5 to 5 orders of magnitude.
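The gap above can be checked directly. A quick sketch, using the illustrative figures from this comment (the dollar amounts are the commenter's ballpark estimates, not authoritative data):

```python
import math

# Ballpark figures from the comment above (illustrative assumptions):
llm_cost_per_million_words = 15        # ~$15 per million generated "words"
junior_writer_per_million = 50_000     # ~$50k per million words, early career
senior_writer_per_million = 1_000_000  # ~$1M per million words, experienced

# Orders of magnitude = log10 of the cost ratio
low_gap = math.log10(junior_writer_per_million / llm_cost_per_million_words)
high_gap = math.log10(senior_writer_per_million / llm_cost_per_million_words)

print(f"gap vs junior writer: ~{low_gap:.1f} orders of magnitude")   # ~3.5
print(f"gap vs senior writer: ~{high_gap:.1f} orders of magnitude")  # ~4.8
```

So even at the cheap end of human labor, inference would have to get thousands of times more expensive before the comparison tips.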

Inference costs generally have several orders of magnitude to fall before they approach raw human costs, and there will always be innovation driving the cost of inference down further. This also ignores that humans aren’t available 24/7, produce output of varying quality depending on what’s going on in their personal lives, respond more slowly than an LLM (lengthening how long a task takes), and may require more laborious editing than an LLM’s output. Basically, the hypothetical case seems unlikely ever to come to pass unless you’ve got a supercomputer AI doing things no human possibly could because of the amount of data it’s operating on (at which point it might exceed human cost, but no competitive human would exist).

the R&D continues
