Realtime LLM generation costs roughly $15 per million “words”. By comparison, a human writer typically earns about $50k per million words at the beginning of a career, up to roughly $1 million per million words for experienced writers. That’s a gap of roughly 3.5 to 5 orders of magnitude.
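A quick back-of-envelope check of that gap, taking the figures above at face value (the dollar amounts are the comment’s own estimates, not measured data):

```python
import math

# Figures from the comparison above, all in $ per million words.
llm_cost = 15            # realtime LLM generation
human_low = 50_000       # early-career human writer
human_high = 1_000_000   # experienced human writer

# Orders of magnitude = log10 of the cost ratio.
gap_low = math.log10(human_low / llm_cost)    # ~3.5
gap_high = math.log10(human_high / llm_cost)  # ~4.8
print(f"{gap_low:.1f} to {gap_high:.1f} orders of magnitude")
```

So even at the low end, human-written words cost thousands of times more than LLM inference on these numbers.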
Inference costs have many orders of magnitude of headroom before they approach raw human costs, & there will always be innovation driving the cost of inference down further. This also ignores that humans aren’t available 24/7, produce output of varying quality depending on what’s going on in their personal lives, respond more slowly than an LLM (lengthening the time a task takes), & require more laborious editing than an LLM’s output might. Basically, the hypothetical case seems unlikely ever to become reality unless you’ve got a supercomputer AI doing things no human possibly could because of the amount of data it’s operating on (at which point it might exceed human cost, but no competitive human would exist).