Similar to how it would be the fault of the user/provider if someone thought ordering food in was too expensive, but only because they were looking at the cost of chartering a helicopter from the restaurant to their house.
Inference costs generally have many orders of magnitude to go before they approach raw human costs, & there's always going to be innovation driving the cost of inference further down. This also ignores that humans aren't available 24/7, that their output quality varies depending on what's going on in their personal lives, that their work can require more laborious editing than an LLM's, & that an LLM can respond quicker than a human, shrinking the time a task takes. Basically, the hypothetical seems unlikely to ever become reality unless you've got a supercomputer-scale AI doing things no human possibly could because of the amount of data it's operating on (at which point it might exceed the human cost, but a competing human wouldn't exist anyway).
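To put the "orders of magnitude" point in rough back-of-envelope terms, here's a minimal sketch; every number in it (per-token price, tokens per task, hourly rate, hours per task) is an illustrative assumption, not a quote from any provider or wage survey:

```python
# Back-of-envelope comparison of LLM inference cost vs. human labor cost per task.
# All figures below are illustrative assumptions, not real prices or wages.

INFERENCE_COST_PER_MTOK = 5.00   # assumed $ per million output tokens
TOKENS_PER_TASK = 2_000          # assumed output tokens for one drafting task
HUMAN_HOURLY_RATE = 40.00        # assumed fully loaded hourly cost of a human
HUMAN_HOURS_PER_TASK = 0.5       # assumed human time for the same task

llm_cost = INFERENCE_COST_PER_MTOK * TOKENS_PER_TASK / 1_000_000
human_cost = HUMAN_HOURLY_RATE * HUMAN_HOURS_PER_TASK

print(f"LLM cost per task:   ${llm_cost:.4f}")
print(f"Human cost per task: ${human_cost:.2f}")
print(f"Ratio (human / LLM): {human_cost / llm_cost:,.0f}x")
```

With those placeholder numbers the gap is roughly a few thousand to one; you can move any of the assumptions around, but they'd have to move a very long way before the per-task costs even get close.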