Some algorithms are inherently probabilistic (Bloom filters are a very common example; HyperLogLog is another). If we accept that probabilistic algorithms are useful, it's a short step to using LLMs (or other neural networks) for similar useful work.
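For anyone unfamiliar with the example, here's a minimal Bloom filter sketch in Python. The class name, bit size, and the way it derives multiple hash positions from one SHA-256 digest are all just illustrative choices, not a canonical implementation; the point is the probabilistic contract: false positives are possible, false negatives are not.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: might_contain() can return a false
    positive, but never a false negative."""

    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = 0  # a plain int used as a bit array

    def _positions(self, item):
        # Derive num_hashes bit positions by slicing one digest
        # (an illustrative scheme, not a recommended one).
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.num_hashes):
            chunk = digest[i * 4:(i + 1) * 4]
            yield int.from_bytes(chunk, "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        return all(self.bits & (1 << pos) for pos in self._positions(item))

bf = BloomFilter()
bf.add("hello")
print(bf.might_contain("hello"))  # True, guaranteed
print(bf.might_contain("world"))  # almost certainly False, but not guaranteed
```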
You can make the LLM/NN deterministic. That was never a problem.
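To make that concrete, here's a sketch using Hugging Face transformers with gpt2 as a stand-in model (any causal LM would do). Greedy decoding (`do_sample=False`) removes the sampling randomness entirely, so the same prompt on the same software/hardware stack produces the same output every run; bit-exact reproducibility across different GPUs or library versions is a separate caveat.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.manual_seed(0)  # irrelevant under greedy decoding, but harmless

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

inputs = tok("Probabilistic algorithms are", return_tensors="pt")
with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=20,
        do_sample=False,               # greedy: always pick the argmax token
        pad_token_id=tok.eos_token_id, # gpt2 has no pad token; silence the warning
    )
print(tok.decode(out[0], skip_special_tokens=True))  # identical on every run
```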
The whole notion of using LLM prompts as a replacement for code just seems utterly insane to me. It would be a massive waste of resources, since we'd be re-prompting the AI far more often than we need to. It must also be fun to debug, as it may or may not work correctly depending on how the model is feeling at that moment. Compilation should always be deterministic, given the same environment.