I think this attitude is here to stay: people don't like reading something only to realize it was written by an LLM. That's only partly because of what the author describes here (a low value-to-word-count ratio). More fundamentally, if a human couldn't be bothered to write it, then there had better be a very good reason that I, as a human, am being bothered to read it.
This attitude provides a clue as to 1) the ways we're using LLMs now that will soon seem absurd (an LLM should never make writing longer, only shorter), and 2) the ways LLMs will be used after the novelty wears off, like interpreting loosely specified requests into computable programs and distilling overly long writing to maximize relevance.
My new saying is that if you want to build and use an AI process to automate something, the org should revisit whether the process itself is still worthwhile...