I guess we build a world where being catastrophically and confidently wrong about many things is completely normalized.
Don’t we already see a shift in this direction?
The C-suite is being sold the story that this new tech will let them fire x% of their workforce. The workforce says the tech is not capable of replacing people.
The C-suite doesn't have the expertise to understand why and how exactly the tech is not ready, but it does understand people and suspects that the workforce's warnings are just a self-preservation impulse.
The C-suite also gets huge bonuses if they reduce costs.
So they are very strongly encouraged to believe the story, and the ones actually doing the work, who know the difference, are left to watch the company's products get destroyed.
Well, we’ll build all sorts of APIs for LLMs to plug into.
What is your similar plan for LLMs?
Analogies always end somewhere, I’m just curious where yours does.