What people seem to be against is progress, or at least its rate. We should certainly stop, think, and assess the repercussions of that rate -- and decide how to respond were it to threaten to destabilize society. But I don't think we should say, "Oh, A.I. -- this is where I will fight." We need to assess the consequences rationally and find answers that ensure social stability (we don't want famine or Hoovervilles).
In my industry -- software engineering -- AI is being blamed for a job market that tumbled a year before GPT entered the mainstream. There were no code-assist tools in 2022, yet jobs disappeared anyway. Still, it is easy to blame AI, because doing so spares us from examining the real causes -- and so no policy changes follow.
In SWE-land, we don't hire people for three reasons:
1. Better open source means you don't need to build it on your own.
2. More H-1B/H-4/OPT visa workers means you can have loyal, under-market-pay workers without attrition risk (even Trump, with all his power, couldn't tackle this lobby).
3. Offshoring -- US healthcare and benefits are too expensive, so it's easier to just send the work to other countries.
In 2020 the COVID-19 pandemic had a pronounced effect on the world economy. Stimulus money was handed to companies to keep them afloat through that time. Tech companies spent that new capital on massive hiring sprees, then laid off a lot of workers when they couldn't sustain them. A big reason you saw layoffs is that the hiring was funded by short-term capital from stimulus money that was never going to last.
These days, when a company tells you it is laying off good workers and replacing them with software that cannot fact-check its own output -- because its audience cannot tell the difference -- you should believe it, and consider whether that is really what you want the world to become.
It also rubs me the wrong way, since "AI" quite literally means everything from LLMs to how the ghosts in Pac-Man move.
Like, you don't hate AI. You hate the way it's being used. It would be weird to say "I hate that computers can transcribe spoken language to text". Or "I can't stand the ambient listening tool being used to treat my father's UTIs while he has Alzheimer's". Or even better, "I hate that my credit card company is trying to determine whether someone is fraudulently using it".
And what's worse is that it treats this as a relatively new problem. But rich people abusing the system to make more money at the cost of making others poorer is hardly a new thing.
Framing it as "AI" only lets the people making those decisions dodge responsibility. It's exactly the same move as justifying things as "market forces": it allows everything and holds nobody responsible for any of it.