Obviously there are higher-order effects, but just as we wouldn't expect Homo erectus to stop playing with stone tools because they'd disrupt their society (which of course they did), I don't understand why we should decide to halt technological progress now.
What we see happening in the workforce with AI isn't a reduction in labor. We see firms making fewer workers do more work and laying off the rest, as in this case, where workers are talking about "hardly sleeping". Similarly, in my org, workers aren't expected to do any less work since adopting AI tools. This case suggests quality is down as well, but maybe that's subjective.
You mentioned earlier that AI makes labor weaker, but I really don't see a case for it. If anything, given how relatively cheap GenAI is, it should allow most anyone with artistic sensibilities and skill in the area who is willing to leverage it to go into business themselves with minimal capital. Why should GenAI give power to employers, especially if they're just paying another company for the AI models?
That said, I agree with you that AI is not going to lead to people doing less work, in the same way that computers didn't lead to people doing less work.
The entire premise is also CURRENTLY built on copyright infringement, which makes any material produced by an LLM legally questionable. Unless the provider you are using has a clause saying they will pay all your legal bills, you should NOT be using an LLM at work. This includes software development, btw. Until the legal issue is settled once and for all, any company using an LLM may risk becoming liable for copyright infringement, and possibly any individual, depending on the setup.
I get that LLMs have problems.
I was recently looking into the differences between a flash drive, an SSD, and an NVMe drive. Flash memory is one of the technologies I had in mind when I wrote my comment.
Flash has a bunch of problems. A cell can only be written so many times before it dies, so flash needs a wear-leveling layer: the controller exposes a smaller virtual storage space mapped onto the larger physical one, distributes writes evenly across the actual storage, and routes around dead cells as they appear.
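To make the idea concrete, here's a toy sketch of that wear-leveling layer. Everything here is illustrative (the class name, block counts, and write limit are made up, and a real flash translation layer is far more sophisticated), but it shows the core trick: writes to one virtual block get spread across many physical blocks.

```python
# Toy wear-leveling layer: a small virtual address space mapped onto a
# larger physical one. Names and sizes are illustrative, not a real FTL.
class WearLeveler:
    def __init__(self, physical_blocks=8, virtual_blocks=4, max_writes=3):
        self.max_writes = max_writes          # writes before a cell "dies"
        self.wear = [0] * physical_blocks     # write count per physical block
        self.data = [None] * physical_blocks
        self.mapping = {}                     # virtual block -> physical block
        self.virtual_blocks = virtual_blocks

    def write(self, vblock, value):
        # Remap every write to the least-worn live block that isn't in use,
        # so no single physical block absorbs all the traffic.
        in_use = set(self.mapping.values())
        candidates = [p for p in range(len(self.wear))
                      if p not in in_use and self.wear[p] < self.max_writes]
        if not candidates:
            raise IOError("drive worn out")
        target = min(candidates, key=lambda p: self.wear[p])
        self.mapping[vblock] = target
        self.wear[target] += 1
        self.data[target] = value

    def read(self, vblock):
        # Reads go through the mapping, so the host never sees the remapping.
        return self.data[self.mapping[vblock]]

wl = WearLeveler()
for i in range(10):       # hammer a single virtual block
    wl.write(0, i)
print(wl.read(0))         # -> 9
print(max(wl.wear))       # -> 2; ten writes spread out, no block hits its limit
```

Without the indirection, ten writes to block 0 would have killed that cell at write three; with it, the wear is spread so evenly that no block exceeds two writes.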
NVMe extends that with a protocol supporting very deep queues, which let the controller reorder commands so that throughput is maximized, making NVMe drives more performant. Virtual address space + reordered operations = successful HDD replacement.
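A toy model of why queue depth matters. This only captures the parallelism half of the story (reordering for locality is harder to sketch), and the channel count and timings are invented, but it shows the shape of the win: with many commands in flight, the controller can keep every flash channel busy at once.

```python
# Toy throughput model: a deep command queue lets the controller spread
# operations across independent flash channels. Numbers are illustrative.
CHANNELS = 4    # hypothetical parallel flash channels in the drive
OP_TIME = 1.0   # time units per operation on one channel

def completion_time(num_ops, queue_depth):
    # With a shallow queue the host effectively waits for each op before
    # issuing the next; with a deep queue, ops fill all idle channels.
    parallelism = min(queue_depth, CHANNELS)
    batches = -(-num_ops // parallelism)  # ceiling division
    return batches * OP_TIME

print(completion_time(32, queue_depth=1))   # -> 32.0 (serial issue)
print(completion_time(32, queue_depth=64))  # -> 8.0  (all channels busy)
```

Past `queue_depth == CHANNELS` the toy model stops improving, which mirrors the real-world observation that NVMe's huge queues mostly pay off on drives with lots of internal parallelism.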
My point here is that LLMs are young, and that we're going to compose them into larger workflows that allow for predictable results. But that composition, and the trial and error behind it, take time. We don't yet have the remedies necessary to make up for the weaknesses of LLMs. I think we will as we explore more, but the technology is still young.
As for copyright infringement, I think copyright has been broken for a long time. It is too brittle in its implementation. Google did essentially the same thing as OpenAI when they indexed webpages, but we all wrote it off as fair use because traffic was directed to the website (presumably to aggregate ad revenue). Now that traffic is diverted from the website, everyone has an issue with the crawling. That is not a principled argument, but rather an argument centered around "Do I get paid?". I think we need to be more honest with ourselves about what we actually believe.
I'm not convinced that generative AI video will _ever_ hit the 'acceptable' threshold, at least with current tech. Fundamentally it lacks a world model, so you get all this nightmarish _wrongness_.
Yes.
> Obviously there are higher-order effects, but just as we wouldn't expect Homo erectus to stop playing with stone tools because they'd disrupt their society (which of course they did), I don't understand why we should decide to halt technological progress now.
The difference is the relationship of that technology to the individual and the masses. When a Homo erectus invented a tool, he and every member of his species (who learned of it) directly benefited from it, but capitalism has broken that link. Now Homo sapiens can invent technologies that greatly benefit a few while being broadly harmful to individuals. AI is likely one of those technologies, as it's on a direct path to eliminating broad classes of jobs with no replacement.
This situation would be very different if we either had some kind of socialism or a far more egalitarian form of capitalism (e.g. with extremely diffuse and widespread ownership).
P.S. Include the technology built to kill enemy labor in large numbers. Start with the atomic bomb in Japan... that saved a lot of labor, right?
EDIT: It's worth saying that humans have been killing each other from the dawn of humanity. Studies on both present-day and historical tribal societies generally show a significantly higher homicide rate than what we're used to seeing in even our most dangerous cities and across our biggest wars.
A bit old, but extensive numbers - https://ourworldindata.org/ethnographic-and-archaeological-e...
This is just US propaganda. These numbers come from the fact that the US was "anticipating" a ground invasion of Japan or vice versa.
Which, to be clear, was always a made-up alternative. By the time the atomic bomb was dropped, Japan had already tried to surrender multiple times, both to us and to the Soviets. The reality is we just wanted to drop an atomic bomb.
But I don't understand why you put the ground invasion plans in quotes - are you claiming that all the effort spent on Operation Downfall[0] was just a misdirection intended to fool everyone, including the high-ranking officers involved in the planning?
Things did work out for Japan in the long run, but I still believe a conditional surrender + no atomic bombs should have been the solution. The US was very greedy with its demands, and I think a large part of that is our history of militarism and our desire to use new weaponry. The atomic bomb was already made, and I think realistically we were just itching to use it.
Of course, real actors have unions and part of the point of AI is to make labor weaker.