Yeah, this is why I'm having a hard time taking many programmers seriously on this one.

As a general class of folks, programmers and technologists have been putting people out of work via automation for as long as we've existed. We justified it in many ways, but generally: "if I can replace you with a small shell script, your job shouldn't exist anyway and you can do something more productive instead". These same programmers would look over the shoulders of "business process" people and see how folks did their jobs - "stealing" the workflows and processes so they could be automated.

Now that programmers' jobs are on the firing block, all of a sudden automation is bad. It's hard to sort through genuine vs. self-serving concern here.

It's more or less a case of what comes around goes around to me so far.

I don't think LLMs are great or problem-free - or even that the training data scraped from the Internet was obtained morally. I just find the reaction to be incredibly hypocritical.

Learn to prompt, I guess?

If we're talking about the response from the OP, people of his caliber are not in any danger of being automated away; it was an entirely reasonable revulsion at an LLM in his inbox in a linguist's skinsuit, a mockery of a thank-you email.

I don't see the connection to handling the utilitarianism of implementing business logic. Would anyone find a thank-you email from an LLM to be of any non-negative value, no matter how specific or accurate in its acknowledgement it was? Isn't it beyond uncanny valley and into absurdism to have your calculator send you a Christmas card?

To be clear, my comment was in no way intended towards Rob Pike or anyone of his stature and contributions to the technology field.

It was definitely a less-than-useful comment directed towards the tech bro types that came later when the money started getting good.

People of his caliber are not being automated away, but people pay less attention to him and don't worship him like before, so he is butthurt.

Are people here objecting to Gen AI being used to take their jobs? I mainly see people objecting to the social, legal, and environmental consequences.

What's the problem with that, anyway? I object to training a machine to take/change my job [building them, telling them what to do]. What's more, they want me to pay? Hah. This isn't a charity. I either strike it rich, retire while the getting is good, or simply work harder for nothing. Hmm. I think I'll keep not displacing people, actually. Myself included.

To GP: not all of us who automate go for low hanging fruit, I guess.

To the peer calling this illegitimate [or anyone, really]: without the assistance of an LLM, please break down the foul nature of... let me check my notes, gainful employment.

> Are people here objecting to Gen AI being used to take their jobs?

Yes, even if they don't say it. The other objections largely come from the need to sound more legitimate.

Let me get this straight. You think Rob Pike is worried about his job being taken? Do you know who he is?

To anyone who only looks at the numbers (and who might as well be an AI), ignorant of any authority, he would look very overpaid and like too much of a critical risk factor.

This is a stance that violates the guidelines of HN.

> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.

https://news.ycombinator.com/newsguidelines.html

Gen AI taking programmers' jobs is 20 years away.

At the moment, it's just for taking money from gullible investors.

It's eating into business letters, essays, and indie art generation, but programming is a really tough nut to crack.

It's taking away programmers' jobs today. I know of multiple small companies where hires were not made or contractors not engaged, simply due to the additional productivity gained by using Gen AI. This is for mundane "trivial" work that is needed to glue stuff together in the fields those small companies operate within.

It's like how "burger flippers" didn't go extinct due to automation. The burger joint simply mechanised and automated the parts that made sense, and now a lunch shift is handled by 5 employees instead of 20.

They will not replace the calibre of folks like Rob Pike for quite some time, perhaps (and I'd bet on) never.

I will grant you that the hype does not live up to the reality. The vast majority of jobs being taken from US developers are simply being offshored with AI as an excuse - but it is an actual real phenomenon I've personally witnessed.

But is it meaningfully different from the outsource to India craze?

That certainly took some programmers' jobs away in the short term. That doesn't mean it pans out in the long term.

Must be nice to read people's minds and use that info in an argument. Tough to beat.

> programmers and technologists have been putting people out of work

I think it's more a case of causing people to do different work. About 75% of the workforce used to be in agriculture, but tractors and the like reduced that to 2% or so. I'm not sure the people working as programmers would be better off if that hadn't happened and they were digging potatoes instead.

I wouldn't be angry if current AI _only_ automated programmers/software engineers. I'd be worried and stressed out, but not angry.

But it also automates _everything else_. Art and self-expression, most especially. And it did so in a way that is really fucking disgusting.

Well put. It's not the automation of programming that bothers me, it's the automation of what it means to be human.

Very elegantly put.

> Now that programmers' jobs are on the firing block, all of a sudden automation is bad. It's hard to sort through genuine vs. self-serving concern here.

The concern is bigger than developer jobs being automated. The stated goal of the tech oligarchs is to create AGI so most labor is no longer needed, while CEOs and board members of major companies get unimaginably wealthy. And their digital gods allow them to carve up nations into fiefdoms for the coming techno fascist societies they envision.

I want no part of that.

I think there is a difference between automating “things” (as you put it) and getting to the point where people are on stage suggesting that the government becomes a “backstop” to their investments in automation.
I can imagine AI still being so useless at creating real value in 100 years that their parent companies have to resort to circular deals to pump up their stock.

I always wonder if the general sentiment toward gen AI would be positive if we had wealth redistribution mechanisms in place, so everyone would benefit. Obviously that's not the case, but if you consider the theoretical, do you think your view would be different?

To be honest, I'm not even sure I'm fully on board with the labor theft argument. But I certainly don't think generative AI is such an unambiguous boon for humankind that we should ignore any possible negative externalities just to advance it.
> "To someone who believes that AI training data is built on the theft of people's labor..."

i.e. people who are not hackers. Many (most?) hackers have been against the idea of copyright and intellectual property from the beginning. "Information wants to be free," after all.

Must be galling for people to find themselves on the same side as Bill Gates and his Open Letter to Hobbyists in 1976 which was also about "theft of people's labor".

> believes that AI training data is built on the theft of people's labor

I mean, this is an ideological point. It's not based in reason, won't be changed by reason, and is really only a signal to end the engagement with the other party. There's no way to address the point other than agreeing with them, which doesn't make for much of a debate.

> an 1800s plantation owner saying "can you imagine trying to explain to someone 100 years from now we tried to stop slavery because of civil rights"

I understand this is just an analogy, but for others: people who genuinely compare AI training data to slavery will have their opinions discarded immediately.

We have clear evidence that millions of copyrighted books have been used as training data because LLMs can reproduce sections from them verbatim (and emails from employees literally admitting to scraping the data). We have evidence of LLMs reproducing code from github that was never ever released with a license that would permit their use. We know this is illegal. What about any of this is ideological and unreasonable? It's a CRYSTAL CLEAR violation of the law and everyone just shrugs it off because technology or some shit.

You keep conflating different things.

> We have evidence of LLMs reproducing code from github that was never ever released with a license that would permit their use. We know this is illegal.

What is illegal about it? You are allowed to read and learn from publicly available unlicensed code. If you use that learning to produce a copy of those works, that is infringement.

Meta clearly engaged in copyright infringement when they torrented books that they hadn't purchased. That was already infringement before they started training on the data. That doesn't make the training itself infringement though.

> Meta clearly engaged in copyright infringement when they torrented books that they hadn't purchased. That was already infringement before they started training on the data. That doesn't make the training itself infringement though.

What kind of bullshit argument is this? Really? Works created using illegally obtained copyrighted material are themselves considered to be infringing as well. It's called derivative infringement. This is both common sense and law. Even if not, you agree that they infringed on copyright of something close to all copyrighted works on the internet and this sounds fine to you? The consequences and fines from that would kill any company if they actually had to face them.

> What kind of bullshit argument is this? Really? Works created using illegally obtained copyrighted material are themselves considered to be infringing as well.

That isn't true.

The copyright to a derivative work is owned by the copyright holder of the original work. However, using illegally obtained copies to create a fair-use transformative work does not taint your copyright in that work.

> Even if not, you agree that they infringed on copyright of something close to all copyrighted works on the internet and this sounds fine to you?

I agree that they violated copyright when they torrented books and scholarly articles. I don't think that counts as "close to all copyrighted works on the Internet".

> The consequences and fines from that would kill any company if they actually had to face them.

I don't actually agree that copyright infringement that causes no harm should be met with such steep penalties. I didn't agree when it was the RIAA doing it, and even though I don't like Facebook, I don't like it here either.

> We know this is illegal

> It's a CRYSTAL CLEAR violation of the law

in the court of reddit's public opinion, perhaps.

there is, as far as I can tell, no definitive ruling on whether training is a copyright violation.

and even if there was, US law is not global law. China, notably, doesn't give a flying fuck. kill American AI companies and you will hand the market over to China. that is why "everyone just shrugs it off".

The "China will win the AI race if we in the West (America) don't" line is an excuse created by the people who started the race in Silicon Valley. It's like America saying it had to win the nuclear arms race, when physicists like Oppenheimer in the late 1940s wanted to prevent it once they understood the consequences.

China is doing human gene editing and embryo cloning too; we should get right on that. They're harvesting organs from a captive population too; we should do that as well, otherwise we might fall behind on transplants and all the money and science involved with that. Lots of countries have drafts and mandatory military service too. This is the zero-morality Darwinian view, where all is fair in competition. In this view, any stealing that China or anyone else does is perfectly fine too, because they also need to compete with the US.

All creative types train on other creatives' work. People don't create award-winning novels or art pieces from scratch. They steal ideas and concepts from other people's work.

The idea that they are coming up with all this stuff from scratch is Public Relations bs. Like Arnold Schwarzenegger never taking steroids, only believable if you know nothing about body building.

The central difference is scale.

If a person "trains" on other creatives' works, they can produce output at the rate of one person. This presents a natural ceiling for the potential impact on those creatives' works, both regarding the amount of competing works, and the number of creatives whose works are impacted (since one person can't "train" on the output of all creatives).

That's not the case with AI models. They can be infinitely replicated AND train on the output of all creatives. A comparable situation isn't one human learning from another human, it's millions of humans learning from every human. Only those humans don't even have to get paid, all their payment is funneled upwards.

It's not one artist vs. another artist, it's one artist against an army of infinitely replicable artists.

So this essentially boils down to an efficiency argument, and honestly it doesn't really address the core issue of whether it's 'stealing' or not.

What kind of creative types exist outside of living organisms? People can create award-winning novels, but a table does not. Water does not. A paper with some math does not.

What is the basis that an LLM should be included as a "creative type"?

Well a creative type can be defined as an entity that takes other people's work, recombines it and then hides their sources.

LLMs seem to match.

Precisely. Nothing is truly original. To talk as though there's an abstract ownership over even an observation of a thing, one that forces people to pay rent to use it... well, artists definitely don't pay whoever invented perspective drawing, and programmers don't pay the programming language's creator. People don't pay Newton and his descendants for making something that makes use of gravity. Copyright has always been counterproductive in many ways.

To go into details though: under copyright law there's a "fair use" exception with a "transformative" criterion. This is what allows things like satire and reaction videos to exist. So long as you don't replicate the original 1-to-1 in product and purpose, IMO it qualifies as tasteful use.

What the fuck? People also need to pay to access that creative work if the rights owner charges for it, and they are committing an illegal act if they don't. The LLM makers are doing this illegal act billions of times over, for something approximating all creative work in existence. I'm not arguing that creatives make things in a vacuum; that's completely beside the point.
> It's very much based on reason and law.

I have no interest in the rest of this argument, but I think I take a bit of issue on this particular point. I don't think the law is fully settled on this in any jurisdiction, but certainly not in the United States.

"Reason" is a more nebulous term; I don't think that training data is inherently "theft", any more than inspiration would be even before generative AI. There's probably not an animator alive that wasn't at least partially inspired by the works of Disney, but I don't think that implies that somehow all animations are "stolen" from Disney just because of that fact.

Where you draw the line on this is obviously subjective, and I've gone back and forth, but I find it really annoying that everyone is acting like this is so clear-cut. Evil corporations like Disney have been trying to use this logic for decades to abuse copyright and outlaw being inspired by anything.

It can be based on reason and law without being clear cut - that situation applies to most of reason and law.

> I don't think that training data is inherently "theft", any more than inspiration would be even before generative AI. There's probably not an animator alive that wasn't at least partially inspired by the works of Disney ...

Sure, but you can reason about it, such as by using analogies.

What makes something more or less ideological for you in this context? Is "reason" always opposed to ideology for you? What is the ideology at play here for the critics?
> I mean, this is an ideological point. It's not based in reason

You can't be serious

And environmental damage. And damage to our society. Though nobody here tried to stop LLMs; the genie is out of the bottle. You can still hate it. And of course enact legislation to reduce harm.

When I read your comment, I was "trained" on it too. My neurons were permanently modified by it. I can recall it, to some degree, for some time. Do I necessarily owe you money?

You do owe money for reusing some things that you read, and not for others. Intellectual property exists.
> Intellectual property exists.

A problem in and of itself.

I'm very glad AI is here and is slowly but surely destroying this terrible idea.

Try using some of OpenAI's IP and see what happens. Also, right now you can reuse LLM output as you please. Don't imagine that licensing won't change when the market expansion phase is replaced by the profit extraction phase. Remember those investors pouring in hundreds of billions of dollars? They are expecting a profit.
