Now consider: the above process is available and cheap to every person in the world with a web browser (we don't need to pay for her to have a Plus account). If/when ChatGPT starts showing ridiculous intrusive ads, a simple Gemma 3 1B model will do nearly as good a job. This is faster and easier and available in more languages than anything else, ever, when it comes to customization tailored to an individual user simply by talking to the model.
I don't care how many pointless messages get sent. This is more valuable than any single thing Google has done before, and I am grateful to Rob Pike for the part his work has played in bringing it about.
As IT workers, we all have to prostitute ourselves to some extent. But there is a difference between Google, which is arguably a mixed bag, and the AI companies, which are unquestionably cancer.
OP says it is jarring to them that Pike is as concerned with GenAI as he is, yet didn't spare a thought for Google's other (in their opinion, bigger) failings for well over a decade. Doesn't sound ridiculous to me.
That said, I get that everyone's socio-political views change over time, especially depending on their personal circumstances, including family and wealth.
That's the main disagreement, I believe. I'm definitely not an indiscriminate fan of Google. I think Google has done some good, too, and the net output is "mostly bad, but with mitigating factors". I can't say the same about purely AI companies.
No, we really don't. There are plenty of places to work that aren't morally compromised - non-profits, open source foundations, education, healthcare tech, small companies solving real problems. The "we all have to" framing is a convenient way to avoid examining your own choices.
And it's telling that this framing always seems to appear when someone is defending their own employer. You've drawn a clear moral line between Google ("mixed bag") and AI companies ("unquestionably cancer") - so you clearly believe these distinctions matter even though Google itself is an AI company.
I think those are pretty problematic. They can't pay well (no profits...), and/or they may be politically motivated such that working for them would mean a worse compromise.
> open source foundations
Those dreams end. (Speaking from experience.)
> education, healthcare tech
Not self-sustaining. These sectors are not self-sustaining anywhere, and are therefore highly tied to politics.
> small companies solving real problems
I've tried small companies. Not for me. In my experience, they lack internal cohesion and resources for one associate to effectively support another.
> The "we all have to" framing is a convenient way to avoid examining your own choices.
This is a great point to make in general (I take it very seriously), but it does not apply to me specifically. I've examined all the way to Mars and back.
> And it's telling that this framing always seems to appear when someone is defending their own employer.
(I may be misunderstanding you, but in any case: I've never worked for Google, and I don't have great feelings for them.)
> You've drawn a clear moral line between Google ("mixed bag") and AI companies ("unquestionably cancer")
I did!
> so you clearly believe these distinctions matter even though Google itself is an AI company
Yes, I do believe that.
Google has created Docs, Drive, Mail, Search, Maps, Project Zero. Not everything from them is terribly bad; there is some "only moderately bad", and even morsels of "borderline good".
The objections to non-profits, OSFs, education, healthcare, and small companies all boil down to: they don't pay enough or they're inconvenient. Those are valid personal reasons, but not moral justifications. You decided you wanted the money big tech delivers and are willing to exchange ethics for that. That's fine, but own it. It's not some inevitable prostitution everyone must do. Plenty of people make the other choice.
The Google/AI distinction still doesn't hold. Anthropic and OpenAI also created products with clear utility. If Google gets "mixed bag" status because of Docs and Maps (products that exist largely just to feed their ad machine), why are AI companies "unquestionably cancer"? You're claiming Google's useful products excuse their harms, but AI companies' useful products don't. That's not a principled line, it's just where you've personally decided to draw it.
I don't perceive it that way. In other words, I don't think I've had a choice there. Once you consider other folks that you are responsible for, and once you consider your own mental health / will to live, because those very much play into your availability to others (and because those other possible workplaces do impact mental health! I've tried some of them!), then "free choice of employer" inevitably emerges as illusory. It's way beyond mere "inconvenience". It absolutely ties into morals, and meaning of one's life.
The universe is not responsible for providing me with employment that ensures all of: (a) financial safety/stability, (b) self-realization, (c) ethics. I'm responsible for searching the market for acceptable options, and shockingly, none seem to satisfy all three anymore. It might surprise you, but the trend for me has been easing up on both (a) and (c) (no mistake there), in order to gain territory on (b). It turns out that my mental health and my motivation to live and work are the most important resources for myself and for those around me. It has been a hard lesson that I've needed to trade not only money, but also a pinch of ethics, in order to find my place again. This is what I mean by "inevitable prostitution to an extent". It means you give up something unquestionably important for something even more important. And you're never unaware of it; you can't really find peace with it; but you've tried the opposite tradeoffs, and they are much worse.
For example, if I tried to do something about healthcare or education in my country, that might easily max out the (b) and (c) dimensions simultaneously, but it would destroy my ability to sustain my family. (It's not about "big tech money" vs. "honest pay", but "middle-class income" vs. poverty.) And that question entirely falls into "morality": it's responsibility for others.
> Anthropic and OpenAI also created products with clear utility.
Extremely constrained utility. (I realize many people find their stuff useful. To me, they "improve" upon the wrong things, and worsen the actual bottlenecks.)
> You're claiming Google's useful products excuse their harms,
(mitigate, not excuse)
> but AI companies' useful products don't. That's not a principled line, it's just where you've personally decided to draw it.
First, it's obviously a value judgment! We're not talking theoretical principles here. It's the direct, rubber-meets-the-road impact I'm interested in.
Second, Google is multi-dimensional. Some of their activity is inexcusably bad. Some of it is excusable, even "neat". I hate most of their stuff, but I can't deny that people I care about have benefited from some of their products. So, all Google does cannot be distilled into a single scalar.
At the same time, pure AI companies are one-dimensional, and I assign them a pretty large magnitude negative value.
Google's DeepMind has been at the forefront of AI research for the past 11+ years. Even before that, Google Brain was making incredible contributions to the field since 2011, only two years after the release of Go.
OpenAI was founded in response to Google's AI dominance. The transformer architecture is a Google invention. It's not an exaggeration to claim Google is one of the main contributors to the insanely fast-paced advancements of LLMs.
With all due respect, it takes some insane mental gymnastics to claim AI companies are "unquestionably cancer" while an adtech/analytics borderline-monopoly giant is merely a "mixed bag".
Perhaps. I dislike Google (have disliked it for many years, with varying intensity), but they have done stuff where I've been compelled to say "neat". Hence "mixed bag".
This "new breed of purely AI companies" -- if this term is acceptable -- has only ever elicited burning hatred from me. They easily surpass the "usual evils" of surveillance capitalism etc. They deceive humanity at a much deeper level.
I don't necessarily blame LLMs as a technology. But how they are trained and made available is not only irresponsible -- it's the pinnacle of calculated evil. I do think their evil exceeds the traditional evils of Google, Facebook, etc.
Are those distributed systems valuable primarily to Google, or are they related to Kubernetes, et cetera?
It’s like saying that it’s cool because you worked on some non-evil parts of a terrible company.
I don’t think it’s right to work for an unethical company and then complain about others being unethical. I mean, of course you can, but words are hollow.
Of course, the scale is different but the sentiment is why I roll my eyes at these hypocrites.
If you want to make ethical statements then you have to be pretty pure.
I’m sorry but comparing Google to Stalin or Hitler makes me completely dismiss your opinion. It’s a middle school point of view.
If I had a choice between deleting all advertising in the world, or deleting all the genAI that the author hates, I would go for advertising every single time. Our entire world is owned by ads now, with digital and physical garbage polluting the internet and every open space in the real world around us. The marketing is mind-numbing, yet persuasive and well-calculated, the result of a century of psychologists coming up with the best ways to abuse a mind into just buying the product. A total ban on commercial advertising would undo some of the damage done to the internet, reduce pointless waste, lengthen product lifecycles, improve competition, temper unsustainable hype, cripple FOMO, and make deceptive strategies nonviable. And all of that is why it will never be done.
but wait, in a few months, "AI" will be funded entirely by advertising too!
And I'd promptly say: ads are propaganda, and a security risk, because they execute third-party code on your machine. All of us run adblockers.
There was no need for me to point out that ads are also their revenue generator. They just had a burning moral question before they proceeded to interop with the propaganda delivery system, I guess.
It would lead to unnecessary cognitive dissonance to convince myself of some dumb ideology to make me feel better about wasting so much of my one (1) known life, so I just take the hit and be honest about it. The moral question is what I do about it, if I intervene effectively to help dismantle such systems and replace them with something better.
It's opt-in: I see all the new games, big-budget games, indie games; nothing is missed. There are no unwanted emails, no biased searches, no interrupting ads. It's all on my own terms. And it works!
I really believe we can extend this model to other product categories, even all categories. Not in the exact way as gaming websites, but an opt-in "go to the market to see new cool shit" sort of way. It doesn't have to be propaganda with surveillance technology like it is now.