
Imagine that there are a lot of people who are dismissive even now, when the parrots can write their code or crush them in a philosophical discussion.

They cannot write my code [1] and a photocopier also "wins" a philosophical discussion if you put Hegel snippets on it.

[1] They steal it though to produce bad imitations.

> a photocopier also "wins" a philosophical discussion if you put Hegel snippets on it.

I don't think so. Have you tried?

People disagreeing with the article aren't "dismissing AI". Did you read what it said?
Hey Claude, can you help me categorise the tone/sentiment of this statement, in three words?

"After everyone has been exposed to the patterns, idioms and mistakes of the parrots only the most determined (or monetarily invested) people are still impressed."

Claude: Cynical, dismissive, condescending.

The original post and the rest of the comment are about invent vs arrive (discover?). I'm sure I'll be able to find (parts of) your comments, too, that diverge in sentiment.
bgwalter is clearly dismissing AI. The post has all the telltale signs.

* Rather than the curious "What is it good at? What could I use it for?", we instead get "It's not better than me!". That lacks insight and intentionally sidesteps the point that it has utility for a lot of people who need coding work done.

* Using a bad analogy protected by scare quotes to make an invalid point, one that suggests a human could argue with a photocopier or a philosophical treatise. It's clearly the case that a human can only argue with the LLM, due to the interactive nature of the dialogue.

* The use of the word "steal" to describe training AI models on material, again intentionally conflating theft with copyright infringement. But even that suggestion is not accurate: model training is currently considered fair use, and court findings were already trending in this direction. So even the claim that it's copyright infringement doesn't hold water. Piracy of the material would change that, but I don't expect that's what happened in the case of bgwalter's code; I expect bgwalter published their code online and it was scraped.

I agree with the sibling comment posting Claude's assessment, which mirrors this analysis. Dismissive and cynical is a good way to put it.

I think my last sentence above was phrased poorly, generating some confusion: what I meant to say was "This is my analysis of why it is dismissive and cynical, and Claude seemed to draw the same conclusion based on a sibling post that asked Claude".

To be clear, I've never even used Claude.

You don't have anything yourself to say on the actual topic, do you?
My position is that AI is going to end up being good at certain things and not so good at others, and that mix will change over time, but generally improve. I don't think it's going to replace all jobs and I don't think it's a world-ender.

My job as an engineer is to understand the technology and understand how to deploy it for the benefit of the people that I work for, up to and including myself. There's no room for dogma here. It's purely curiosity, investigation, and trial and error. See what works, see what doesn't.

Personally, I dislike centralized power because I think it's dangerous. And so, one of my goals is to find ways to use AI in a more distributed context that people have control over. Technology accrues benefits to those who deploy it. Therefore, I'd like to find ways for everyone to be able to deploy good technology.

This is also off-topic. Are you seriously using an LLM to participate in HN comments in this manner? (Will rpdillon now be forever lost as a human?)
If you look at my reply further down, I have never used AI in HN comments, and likely never will. As I said, it defeats the purpose. I'm not sure what I said above that suggests I use AI to write comments.
Thanks, Claude.
All me here. I never use AI on any of my comments on Hacker News, and I likely never will. It defeats the entire purpose.

But I do use AI to better understand things.
