Preferences

jacksnipe
Something that really frustrates me about interacting with (some) people who use AI a lot is that they will often tell me things that start “I asked ChatGPT and it said…” stop it!!! If the chatbot taught you something and you understood it, explain it to me. If you didn’t understand or didn’t trust it, then keep it to yourself!

cogman10
I recently had this happen with a senior engineer. What's really frustrating is I TOLD them the issue and how to fix it. Instead of listening to what I told them, they plugged it into GPT and responded with "Oh, interesting, this is what GPT says" (which, spoiler, was similar to what I'd said, just less complete).

Meaning, instead of listening to a real-life expert in the company telling them how to handle the problem, they ignored my advice and dumped GPT's garbage on me instead.

I really fear that a number of engineers are going to use GPT to avoid thinking. They view it as a shortcut to problem-solving and it isn't.

colechristensen
>They view it as a shortcut to problem-solving and it isn't

Oh but it is, used wisely.

One: it's a replacement for googling a problem and much faster. Instead of spending half an hour or half a day digging through bug reports, forum posts, and Stack Overflow for the solution to a problem, LLMs are a lot faster, occasionally correct, and very often at least rather close.

Two: it's a replacement for learning how to do something I don't want to learn how to do. Case study: I have to create a decent-enough-looking static error page for a website. I could do an awful job with my existing knowledge; I could spend half a day relearning and tweaking CSS, elements, etc.; or I could ask an LLM to do it and then tweak the results. Five minutes for "good enough" and it really is.

LLMs are not a replacement for real understanding, for digging into a codebase to really get to the core of a problem, or for becoming an expert in something, but in many cases I do not want to, and moreover it is a poor use of my time. Plenty of things are not my core competence or anywhere near the goals I'm trying to achieve. I just need a quick solution for a topic I'm not interested in.

ijidak
This exactly!

There are so many things that a human worker or coder has to do in a day and a lot of those things are non-core.

If someone is trying to be an expert on every minor task that comes across their desk, they were never doing it right.

An error page is a great example.

There is functionality that sets a company apart and then there are things that look the same across all products.

Error pages are not core IP.

At almost any company, I don't want my $200,000-300,000 a year developer mastering the HTML and CSS of an error page.

vuserfcase
>Oh but it is, used wisely.

A sufficiently advanced orange juice extractor is the solution to any problem. Doesn't necessarily mean you should build the sufficient part.

>One: it's a replacement for googling a problem and much faster

This has more to do with the fact that Google results have gone downhill very rapidly. It used to be that you could find what you were looking for very quickly and solve the problem.

>I could ask an LLM to do it and then tweak the results. Five minutes for "good enough" and it really is.

When the cost of failure is low, a hack job can be economical, like a generated picture for entertainment or a static error page. Miscreating a support for a bridge is not very economical.

jsight
I wonder if this is an indication that they didn't really understand what you said to begin with.
colechristensen
If I had a dollar for every time I told someone how to fix something and they did something else...

Let's just say not listening to someone and then complaining that doing something else didn't work isn't exactly new.

silversmith
I often do this - ask an LLM for an answer when I already have it from an expert. I do it to evaluate the ability of the LLM. Usually not in the presence of said expert, though.
namaria
Just from using LLMs on the (few) things I have specialist knowledge of, it's clear they are extremely limited. I get absurdly basic mistakes, and I am very wary of even reading LLM output about topics I don't command. It's easy to get stuck in reasoning dead ends just from getting noisy input.
tharant
Is it possible that what happened was an impedance mismatch between you and the engineer such that they couldn’t grok what you told them but ChatGPT was able to describe it in a manner they could understand? Real-life experts (myself included, though I don’t claim to be an expert in much) sometimes have difficulty explaining domain-specific concepts to other folks; it’s not a flaw in anyone, folks just have different ways of assembling mental models.
kevmo314
Whenever someone has done that to me, it's clear they didn't read the ChatGPT output either and were sending it to me as some sort of "look someone else thinks you're wrong".
tharant
Again, is it possible you and the other party have (perhaps significantly) different mental models of the domain—or maybe different perspectives of the issues involved? I get that folks can be contrarian (sadly, contrariness is probably my defining trait) but it seems unlikely that someone would argue that you’re wrong by using output they didn’t read. I see impedance mismatches regularly yet folks seem often to assume laziness/apathy/stupidity/pride is the reason for the mismatch. Best advice I ever received is “Assume folks are acting rationally, with good intention, and with a willingness to understand others.” — which for some reason, in my contrarian mind, fits oddly nicely with Hanlon’s razor but I tend to make weird connections like that.
kevmo314
> is it possible you and the other party have (perhaps significantly) different mental models of the domain—or maybe different perspectives of the issues involved?

Yes, however typically if that's the case they will respond with some variant of "ChatGPT mentioned xyz so I started poking in that direction, does that make sense?" There is a markedly different response when people are using ChatGPT to try to understand better, and I have no issue with that.

I get what you're suggesting, but I don't think people are being malicious; it's more that the discussion has gotten too deep and they're exhausted, so they'd rather opt out. In some cases, yes, it does mean the discussion could've been simplified, but sometimes, when the reason is pretty deep and technical, that's hard to avoid.

A concrete example: we had to figure out a bug in some assembly code once and we were looking at a specific instruction. I didn't believe that instruction was wrong, and I pointed at the docs suggesting it lined up with what we were observing it doing. Someone responded with "I asked ChatGPT and here's what it said: ..." without even a subsequent opinion on the output of ChatGPT. In fact, reading the output, it basically restated what I said, but said engineer used that as justification to rewrite the instruction to something else. And at that point I was like, y'know what, I just don't care enough.

Unsurprisingly, it didn't work, and the bug never got fixed because I lost interest in continuing the discussion too.

I think what you're describing does happen in good faith, but I think people also use the wall of text that ChatGPT produces as an indirect way to say "I don't care about your opinion on this matter anymore."

cogman10
Definitely a possibility.

However, I have a very strong suspicion they also didn't understand the GPT output.

To flesh out the situation a bit further, this was a performance tuning problem with highly concurrent code. This engineer was initially tasked with the problem and they hadn't even bothered to run a profiler on the code. I did, shared my results with them, and the first action they took with my shared data was dumping a thread dump into GPT and asking it where the performance issues were.

Instead, they've simply been littering the code with timing logs in hopes that one of them will tell them what to do.
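(As an aside, for anyone unfamiliar with the tooling: below is a minimal sketch of what capturing a thread dump programmatically looks like, assuming a JVM service, which the thread-dump mention suggests but isn't stated in the thread; the class name here is just illustrative, and running jstack <pid> from a shell gives essentially the same output.)

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class ThreadDumpSketch {
        public static void main(String[] args) {
            // Grab the JVM's thread manager for the current process.
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            // Capture every live thread, including the monitors and synchronizers
            // each one holds or is blocked on; that contention is the signal a
            // profiler surfaces directly and scattered timing logs tend to miss.
            for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
                System.out.print(info);
            }
        }
    }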

59nadir
I'm sorry, how is this a "senior engineer"? Is this a "they worked in the industry for 6 years and are now senior" type situation or are they an actual senior engineer? Because it seems like they're lacking the basics to work on what you yourself seem to consider senior engineer problems for your project.

Also, what is your history and position in the company? It seems odd that you'd get completely ignored by this supposed senior engineer (something that usually happens more often with overconfident juniors) if you have meaningful experience in the field and domain.

cogman10
> how is this a "senior engineer"? Is this a "they worked in the industry for 6 years and are now senior" type situation...

Yeah, this is the situation exactly, though I've known a few seniors that were senior just because they'd hung around, not because of experience.

> what is your history and position in the company? It seems odd that you'd get completely ignored by this supposed senior engineer

Been with the company for over a decade at this point. I think I have a pretty good reputation generally. Someone sent me a "This is why cogman10 is the GOAT" message for some of my technical interactions on large public team chats.

As for why I'm being ignored: I have a bunch of guesses, but nothing I'm willing to share.

tharant
It sounds like the engineer may have little/no experience with concurrency; a lot of folks (myself included) sometimes struggle with how various systems handle concurrency/parallelism and their side effects. Perhaps this is an opportunity for you to “show, not tell” them how to do it.

But I think my point still holds—it’s not the tool that should be blamed; the engineer just needs to better understand the tool and how/when to use it appropriately.

Of course, our toolboxes just keep filling up with new tools which makes it difficult to remember how to use ‘em all.

delusional
Those people weren't engineers to start with.
layer8
Software engineers rarely are.

I’m saying this tongue in cheek, but there’s some truth to it.

throwanem
There is much truth. Railway engineers 'rarely were' too, once upon a time, and, in my view, for essentially the same reasons.
throwanem
You should ask yourself why this organization wants engineering advice from a chatbot more than from you.

I doubt the reason has to do with your qualities as an engineer, which must be basically sound. Otherwise why bother to launder the product of your judgment, as you described someone doing here?

tharant
> I really fear that a number of engineers are going to use GPT to avoid thinking. They view it as a shortcut to problem-solving and it isn't.

How is this sentiment any different from my grandfather’s sentiment that calculators and computers (and probably his grandfather’s view of industrialization) are a shortcut to avoid work? From my perspective most tools are used as a shortcut to avoid work; that’s kinda the whole point—to give us room to think about/work on other stuff.

parliament32
Because calculators aren't confidently wrong the majority of the time.
tharant
In my experience, and for use-cases that are carefully considered, language models are not confidently wrong a majority of the time. The trick is understanding the tool and using it appropriately—thus the “carefully considered” approach to identifying use-cases that can provide value.
namaria
In the very narrow fields where I have a deep understanding, LLM output is mostly garbage. It sounds plausible but doesn't stand up to scrutiny. The basics that it can regurgitate from Wikipedia sound mostly fine, but they are already subtly wrong as soon as they depart from stating very basic facts.

Thus I have to assume that for any topic I do not fully understand - which is the vast majority of human knowledge - it is worse than useless: it is actively misleading. I try not to even read much of what LLMs produce. I might give it some text and riff on it if I need ideas, but LLMs are categorically the wrong tool for factual content.

vuserfcase
A use-case that can be carefully considered requires more knowledge about the use-case than the LLM; it requires you to understand the specific model's training and happy paths; it requires more time to make it output the thing you want than just doing it yourself. If you don't know enough about the subject or the model, you will get confident garbage.
tharant
> A use-case that can be carefully considered requires more knowledge about the use-case than the LLM

I would tend to agree with that assertion…

> it requires you to understand the specific model's training and happy paths

But I strongly disagree with that assertion; I know nothing of commercial models’ training corpus, methodology, or even their system prompts; I only know how to use them as a tool for various use-cases.

> it requires more time to make it output the thing you want than just doing it yourself.

And I strongly disagree with that one too. As long as the thing you want it to output is rooted in relatively mainstream or well-known concepts, it’s objectively much faster than you/we are; maybe it’s more expensive but it’s also crazy fast—which is the point of all tools—and the precision/accuracy of most speedy tools can often be deferred until a later step in the process.

> If you don't know enough about the subject or the model, you will get confident garbage

Once you step outside their comfort zone (their training), well, yah… they do all tend to be unduly confident in their responses—I’d argue however that it is a trait they learned from us; we really like to be confident even when we’re wrong and that trait is borne out dramatically across the internet sources on which a lot of these models were trained.

stevage
Did your grandpa think that calculators made engineers worse at their jobs?
tharant
I don’t know for certain (he’s no longer around) but I suspect he did. The prevalence of folks who nowadays believe that Gen-AI makes everything worse suggests to me that not much has changed since his time.

I get it; I’m not an AI evangelist and I get frustrated with the slop too; Gen-AI (and many of the tools we’ve enjoyed over the past few millennia) was/is lauded as “The” singular tool that makes everything better; no tool can fulfill that role yet we always try to shoehorn our problems into a shape that fits the tool. We just need to use the correct tools for the job; in my mind, the only problem right now is that we have a really capable tool and have identified some really valuable use-cases for that tool yet we also keep trying to use it for (what I believe are, given current capabilities) use-cases that don’t fit the tool.

We’ll figure it out but, in the meantime, while I don’t like to generalize that a tech or its use-cases are objectively good/bad, I do tend to have an optimistic outlook for most tech—Gen-AI included.

evandrofisico
It is supremely annoying when I ask in a group if someone has experience with a tool or system and some idiot copies my question into some LLM and pastes the answer. I can use the LLM just like anyone; if I'm asking for EXPERIENCE, it is because I want the opinion of a human who actually had to deal with stuff like corner cases.
ModernMech
It's the 2025 version of lmgtfy.
layer8
Nah, that’s different. Lmgtfy has nothing to do with experience, other than experience in googling. Lmgtfy applies to stuff that can expediently be googled.
ModernMech
In my experience, usually what people did was take your question on a forum, go to lmgtfy, paste the exact words in, and then link back to it. As if to say "See how easy that was? Why are you asking us when you could have just done that?"

Yes, it's true there could have been a skill issue. But it could also be true that the person just wanted input from people rather than Google. So that's why I drew the connection.

layer8
I largely agree with your description, and I think that’s different from the above case of explicitly asking for experience and then someone posing the question to an LLM. Also, when googling, you typically (used to) get information written down by people, from a much larger pool and better curated via page ranking, than whoever you are asking. So it’s not like you were getting better quality by not googling, typically.
XorNot
In my experience, what happened was that the top hit for the question was a topical forum, with a lmgtfy link as a response to the exact question I was googling.
soulofmischief
The whole point of paying a domain expert is so that you don't have to google shit all day.
jacksnipe OP
That’s exactly how I feel
jsheard
If it's not worth writing, it's not worth reading.
floren
Reminds me of something I wrote back in 2023: "If you wrote it with an LLM, it wasn't worth writing" https://jfloren.net/b/2023/11/1/0
ToValueFunfetti
There's a lot of documentation that I've found was left unwritten but that I would have loved to read.
pixl97
I mean, there is a lot of hand written crap to, so even that isn't a good rule.
meindnoch
Both statements can be true at the same time, even though they seem to point in different directions. Here's how:

1. *"If it's not worth writing, it's not worth reading"* is a normative or idealistic statement — it sets a standard or value judgment about the quality of writing and reading. It suggests that only writing with value, purpose, or quality should be produced or consumed.

2. *"There is a lot of handwritten crap"* is a descriptive statement — it observes the reality that much of what is written (specifically by hand, in this case) is low in quality, poorly thought-out, or not meaningful.

So, putting them together:

* The first expresses *how things ought to be*.
* The second expresses *how things actually are*.

In other words, the existence of a lot of poor-quality handwritten material does not invalidate the ideal that writing should be worth doing if it's to be read. It just highlights a gap between ideal and reality — a common tension in creative or intellectual work.

Would you like to explore how this tension plays out in publishing or education?

palata
> If it's not worth writing, it's not worth reading.

It does NOT mean, AT ALL, that if it is worth writing, it is worth reading.

Logic 101?

colecut
That rule does not imply the inverse
pixl97
I mean we have automated systems that 'write' things like tornado warnings. Would you rather we have someone hand write that out?

It seems the initial rule is rather worthless.

colecut
1. I think the warnings are generally "written" by humans. Maybe some variables filled in during the automation.

2. So a rule with occasional exceptions is worthless, ok

leptons
>I mean, there is a lot of hand written crap to

You know how I know the difference between something an AI wrote and something a human wrote? The AI knows the difference between "to" and "too".

I guess you proved your point.

It is a necessary but not sufficient condition, perhaps?
namaria
Necessary != sufficient.
Frost1x
I work in a corporate environment, as I’m sure many others do. Many executives have it in their head that LLMs are this brand-new efficiency gain they can pad profit margins with, so you should be using them for efficiency. There’s a lot of push for that everywhere I work.

I see email blasts suggesting I should be using it, I get peers saying I should be using it, I get management suggesting I should use it to cut costs… and there is some truth there but as usual, it depends.

I, like many others, can’t be asked to take on inefficiency in the name of efficiency on top of the currently most efficient ways to do my work. So I too say “ChatGPT said: …” because I dump lots of things into it now. Some things I can’t quickly verify, some things are off, and in general it can produce far more information than I have time to check. Saying “ChatGPT said…” is the current CYA caveat for a world of “use this thing, but also take liability for it.” No, if you practically mandate I use something, the liability falls on you or that thing. If it’s a quick verify, I’ll integrate it into my knowledge. A lot of things aren’t.

parliament32
> I see email blasts suggesting I should be using it, I get peers saying I should be using it, I get management suggesting I should use it to cut costs

The ideal scenario: you write a few bullet points and ask Copilot to turn them into a long-form email to send out. Your receiving coworker then asks Copilot to distill it back into a few bullet points they can skim.

You saved 5 minutes but one of your points was ignored entirely and 20% of your output is nonsensical.

Your coworker saved 2 minutes but one of their bulletpoints was hallucinated and important context is missing from the others.

Microsoft collects a fee from both of you and is the only winner here.

rippleanxiously
It just feels to me like a boss walking into a car mechanic's shop holding some random tool, walking up to a mechanic, and:

"Hey, whatcha doin?"

"Oh hi, yea, this car has a slight misfire on cyl 4, so I was just pulling one of the coilpacks to-"

"Yea alright, that's great. So hey! You _really_ need to use this tool. Trust me, it's gonna make your life so much easier"

"umm... that's a 3d printer. I don't really think-"

"Trust me! It's gonna 10x your work!"

...

I love the tech. It's the evangelists that don't seem to bother researching the tech beyond making an account and asking it to write a couple scripts that bug me. And then they proclaim it can replace a bunch of other stuff they don't/haven't ever bothered to research or understand.

yoyohello13
Seriously. Being able to look up stuff using AI is not unique. I can do that too.

It's kind of the same with any AI-generated art. I can go generate a bunch of cool images with AI too, so why should I give a shit about your random Midjourney output?

kristopolous
ComfyUI workflows, fine-tuning models, keeping up with the latest arXiv papers, patching academic code to work with generative stacks: this stuff is grueling.

Here's an example https://files.meiobit.com/wp-content/uploads/2024/11/22l0nqm...

Being dismissive of AI art is like those people who dismiss electronic music because there's a drum machine.

Doing things well still requires an immense amount of skill and an exhaustive amount of effort. It's wildly complicated.

It makes even less sense when you put it like that; why not invest that effort into your own skills instead?
kristopolous
It is somebody's own skill.

Photographers are not painters.

People who do modular synths aren't guitarists.

Technical DJing is quite different from tapping on a Spotify app on a smartphone.

Just because you've exclusively exposed yourself to crude implementations doesn't mean sophisticated ones don't exist.

delfinom
But you just missed the point.

People aren't trying to push photographs into painted-works displays.

People who do modular synths aren't typically trying to sell their music as country/rock/guitar-based music.

A 3D modeler of a statue isn't pretending to be a sculptor.

People pushing AI art are trying to slide it right into "human art" displays. Because they are talentless otherwise.

I mean… I have a fancy phone camera in my pocket too, but there are photographers who, with the same model of fancy phone camera, do things that awe and move me.

It took a solid hundred years to legitimate photography as an artistic medium, right? To the extent that the controversy still isn’t entirely dead?

Any cool images I ask AI for are going to involve a lot less patience and refinement than some of these things the kids are using AI to turn out…

For that matter, I’ve watched friends try to ask for factual information from LLMs and found myself screaming inwardly at how vague and counterproductive their style of questioning was. They can’t figure out why I get results I find useful while they get back a wall of hedging and waffling.

namaria
> It took a solid hundred years to legitimate photography as an artistic medium, right?

Not really.

"In 1853 the Photographic Society, parent of the present Royal Photographic Society, was formed in London, and in the following year the Société Française de Photographie was founded in Paris."

https://www.britannica.com/technology/photography/Photograph...

Not that photographic art wasn’t getting made, more that the doyens of the Finer Arts would tend to dismiss work in that medium as craft, trade, or low art—that they’d dismiss the act of photographic production as “mere capture” as opposed to creative interpretation, or situate the artistic work in the darkroom afterward where people used hands and brushes and manual aesthetic judgment.

It’s been depressingly long since school, but am I wrong in vaguely remembering the controversy stretching through Art in the Age of Mechanical Reproduction and well into the Warhol era?

https://news.harvard.edu/gazette/story/2010/10/when-photogra...

And I guess legitimacy doesn’t fully depend on the whims of museums and collectors, but to hear Christie’s tell it, they didn’t start treating the medium as fine art until 1972–and then, almost more as antiquities than as works of art—

https://www.christies.com/en/stories/how-photography-became-...

In much the same way as there are tons of Polaroids that are not art and a few that unambiguously are (e.g. [0]); there’s a lot of lazy AI imagery, but there also seem to be some unambiguously artful endeavors (e.g. [1]), no?

[0] https://stephendaitergallery.com/exhibitions/dawoud-bey-pola...

[1] https://www.clairesilver.com/

h4ck_th3_pl4n3t
How can you be so harsh on all the new kids with Senior Prompt Engineer in their job titles?

They have to prove to someone that they're worth their money. /s

esafak
I had to deal with someone who tried to check in hallucinated code with the defense "I checked it with chatGPT!"

If you're just parroting what you read, what is it that you do here?!

I hope you dealt with them by firing them.
esafak
Yes, unfortunately. This was the last straw, not the first.
giantg2
Manage people?
Then what the fuck are they doing committing code? Leave that to the coders.
giantg2
That sounds good, but might not be how it works in Chapter Lead models.
hashmush
As much as I'm also annoyed by that phrase, is it really any different from:

- I had to Google it...

- According to a StackOverflow answer...

- Person X told me about this nice trick...

- etc.

Stating your sources should surely not be a bad thing, no?

mentalpiracy
It is not about stating a source; the bad thing is treating ChatGPT as an authoritative source, as if it were a subject matter expert.
silversmith
But is "I asked chatgpt" assigning any authority to it? I use precisely that sentence as a shorthand for "I didn't know, looked it up in the most convenient way, and it sounded plausible enough to pass on".
jacksnipe OP
In my own experience, the vast majority of people using this phrase ARE using it as a source of authority. People will ask me about things I am an actual expert in, and then when they don’t like my response, hit me with the ol’ “well, I asked chatGPT and it said…”
jstanley
I think you are misunderstanding them. I also frequently cite ChatGPT, as a way to accurately convey my source, not as a way to claim it as authoritative.
mirrorlake
It's a social-media level of fact-checking; that is to say, you feel something is right but have no clue whether it actually is. If you had a better source for a fact, you'd quote that source rather than the LLM.

Just do the research, and you don't have to qualify it. "GPT said that Don Knuth said..." Just verify that Don said it, and report the real fact! And if something turns out to be too difficult to fact check, that's still valuable information.

stonemetal12
In general those point to the person's understanding being shallow. So far, when someone says "GPT said..." it is a new low in understanding: there is no article they googled to read further, no second Stack Overflow answer with a different take on it. It is the end of the conversation.
spiffyk
Well, it is not, but the three "sources" you mention are not worth much either, much like ChatGPT.
bloppe
SO at least has reputation scores and people vote on answers. An answer with 5000 upvotes, written by someone with high karma, is probably legit.
>but the three "sources" you mention are not worth much either, much like ChatGPT.

I don't think I've ever seen anyone lambasted for citing Stack Overflow as a source. At best, they're chastised for not reading the comments, but there's nowhere near as much pushback as for LLMs.

From what I’ve seen, Stack Overflow answers are much more reliable than LLMs.

Also, using Stack Overflow correctly requires more critical thinking. You have to determine whether any given question-and-answer is actually relevant to your problem, rather than just pasting in your code and seeing what the LLM says. Requiring more work is not inherently a good thing, but it does mean that if you’re citing Stack Overflow, you probably have a somewhat better understanding of whatever you’re citing it for than if you cited an LLM.

spiffyk
I have personally always been kind of against using StackOverflow as a sole source for things. It is very often a good pointer, but it's always a good idea to cross-check with primary sources. Otherwise you get all sorts of interesting surprises, like that Razer Synapse + Docker for Windows debacle. Not to mention that you are technically not allowed to just copy-paste stuff from SO.
throwaway2037

    > Not to mention that you are technically not allowed to just copy-paste stuff from SO.
Sure you can. Over the last ten years, I have probably copied at least 100 snippets of code from StackOverflow in my corporate code base (and included a link to the original code). The stuff that was published before Generation AI Slop started is unbeatable as a source of code snippets. I am a developer for internal CRUD apps, so we don't care about licenses (except AGPL due to FUD by legal & compliance teams). Anything goes because we do not distribute our software externally.
mynameisvlad
I mean, if all they did was regurgitate an SO post wholesale without checking its correctness or applicability, and the answer was in fact not correct or applicable, they would probably get equally lambasted.

If anything, SO having verified answers helps its credibility slightly compared to LLMs, which are all known to regularly hallucinate (see: literally this post).

dpoloncsak
...isn't that exactly why someone states that?

"Hey, I didn't study this, I found it on Google. Take it with a grain of caution, as it came from the internet" has been shortened to "I googled it and...", which is now evolving to "Hey, I asked chatGPT, and...."

rhizome
All three of those should be followed by "...and I checked it to see if it was a sufficient solution to X..." or words to that effect.
billyoneal
The complaint isn't about stating the source. The complaint is about asking for advice, then ignoring that advice. If one asks how to do something, gets a reply, then replies to that reply with 'but Google says', that's just as rude.
kimixa
It's a "source" that cannot be reproduced or actually referenced in any way.

And all the other examples will have a chain of "upstream" references, data and discussion.

I suppose you can use those same phrases to reference things without that, random "summaries" without references or research, "expert opinion" from someone without any experience in that sector, opinion pieces from similarly reputation-less people etc. but I'd say they're equally worthless as references as "According to GPT...", and should be treated similarly.

It depends on whether they are just repeating things without understanding, or whether they have understanding. My issue with people who say "I asked GPT" is that they often do not have any understanding themselves.

Copying and pasting from ChatGPT has the same consequences as copying and pasting from StackOverflow, which is to say you're now on the hook for supporting code in production that you don't understand.

We cannot blame the tools for how they are used by those wielding them.

I can use ChatGPT to teach me and help me understand a topic, or I can use it to give me an answer and just copy-paste without double-checking.

It just shows how much you care about the topic at hand, no?

theamk
If you had used ChatGPT to teach you the topic, you'd write it in your own words.

Starting the answer with "I asked ChatGPT and it said..." almost 100% means the poster did not double-check.

(This is the same with other systems: If you say, "According to Google...", then you are admitting you don't know much about this topic. This can occasionally be useful, but most of the time it's just annoying...)

multjoy
How do you know that ChatGPT is teaching you about the topic? It doesn't know what is right or what is wrong.
It can consult all sorts of sources about any topic; ChatGPT is only as good at teaching as the pupil is at asking the right questions, if you ask me.
misnome
We can absolutely blame the people selling and marketing those tools.
Yeah, "marketing" always seemed to me like a misnomer, or doublespeak for legal lies.

All marketing departments are trying to manipulate you into buying their thing; it should be illegal.

But just testing out this new stuff and seeing what's useful for you (or not) is usually the way

layer8
This subthread was about blaming people, not the tool.
My bad, I had just woken up!
jacksnipe OP
I see nobody here blaming tools and not people!
nraynaud
The first 2 bullet points give you an array of answers/comments helping you cross-check (also I'm a freak, and even on SO, I generally click on the posted documentation links).
JohnFen
I agree wholeheartedly.

"I asked X and it said..." is an appeal to authority and suspect on its face whether or not X is an LLM. But when it's an LLM, then it's even worse. Presumably, the reason for the appeal is because the person using it considers the LLM to be an authoritative or meaningful source. That makes me question the competence of the person saying it.

godelski

  > Something that really frustrates me about interacting with
Something that frustrates me with LLMs is that they are optimized such that errors are as silent as possible.

It is just bad design. You want errors to be as loud as possible, so they can be traced and resolved. On the other hand, LLMs optimize for human preference (or some proxy of it). While humans prefer accuracy, it would be naive to ignore all the other things that optimize this objective. Specifically, humans prefer answers that they don't know are wrong over those that they do know are wrong.

This doesn't make LLMs useless, but it certainly should strongly inform how we use them. Frankly, you cannot trust outputs, so you have to verify. I think this is where there's a big divergence between LLM users (and non-users): those that blindly trust and those that don't (the extreme case being non-users). If you need to constantly verify AND recognize that verification is extra hard (because errors are optimized to be invisible to you), it can create extra work, not less.

It really is two camps and I think it says a lot:

  - "Blindly" trust
  - "Trust" but verify
Wide range of opinions in these two camps, but I think it comes down to some threshold of default trust or default suspicion.
candiddevmike
This happens to me all the time at work. People have turned into frontends for LLMs, even when it's their job to know the answer to these types of questions. We're talking technical leads.

Seems like if all you do is forward questions to LLMs, maybe you CAN be replaced by an LLM.

RadiozRadioz
There was a brief period of time in the first couple weeks of ChatGPT existing where people did this all the time on Hacker News and were upvoted for it. I take pride in the fact that I thought it was cringeworthy from the start.
Szpadel
I find that only acceptable (only little annoying) when this is some lead in case we're we have no idea what could be the issue, it might help to brainstorm and note that this is not verified information is important.

Most annoying is when people trust ChatGPT more than the experts they pay. We had a case where our client asked us for some specific optimization, and we told him that it makes no sense; then he asked the other company that we cooperate with and got a similar response; then he asked ChatGPT and it told him it's a great idea. And guess what, he bought a $20k subscription to implement it.

hedora
I do this occasionally when it's time sensitive, and I cannot find a reasonable source to read. e.g., "ChatGPT says cut the blue wire, not the red one. I found the bomb schematics it claims say this, but they're paywalled."

If that's all the available information and you're out of time, you may as well cut the blue wire. But, pretty much any other source is automatically more trustworthy.

> when this is some lead in case we're we have no idea what could be the issue

English please

jacksnipe OP
We’re was autocorrected from where
Even with that it's nonsense
__turbobrew__
I had someone at work lead me on a wild goose chase because Claude told them to do something outright wrong to solve some performance issues they were having in their app. I helped them do this migration and it turned out that Claude's suggestions made performance worse! I know for sure the time wasted on this task was not debited from the so-called company productivity stats that come from AI usage.
mwigdahl
I do this, but it’s because I am evangelizing proper use of the tool to developers who don’t always understand what it can and can’t do.

Recently I used o3 to plan a refactoring related to upgrading the version of C++ we are using in our product. It pointed out that we could use a tool built in to VS 2022 to make a particular change automatically based on compilation output. I was not familiar with this tool and neither were the other developers on the team.

I did confirm its accuracy myself, but also made sure to credit the model as the source of information about the tool.

mrkurt
Wow that's a wildly cynical interpretation of what someone is saying. Maybe it's right, but I think it's equally likely that people are saying that to give you the right context.

If they're saying it to you, why wouldn't you assume they understand and trust what they came up with?

Do you need people to start with "I understand and believe and trust what I'm about to show you ..."?

jacksnipe OP
I do not need people to lead with that. That’s precisely why leading with “I asked ChatGPT and it said…” makes me trust something less — the speaker is actively assigning responsibility for what’s to come to some other agent, because for one reason or another, they won’t take it on themselves.
x3n0ph3n3
Thanks for this. It's a great response I intend to use going forward.
laweijfmvo
The problem is that when you ask a chatbot something, it always gives you an answer...
I can see why this would be frustrating, but it's probably a good thing to have people be curious and consult an expert system.

Current systems are definitely flawed (incomplete, biased, or imagined information), but I'd pick the answers provided by Gemini over a random social post, blog page, or influencer every time.
