Meaning, instead of listening to a real-life expert in the company telling them how to handle the problem, they ignored my advice and dumped in the garbage from GPT.
I really fear that a number of engineers are going to use GPT to avoid thinking. They view it as a shortcut to problem solving, and it isn't.
Oh but it is, used wisely.
One: it's a replacement for googling a problem, and a much faster one. Instead of you spending half an hour or half a day digging through bug reports, forum posts, and Stack Overflow for the solution to a problem, an LLM is a lot faster, occasionally correct, and very often at least rather close.
Two: it's a replacement for learning how to do something I don't want to learn how to do. Case study: I have to create a decent-enough looking static error page for a website. I could do an awful job with my existing knowledge; I could spend half a day relearning and tweaking CSS, elements, etc.; or I could ask an LLM to do it and then tweak the results. Five minutes for "good enough", and it really is.
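To make "good enough" concrete, here's a minimal sketch of the kind of page I mean, wrapped in a script so it's runnable as-is; the copy, styling, and file name are all illustrative placeholders, not what I actually shipped:

    # Write a minimal static error page to disk. Every bit of copy and
    # styling here is placeholder content; tweak to match the site's look.
    ERROR_PAGE = """<!doctype html>
    <html lang="en">
    <head>
      <meta charset="utf-8">
      <title>Something went wrong</title>
      <style>
        body { font-family: sans-serif; display: flex; align-items: center;
               justify-content: center; min-height: 100vh; margin: 0; }
        main { text-align: center; }
        h1 { font-size: 3rem; margin: 0 0 0.5rem; }
      </style>
    </head>
    <body>
      <main>
        <h1>500</h1>
        <p>Something went wrong on our end. Please try again in a few minutes.</p>
      </main>
    </body>
    </html>
    """

    with open("error.html", "w", encoding="utf-8") as f:
        f.write(ERROR_PAGE)

An LLM gets you a skeleton like this in seconds, and the five minutes go into tweaking the copy and colors.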
LLMs are not a replacement for real understanding, for digging into a codebase to really get to the core of a problem, or for becoming an expert in something, but in many cases I do not want to, and moreover it is a poor use of my time. Plenty of things are not my core competence or anywhere near the goals I'm trying to achieve. I just need a quick solution for a topic I'm not interested in.
There are so many things that a human worker or coder has to do in a day and a lot of those things are non-core.
If someone is trying to be an expert on every minor task that comes across their desk, they were never doing it right.
An error page is a great example.
There is functionality that sets a company apart and then there are things that look the same across all products.
Error pages are not core IP.
At almost any company, I don't want my $200,000-300,000-a-year developer mastering the HTML and CSS of an error page.
A sufficiently advanced orange juice extractor is the solution to any problem. Doesn't necessarily mean you should build the "sufficiently advanced" part yourself.
>One: it's a replacement for googling a problem and much faster
This has more to do with the fact that Google results have gone downhill very rapidly. It used to be that you could find what you were looking for very fast and solve the problem.
>I could ask an LLM to do it and then tweak the results. Five minutes for "good enough" and it really is.
When the cost of failure is low, a hack job can be economical, like a generated picture for entertainment or a static error page. Misdesigning a support for a bridge is not very economical.
Let's just say not listening to someone and then complaining that doing something else didn't work isn't exactly new.
Yes, however, typically if that's the case they will respond with some variant of "ChatGPT mentioned xyz so I started poking in that direction, does that make sense?" There is a markedly different tone when people are using ChatGPT to try to understand better, and that I have no issue with.
I get what you're suggesting, but I don't think people are being malicious. It's more that the discussion has gotten too deep and they're exhausted, so they'd rather opt out. In some cases, yes, that does mean the discussion could've been simplified, but sometimes, when the reason is deep and technical, that's hard to avoid.
A concrete example is we had to figure out a bug in some assembly code once and we were looking at a specific instruction. I didn't believe that instruction was wrong and I pointed at the docs suggesting it lined up with what we were observing it doing. Someone responded with "I asked ChatGPT and here's what it said: ..." without even a subsequent opinion on the output of ChatGPT. In fact, reading the output it basically restated what I said, but said engineer used that as justification to rewrite the instruction to something else. And at that point I was like y'know what, I just don't care enough.
Unsurprisingly, it didn't work, and the bug never got fixed because I lost interest in continuing the discussion too.
I think what you're describing does happen in good faith, but I think people also use the wall of text that ChatGPT produces as an indirect way to say "I don't care about your opinion on this matter anymore."
However, I have a very strong suspicion they also didn't understand the GPT output.
To flesh out the situation a bit further, this was a performance-tuning problem with highly concurrent code. This engineer was initially tasked with the problem, and they hadn't bothered to even run a profiler on the code. I did, shared my results with them, and the first action they took with my shared data was dumping a thread dump into GPT and asking it where the performance issues were.
Instead, they've simply been littering the code with timing logs in hopes that one of them will tell them what to do.
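For contrast, getting a first answer out of a profiler is only a few lines of work. A rough sketch of the difference, in Python for brevity (hot_path is a made-up stand-in, not our actual code):

    import cProfile
    import pstats
    import time

    def hot_path():
        # Stand-in for the real concurrent code path.
        return sum(i * i for i in range(2_000_000))

    # Timing logs tell you THAT something is slow...
    start = time.perf_counter()
    hot_path()
    print(f"hot_path took {time.perf_counter() - start:.3f}s")

    # ...a profiler tells you WHERE the time goes, per function.
    prof = cProfile.Profile()
    prof.enable()
    hot_path()
    prof.disable()
    pstats.Stats(prof).sort_stats("cumulative").print_stats(10)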
Also, what is your history and position in the company? It seems odd that you'd get completely ignored by this supposed senior engineer (something that usually happens more often with overconfident juniors) if you have meaningful experience in the field and domain.
Yeah, this is the situation exactly, though I've known a few seniors who were senior just because they'd hung around, not because of experience.
> what is your history and position in the company? It seems odd that you'd get completely ignored by this supposed senior engineer
Been with the company for over a decade at this point. I think I have a pretty good reputation generally. Someone sent me a "This is why cogman10 is the GOAT" message for some of my technical interactions on large public team chats.
As for why I'm being ignored: I have a bunch of guesses, but nothing I'm willing to share.
But I think my point still holds—it’s not the tool that should be blamed; the engineer just needs to better understand the tool and how/when to use it appropriately.
Of course, our toolboxes just keep filling up with new tools which makes it difficult to remember how to use ‘em all.
I doubt the reason has to do with your qualities as an engineer, which must be basically sound. Otherwise, why would someone bother to launder the product of your judgment, as you described here?
How is this sentiment any different from my grandfather’s sentiment that calculators and computers (and probably his grandfather’s view of industrialization) are a shortcut to avoid work? From my perspective most tools are used as a shortcut to avoid work; that’s kinda the whole point—to give us room to think about/work on other stuff.
Thus I have to assume that for any topic I do not fully understand - which is the vast majority of human knowledge - LLM output is worse than useless; it is actively misleading. I try to not even read much of what LLMs produce. I might give it some text and riff about it if I need ideas, but LLMs are categorically the wrong tool for factual content.
I would tend to agree with that assertion…
> it requires you to understand the specific model's training and happy paths
But I strongly disagree with that assertion; I know nothing of commercial models’ training corpus, methodology, or even their system prompts; I only know how to use them as a tool for various use-cases.
> it requires more time to make it output the thing you want than just doing it yourself.
And I strongly disagree with that one too. As long as the thing you want it to output is rooted in relatively mainstream or well-known concepts, it’s objectively much faster than you/we are; maybe it’s more expensive, but it’s also crazy fast—which is the point of all tools—and the precision/accuracy of most speedy tools can often be deferred until a later step in the process.
> If you don't know enough about the subject or the model, you will get confident garbage
Once you step outside their comfort zone (their training), well, yah… they do all tend to be unduly confident in their responses—I’d argue however that it is a trait they learned from us; we really like to be confident even when we’re wrong and that trait is borne out dramatically across the internet sources on which a lot of these models were trained.
I get it; I’m not an AI evangelist and I get frustrated with the slop too; Gen-AI (and many of the tools we’ve enjoyed over the past few millennia) was/is lauded as “The” singular tool that makes everything better; no tool can fulfill that role yet we always try to shoehorn our problems into a shape that fits the tool. We just need to use the correct tools for the job; in my mind, the only problem right now is that we have a really capable tool and have identified some really valuable use-cases for that tool yet we also keep trying to use it for (what I believe are, given current capabilities) use-cases that don’t fit the tool.
We’ll figure it out but, in the meantime, while I don’t like to generalize that a tech or its use-cases are objectively good/bad, I do tend to have an optimistic outlook for most tech—Gen-AI included.
Yes, it's true there could have been a skill issue. But it could also be true that the person just wanted input from people rather than Google. That's why I drew the connection.
1. *"If it's not worth writing, it's not worth reading"* is a normative or idealistic statement — it sets a standard or value judgment about the quality of writing and reading. It suggests that only writing with value, purpose, or quality should be produced or consumed.
2. *"There is a lot of handwritten crap"* is a descriptive statement — it observes the reality that much of what is written (specifically by hand, in this case) is low in quality, poorly thought-out, or not meaningful.
So, putting them together:
* The first expresses *how things ought to be*.
* The second expresses *how things actually are*.
In other words, the existence of a lot of poor-quality handwritten material does not invalidate the ideal that writing should be worth doing if it's to be read. It just highlights a gap between ideal and reality — a common tension in creative or intellectual work.
Would you like to explore how this tension plays out in publishing or education?
It does NOT mean, AT ALL, that if it is worth writing, it is worth reading. The rule is equivalent to its contrapositive ("if it's worth reading, it was worth writing"), not to its converse.
Logic 101?
It makes the initial rule seem rather worthless.
You know how I know the difference between something an AI wrote and something a human wrote? The AI knows the difference between "to" and "too".
I guess you proved your point.
I see email blasts suggesting I should be using it, I get peers saying I should be using it, I get management suggesting I should use it to cut costs… and there is some truth there but as usual, it depends.
I, like many others, can’t be arsed to take on inefficiency in the name of efficiency on top of the currently most efficient ways to do my work. So I too say “ChatGPT said: …” because I dump lots of things into it now. Some things I can’t quickly verify, some things are off, and in general it can produce far more information than I have time to check. Saying “ChatGPT said…” is the current CYA caveat for a workplace that says: use this thing, but also take liability for it. No; if you practically mandate I use something, the liability falls on you or that thing. If it’s a quick verify, I’ll integrate it into my knowledge. A lot of things aren’t.
The ideal scenario: you write a few bullet points and ask Copilot to turn them into a long-form email to send out. Your receiving coworker then asks Copilot to distill it back into a few bullet points they can skim.
You saved 5 minutes, but one of your points was ignored entirely and 20% of your output is nonsensical.
Your coworker saved 2 minutes, but one of their bullet points was hallucinated and important context is missing from the others.
Microsoft collects a fee from both of you and is the only winner here.
"Hey, whatcha doin?"
"Oh hi, yea, this car has a slight misfire on cyl 4, so I was just pulling one of the coilpacks to-"
"Yea alright, that's great. So hey! You _really_ need to use this tool. Trust me, it's gonna make your life so much easier"
"umm... that's a 3d printer. I don't really think-"
"Trust me! It's gonna 10x your work!"
...
I love the tech. It's the evangelists who bug me: the ones who don't seem to research the tech beyond making an account and asking it to write a couple of scripts. And then they proclaim it can replace a bunch of other stuff they haven't ever bothered to research or understand.
This is kind of the same with any AI-generated art. I can go generate a bunch of cool images with AI too, so why should I give a shit about your random Midjourney output?
Here's an example https://files.meiobit.com/wp-content/uploads/2024/11/22l0nqm...
Being dismissive of AI art is like those people who dismiss electronic music because there's a drum machine.
Doing things well still requires an immense amount of skill and an exhausting amount of effort. It's wildly complicated.
Photographers are not painters.
People who do modular synths aren't guitarists.
Technical DJing is quite different from tapping on a Spotify app on a smartphone.
Just because you've exclusively exposed yourself to crude implementations doesn't mean sophisticated ones don't exist.
People aren't trying to push photographs into painted-works displays.
People who do modular synths aren't typically trying to sell their music as country/rock/guitar-based music.
A 3D modeler of a statue isn't pretending to be a sculptor.
People pushing AI art are trying to slide it right into "human art" displays. Because they are talentless otherwise.
It took a solid hundred years to legitimize photography as an artistic medium, right? To the extent that the controversy still isn’t entirely dead?
Any cool images I ask AI for are going to involve a lot less patience and refinement than some of these things the kids are using AI to turn out…
For that matter, I’ve watched friends try to ask for factual information from LLMs and found myself screaming inwardly at how vague and counterproductive their style of questioning was. They can’t figure out why I get results I find useful while they get back a wall of hedging and waffling.
Not really.
"In 1853 the Photographic Society, parent of the present Royal Photographic Society, was formed in London, and in the following year the Société Française de Photographie was founded in Paris."
https://www.britannica.com/technology/photography/Photograph...
It’s been depressingly long since school, but am I wrong in vaguely remembering the controversy stretching through Art in the Age of Mechanical Reproduction and well into the Warhol era?
https://news.harvard.edu/gazette/story/2010/10/when-photogra...
And I guess legitimacy doesn’t fully depend on the whims of museums and collectors, but to hear Christie’s tell it, they didn’t start treating the medium as fine art until 1972, and then almost more as antiquities than as works of art:
https://www.christies.com/en/stories/how-photography-became-...
In much the same way as there are tons of Polaroids that are not art and a few that unambiguously are (e.g. [0]), there’s a lot of lazy AI imagery, but there also seem to be some unambiguously artful endeavors (e.g. [1]), no?
[0] https://stephendaitergallery.com/exhibitions/dawoud-bey-pola...
If you're just parroting what you read, what is it that you do here?!
- I had to Google it...
- According to a StackOverflow answer...
- Person X told me about this nice trick...
- etc.
Stating your sources should surely not be a bad thing, no?
Just do the research, and you don't have to qualify it. "GPT said that Don Knuth said..." Just verify that Don said it, and report the real fact! And if something turns out to be too difficult to fact check, that's still valuable information.
I don't think I've ever seen anyone lambasted for citing Stack Overflow as a source. At most, they were chastised for not reading the comments, but with nowhere near as much pushback as there is for LLMs.
Also, using Stack Overflow correctly requires more critical thinking. You have to determine whether any given question-and-answer is actually relevant to your problem, rather than just pasting in your code and seeing what the LLM says. Requiring more work is not inherently a good thing, but it does mean that if you’re citing Stack Overflow, you probably have a somewhat better understanding of whatever you’re citing it for than if you cited an LLM.
> Not to mention that you are technically not allowed to just copy-paste stuff from SO.
Sure you can. Over the last ten years, I have probably copied at least 100 snippets of code from StackOverflow into my corporate code base (and included a link to the original code). The stuff that was published before Generation AI Slop started is unbeatable as a source of code snippets. I am a developer for internal CRUD apps, so we don't care about licenses (except AGPL, due to FUD from the legal & compliance teams). Anything goes, because we do not distribute our software externally. If anything, SO having verified answers helps its credibility slightly compared to LLMs, which are all known to regularly hallucinate (see: literally this post).
And all the other examples will have a chain of "upstream" references, data and discussion.
I suppose you can use those same phrases to reference things without that chain: random "summaries" without references or research, "expert opinion" from someone without any experience in the sector, opinion pieces from similarly reputation-less people, etc. But I'd say those are just as worthless as references as "According to GPT...", and should be treated similarly.
Copy and pasting from ChatGPT has the same consequences as copying and pasting from StackOverflow, which is to say you're now on the hook supporting code in production that you don't understand.
I can use ChatGPT to teach me and help me understand a topic, or I can use it to give me an answer that I copy-paste without double-checking.
It just shows how much you care about the topic at hand, no?
Starting the answer with "I asked ChatGPT and it said..." almost 100% means the poster did not double-check.
(This is the same with other systems: If you say, "According to Google...", then you are admitting you don't know much about this topic. This can occasionally be useful, but most of the time it's just annoying...)
"I asked X and it said..." is an appeal to authority and suspect on its face whether or not X is an LLM. But when it's an LLM, then it's even worse. Presumably, the reason for the appeal is because the person using it considers the LLM to be an authoritative or meaningful source. That makes me question the competence of the person saying it.
> Something that really frustrates me about interacting with
Something that frustrates me with LLMs is that they are optimized such that errors are as silent as possible. It is just bad design: you want errors to be as loud as possible, so they can be traced and resolved. LLMs, on the other hand, optimize for human preference (or some proxy of it). While humans prefer accuracy, it would be naive to ignore everything else that optimizes this objective. Specifically, humans prefer answers that they don't know are wrong over those that they do know are wrong.
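The same loud-vs-silent principle is easy to see in ordinary code, which is why the design grates; a tiny illustration (Python, with made-up names):

    def timeout_loud(config: dict) -> float:
        # Loud failure: a missing or typo'd key raises KeyError right here,
        # where it is cheap to trace and fix.
        return float(config["timeout_s"])

    def timeout_silent(config: dict) -> float:
        # Silent failure: a typo'd key quietly falls back to a default and
        # the mistake surfaces much later, if ever. LLM errors behave like
        # this second function.
        return config.get("timeout_s", 30.0)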
This doesn't make LLMs useless, but it certainly should strongly inform how we use them. Frankly, you cannot trust the outputs, so you have to verify. I think this is where there's a big divergence between LLM users (and non-users): those that blindly trust and those that don't (the extreme case being non-users). If you need to constantly verify AND recognize that verification is extra hard (because errors are optimized to be invisible to you), it can create extra work, not less.
It really is two camps and I think it says a lot:
- "Blindly" trust
- "Trust" but verify
Wide range of opinions in these two camps, but I think it comes down to some threshold of default trust or default suspicion.
Seems like if all you do is forward questions to LLMs, maybe you CAN be replaced by an LLM.
Most annoying is when people trust ChatGPT more than the experts they pay. We had a case where our client asked us about a specific optimization, and we told him it made no sense. Then he asked the other company that we cooperate with and got a similar response. Then he asked ChatGPT, and it told him it was a great idea. And guess what: he bought a $20k subscription to implement it.
If that's all the available information and you're out of time, you may as well cut the blue wire. But, pretty much any other source is automatically more trustworthy.
Recently I used o3 to plan a refactoring related to upgrading the version of C++ we use in our product. It pointed out that we could use a tool built into VS 2022 to make a particular change automatically based on compilation output. I was not familiar with this tool, and neither were the other developers on the team.
I did confirm its accuracy myself, but also made sure to credit the model as the source of information about the tool.
If they're saying it to you, why wouldn't you assume they understand and trust what they came up with?
Do you need people to start with "I understand and believe and trust what I'm about to show you ..."?
Current systems are definitely flawed (incomplete, biased, or imagined information), but I'd pick the answers provided by Gemini over a random social post, blog page, or influencer every time.