I could break most passwords of an internal company application by googling the SHA1 hashes.
It was possible to reliably identify plants or insects just by googling whatever random words or sentences came to mind describing them.
(None of that works nowadays, not even remotely)
We have a habit of finding efficiencies in our processes, even if the original process did work.
"Plain" Google Search before LLMs never had the capability to accept an entire lengthy stack trace pasted in (e.g. ~60 frames of verbose text), because long strings like that exceed Google's query limits. Various answers cite a limit of 32 words or 5,784 characters: https://www.google.com/search?q=limit+of+google+search+strin...
Before LLMs, a human had to manually scan the entire stack trace, guess at a relevant smaller substring, and paste that into the Google search box. Of course that's doable, but it's a different workflow than having an LLM do it for you.
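That manual step, skimming the trace for a short, distinctive substring to feed the search box, can be sketched as a simple heuristic. This is purely illustrative (the `search_snippet` helper is hypothetical, and the 32-word cap is the rumored limit mentioned above, not a documented one): grab the innermost frame plus the final exception line, since those are usually the most searchable bits.

```python
def search_snippet(trace: str, max_words: int = 32) -> str:
    """Pick a search-engine-sized substring from a long stack trace:
    the innermost frame plus the final exception line, trimmed to a
    word cap (32 here, matching the rumored Google query limit)."""
    lines = [ln.strip() for ln in trace.strip().splitlines() if ln.strip()]
    # The last line of a Python traceback names the exception.
    exception_line = lines[-1]
    # The last 'File ...' line is the innermost, most specific frame.
    frames = [ln for ln in lines if ln.startswith("File ")]
    parts = ([frames[-1]] if frames else []) + [exception_line]
    words = " ".join(parts).split()
    return " ".join(words[:max_words])

trace = """Traceback (most recent call last):
  File "app.py", line 12, in <module>
    main()
  File "app.py", line 8, in main
    cfg["db"]["host"]
KeyError: 'db'
"""
print(search_snippet(trace))
# -> File "app.py", line 8, in main KeyError: 'db'
```

An experienced dev does roughly this by eye in a second or two; the point is just that it's a real selection step, not a copy-paste of the whole trace.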
To clarify, I'm not arguing that the LLM method is "better". I'm just saying it's different.
But I did it subconsciously. I never thought of it until today.
Another skill that LLM use can kill? :)
Which is never? Do you often just lie to win arguments? An LLM gives you a synthesized answer; a search engine only returns what already exists. By definition it cannot give you anything that is not a near-exact match.
In my experience it was "a lot", because my stack traces in that period were mostly hardware-related problems on ARM Linux.
But I suppose your stack traces were much different and superior and no one can have stack traces that are different from yours. The world is composed of just you and your project.
> Do you often just lie to win arguments?
I do not enjoy being accused of lying by someone stuck in their own bubble.
When you said "Which is never" did you lie consciously or subconsciously btw?
Whatever the exact limit was, the idea that you could just paste a 600-line stack trace unmodified into Google, especially "way before AI", and get pointed to the relevant bit for your exact problem is obviously untrue.
Pasting stack traces and kernel oopses hasn't worked in quite a while, I think. It's very possible that the maximum query was longer in the past.
2,000 characters is also more than a double-spaced manuscript page as defined by the book industry (which seems to be about 1,500 characters). You can fit the top of a stack trace in that. And if you're dealing with talking to hardware, the top can be enough.
And indeed, in the early days the maximum query length was 10 words. So no, you have never been able to paste an entire stack trace into Google and magically get a concise summary.
If you're changing the original claim you were responding to into "I can do my job without LLMs if I have Google search", then sure, of course anyone can. But you can't use that to dismiss that some people find it quite convenient to dump an entire stack trace into a text chat and get a decent summary of what matters without having to read a single line of it.
Very few devs bother to post stack traces (or any programming question, really) online. They only do that when they're badly stuck.
Most people work out their problem and move on; if no one posts about it, your search never hits.
Okay, maybe sometimes the post about the stack trace was in Chinese, but a plain search used to be capable of giving the same answer as an LLM.
It's not that LLMs got better; it's that search got enshittified.