I can't imagine Googling for something, seeing someone on (for example) stackoverflow commenting on code, and then filing a bug with the maintainer. Just copying and pasting what someone else said into the bug report.
All without even comprehending the code, the project, or even running into the issue yourself. Or even running a test case yourself. Or knowing the codebase.
It's just all so absurd.
I remember in Asimov's Empire series of books, at one point a scientist wanted to study something. Instead of going to study whatever it was, say... a bug, the scientist looked at all scientific studies and papers over 10000 years, weighed the arguments, and pronounced what the truth was. All without just, you know, looking and studying the bug. This was touted as an example of the Empire's decay.
I hope we aren't seeing the same thing. I can so easily see kids growing up with AI in their bluetooth ears, or maybe a neuralink, and never having to make a decision -- ever.
I recall how Google became a crutch to me. How before Google I had to do so much more work, just working with software. Using manpages, or looking at the source code, before ease of search was a thing.
Are we going to enter an age where every decision made is coupled with the coaching of an AI? This thought process scares me. A lot.
AI just makes these idiots faster these days, because the only cost for them is typing "inspect `curl` code base and generate me some security reports".
https://domenic.me/hacktoberfest/
It wasn't fun if you had anything with a few thousand stars on GitHub.
Or "The Machine Stops" (1909):
> Those who still wanted to know what the earth was like had after all only to listen to some gramophone, or to look into some cinematophote.
> And even the lecturers acquiesced when they found that a lecture on the sea was none the less stimulating when compiled out of other lectures that had already been delivered on the same subject. “Beware of first-hand ideas!” exclaimed one of the most advanced of them. “First-hand ideas do not really exist. They are but the physical impressions produced by love and fear, and on this gross foundation who could erect a philosophy? Let your ideas be second-hand, and if possible tenth-hand, for then they will be far removed from that disturbing element — direct observation. [...]"
IMO, this sort of thing is downright malicious. It not only takes up time for the real devs to actually figure out if it's a real bug, but it also makes them cynical about incoming bug reports.
Yup. Learned sockets programming just from manpages because Google didn't exist at that point, and even if it had, I didn't have internet at home.
(This is completely understandable and “normal” IMO.)
But it leads them to sometimes think that they’ve made a breakthrough and not sharing it would be selfish.
I think people online can see other people filing insightful bug reports, having that activity be viewed positively, misdiagnose the thought they have as being insightful, and file a bug report based on that.
At its core, I think it’s a mild version of narcissism or self-centeredness / lack of perspective.
I'm not trying to be facetious or eye-poking here, I promise... But I have to ask: What was the result; did the LLM generate useful new knowledge at some quality bar?
At the same time, I do believe something like "Science is more than published papers; it also includes the process behind it, sometimes dryly described as merely 'the scientific method'. People sometimes forget other key ingredients, such as a willingness to doubt even highly-regarded fellow scientists, who might even be giants in their fields. Don't forget how it all starts with a creative spark of sorts, an inductive leap, followed by a commitment to design some workable experiment given the current technological and economic constraints. The ability to find patterns in the noise in some ways is the easiest part."
Still, I believe this claim: there is NO physics-based reason that says AI systems cannot someday cover every aspect of the quote above: doubting, creativity, induction, confidence, design, commitment, follow-through, pattern matching, iteration, and so on. I think the question is probably "when", not "if", this will happen, but hopefully before we get there we ask "What happens when we reach AGI? ASI?" and "Do we really want that?".
Stupid nitpick, but this is from the first Foundation novel, although it is an emissary from the empire making the case against firsthand knowledge.