The article is all about that oversight. It ends with a ten-point checklist with items such as "Did I treat GenAI as a thought partner—not a source of truth?".
So weak! No matter how good a model gets, it will always present information with confidence regardless of whether or not it's correct. Anyone who has spent five minutes with the tools knows this.
I’ve read enough pseudo-intellectual Internet comments that I tend to subconsciously apply a slight negative bias to posts that appear to try too hard to project an air of authority via confidence. It isn’t always the best heuristic, as it leaves out the small set of competent and well-marketed people. But it certainly deflates my expectations around LLM output.
OSINT (not a term I was particularly familiar with, personally) actually goes back quite a ways[1]. Software certainly makes it easier to aggregate the information and find the signal in the noise, but bad security practices do far more to make that information accessible.
[1] https://www.tandfonline.com/doi/full/10.1080/16161262.2023.2...
Back in the 1990s my boss went to a conference where there was a talk on OSINT.
She was interested in the then-new concept of "open source" so went to the talk, only to find it had nothing to do with software development.
OSINT only exists because of internet capabilities and Google search - i.e. someone had to learn how to use those new tools just a few years ago and apply critical thinking.
AI tools and models are rapidly evolving, and more in-depth capabilities are appearing in the models. All this means the tools are hardly set in stone, and the workflows will evolve with them. It's still up to human oversight to evolve alongside the tools - the skill of humans overseeing AI is something that will develop too.