https://stytch.com/blog/if-an-ai-agent-cant-figure-out-how-y...
Which is another issue: indifference. It's hard to find people who actually care about things like API design, let alone several who check each other's work. In my experience, a lot of the time people just get lazy and short-circuit the review to "oh, he knows what he's doing, I'm sure he thought long and hard about this".
I'm in the process of learning how to work with AI, and I've been homebrewing something similar with local semantic search for technical content (embedding models via Ollama, ChromaDB for indexing). I'm currently stuck at the step of making unstructured knowledge queryable, so these docs will come in handy for sure. Thanks again!
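In case it's useful to anyone else, this is roughly the shape of what I've got so far (a minimal sketch, assuming the ollama and chromadb Python packages and a pulled embedding model such as nomic-embed-text; collection and file names are just illustrative):

    # Rough sketch of local semantic search: embed with Ollama, index/query with ChromaDB.
    # Assumes `ollama serve` is running with an embedding model pulled (e.g. nomic-embed-text)
    # and the `ollama` and `chromadb` Python packages installed. Names here are illustrative.
    import chromadb
    import ollama

    EMBED_MODEL = "nomic-embed-text"

    def embed(text: str) -> list[float]:
        # One embedding vector per piece of text, via the local Ollama server.
        return ollama.embeddings(model=EMBED_MODEL, prompt=text)["embedding"]

    client = chromadb.PersistentClient(path="./index")   # on-disk vector store
    notes = client.get_or_create_collection(name="tech_notes")

    # Index a few unstructured snippets; ids just need to be unique.
    snippets = [
        "ChromaDB stores embeddings and lets you query them by vector similarity.",
        "Ollama serves local embedding and chat models over a simple HTTP API.",
    ]
    notes.add(
        ids=[f"doc-{i}" for i in range(len(snippets))],
        documents=snippets,
        embeddings=[embed(s) for s in snippets],
    )

    # Query: embed the question, pull back the nearest stored documents.
    hits = notes.query(query_embeddings=[embed("how do I search my notes locally?")],
                       n_results=2)
    print(hits["documents"][0])

The query call just returns the nearest snippets by vector distance; the part I'm still working out is everything upstream, i.e. how to chunk unstructured prose into pieces worth indexing.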
We see a surprising number of folks who discover our product through GenAI tools (self-reported). I'm not aware of any great tools for dissecting this, but I'm sure someone is working on them.
0: Generative Engine Optimization
It's just effective use of language and speech; what people have called "soft skills" forever is now, for some reason, being turned into a science.
Otherwise known as empathy
(assumption / personal theory)
1. Stuff the W3C already researched and defined 20 years ago to make the web better: accessibility, simple semantic HTML that works without JS, standard formats. All the stuff most companies just plain ignored or sidelined.
2. Suggestions to work around obvious limits of current LLM tech (context size, ambiguity, etc.).
There's really nothing to say about category 1, except that a lot of people have been saying it for years and were practically mocked for it.
Regarding category 2, it's the first stage of AI failure acceptance: "OK, it can't reliably reason over human content. But what if we make humans write dumber content instead?"