
emil_sorensen
OP here. It's kind of ironic that making the docs AI-friendly essentially just ends up being what good documentation is in the first place (explicit context and hierarchy, self-contained sections, precise error messages).

shafyy
It's the same with SEO: good structure, correct use of HTML elements, fast loading, good accessibility, etc. Sure, there are "tricks" to improve your SEO, but the general principles are good practice even if you aren't doing SEO.
And yet in practice SEO slop garbage is SEO slop garbage. Devoid of any real meaning or purpose other than to increase rankings and metrics. Nobody cares if it’s good or useful, but it must appease the algorithm!
bobbiechen
Related: "If an AI agent can't figure out how your API works, neither can your users" (from my employer's blog)

https://stytch.com/blog/if-an-ai-agent-cant-figure-out-how-y...

Yeah, I've started to think AI smoke tests for cognitive complexity should be a fundamental part of API/schema design now. Even if you think the LLMs are dumb, Stupidity as a Service is genuinely useful.
truculent
Is this something you have implemented in practice? Sounds like a great idea, but I have no idea how you would make it work in a structured way (or am I missing the point…?)
Can be easy depending on your setup - you basically write high-level functional tests matching the use cases of your API, but as prompts to a system with some sort of tool access, ideally MCP. You want to see those tests pass, but you want them to pass with the simplest possible prompt (a sort of regularization penalty, if you like). You can mutate the prompts with an LLM to try different/shorter phrasings. The Pareto front of passing tests versus prompt size/complexity is (arguably) a measure of how good a job you're doing structuring/documenting your API.
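A minimal sketch of what such a harness could look like (everything here is hypothetical; `run_agent` stands in for whatever agent runtime and tool access you wire up):

```python
# Sketch of an "AI smoke test" harness. run_agent() is a hypothetical
# plug-in point: wire it to an LLM with tool access to the API under
# test (e.g. via an MCP server).
from dataclasses import dataclass

@dataclass
class AgentResult:
    succeeded: bool
    error: str = ""

def run_agent(prompt: str) -> AgentResult:
    """Hypothetical: send `prompt` to an agent and report whether it
    completed the task against the real API."""
    raise NotImplementedError("wire this to your agent runtime")

# High-level use cases, each phrased as the *simplest* prompt we expect
# to pass. If a prompt has to grow to keep passing, the API/docs got worse.
SMOKE_TESTS = {
    "user_roundtrip": "Create a user named Alice, then fetch her profile.",
    "invoice_listing": "List all invoices from the last 30 days.",
}

MAX_PROMPT_WORDS = 25  # crude "regularization penalty" on prompt complexity

def run_smoke_tests() -> None:
    for case_id, prompt in SMOKE_TESTS.items():
        assert len(prompt.split()) <= MAX_PROMPT_WORDS, (
            f"{case_id}: prompt too complex; simplify the API or its docs")
        result = run_agent(prompt)
        assert result.succeeded, f"{case_id}: agent failed ({result.error})"
```

Tracking the smallest passing prompt per use case over time gives you exactly the Pareto front described above.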
truculent
Lovely idea - thanks
Cthulhu_
It's a good tool to use for code reviewing, especially if you don't have peers with Strong Opinions on it.

Which is another issue: indifference. It's hard to find people who actually care about things like API design, let alone multiple people who check each other's work. In my experience, a lot of the time people just get lazy and short-circuit the reviews to "oh, he knows what he's doing, I'm sure he thought long and hard about this".

jilles
It's similar for writing code. Suddenly people are articulating their problems to the LLM and breaking them down into smaller sub-problems to solve...
arscan
In other words, people are discovering the value of standard software engineering practices. Which, I think, is a good thing.
corysama
It has changed how I structure my code. Out of laziness, if I can write the code in such a way that each step follows naturally from what came before, "the code just writes itself!" Except now it's literally true :D
appreciatorBus
Maybe everyone already discovered this, but I find that if I include a lot of detail in my variable names, it's much more likely to autocomplete something useful. If whatever I typed was too verbose for my liking long term, I can always clean it up later with a rename.
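A contrived sketch of the effect (names invented for the example): a descriptive name practically dictates the next line, where a terse one gives the completion engine nothing to go on.

```python
def send_renewal_reminder_email() -> None:  # hypothetical helper
    print("reminder sent")

# `d = 2` tells autocomplete nothing; this name carries the intent:
days_until_subscription_renewal = 2
if days_until_subscription_renewal < 3:
    send_renewal_reminder_email()
```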
Reminds me of that Asimov story where the main character was convinced that some public figure was a robot, and kept trying to prove it. Eventually they concluded that it was impossible to tell whether they were actually a robot "or merely a very good man."
starkparker
From a docs-writing perspective, I've noticed that LLMs in their current state mostly solve the struggle of finding users who want to participate in studies, are mostly literate, and are also fundamentally incompetent.
troupo
It makes the docs human-accessible, too. There are now projects converting Apple's JS-heavy documentation sites to markdown for AI consumption.
Thank you for sharing this, it's really helpful to have as a top-down learning resource.

I'm in the process of learning how to work with AI, and I've been homebrewing something similar with local semantic search for technical content (embedding models via Ollama, ChromaDB for indexing). I'm currently stuck at the step of making unstructured knowledge queryable, so these docs will come in handy for sure. Thanks again!
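For anyone curious, a rough sketch of that pipeline (model and collection names are illustrative, and the ollama client API varies a bit between versions):

```python
# Embed chunks of unstructured docs with a local Ollama model and index
# them in ChromaDB for semantic search.
import chromadb
import ollama

EMBED_MODEL = "nomic-embed-text"  # any local embedding model works

client = chromadb.PersistentClient(path="./index")
docs = client.get_or_create_collection("docs")

def embed(text: str) -> list[float]:
    return ollama.embeddings(model=EMBED_MODEL, prompt=text)["embedding"]

def index_chunks(chunks: list[str]) -> None:
    docs.add(
        ids=[f"chunk-{i}" for i in range(len(chunks))],
        embeddings=[embed(c) for c in chunks],
        documents=chunks,
    )

def query(question: str, k: int = 3) -> list[str]:
    hits = docs.query(query_embeddings=[embed(question)], n_results=k)
    return hits["documents"][0]
```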

esafak
Now people just have a better incentive :)
mooreds
"GEO[0] has entered the chat."

We see a surprising number of folks who discover our product from GenAI solutions (self-reported). I'm not aware of any great tools that help you dissect this, but I'm sure someone is working on them.

0: Generative Engine Optimization

nlawalker
Honest question - what do you mean? What's the better incentive?
esafak
The documentation is now not just for other people, but for your own productivity. If it weren't for the LLM, you might not bother because the knowledge is in your memory. But the LLM does not have access to that yet :)

It's a fortunate turn of events for people who like documentation.

drusepth
This is also the hilarious part of "prompt engineering".

It's just effective linguistics and speech; what people have called "soft skills" forever is now being dressed up as a science, for some reason.

A really effective prompt is created by developing an accurate “mental model” of the model: understanding what tools it does and doesn't have access to, what gives it effective direction, and what leads it astray.

Otherwise known as empathy.

Cthulhu_
It's a bit different, though; the soft skills you mention are usually exercised in real time, or are a chore people don't like doing (writing down specifications/requirements), whereas "prompt engineering" puts people in a problem-solving mental mode not dissimilar to writing code.

(assumption / personal theory)

alganet
I can divide the suggestions into two categories:

1. Stuff that the W3C already researched and defined 20 years ago to make the web better: accessibility, simple semantic HTML that works with no JS, standard formats. All the stuff most companies just plain ignored or sidelined.

2. Suggestions to work around obvious limits of current LLM tech (context size, ambiguity, etc).

There's really nothing to say about category 1, except that a lot of people already said this and were practically mocked for it.

Regarding category 2, it's the first stage of AI failure acceptance. "Ok, it can't reliably reason on human content. But what if we make humans write more dumb instead?"
