But the real problem with docs is that for MOST use cases, the audience and context of the readers matter HUGELY. Most docs are bad because we can't predict those. People waste ridiculous amounts of time writing docs that nobody reads or nobody needs, based on hypotheses about the future that turn out to be false.
And _that_ is completely different when you're writing context-window documents. These aren't really documents describing a codebase, or the context it exists in, in some timeless fashion; they're better understood as part of a _current_ plan for action on an acute, real concern. They're battle-tested in a way docs only rarely are. And as a bonus, sure, they're retainable and might help with the next problem too, but that's not why they work; they work because they're useful, in an almost testable way, right away.
The exceptions to this pattern kind of prove the rule. For years people have done better at documenting isolatable dependencies, i.e. libraries, precisely because those sit at boundaries where it's easier to make decent predictions about future usage, and often because those docs have a far larger readership, so it's more worth risking effort on a hypothesis about the future that turns out to be wrong; the cost/benefit is skewed towards the benefit by sheer numbers and the kind of code it is.
Having said that, the dust hasn't settled on the best way to distill context like this. It'd be a mistake to overanalyze the current situation and conclude that documentation is certain to be the long-term answer. It's definitely helpful now, but it's certainly conceivable that more automated and structured representations will emerge, in forms better suited to machine consumption that look a little more alien to us than conventional docs.
They are useful to the LLM in writing the code (which comes after).
But when it comes to an LLM reading that code later, it's just a waste of context.
For humans, it's a waste of screen space.
A comment should only explain what the following thing does if it's hard to parse for some reason.
Otherwise it should add information: why something is the way it is (e.g. some special case), breadcrumbs to other bits of the code, etc.
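To illustrate the difference, here's a made-up Python snippet (the names, including report_usage(), are invented for the example):

```python
# Redundant "what" comment: it just restates the line below.
# Increment the retry counter
retries += 1

# Useful "why" comment: it records the special case and leaves a breadcrumb.
# The upstream API treats the first attempt as retry 0, so we start at 1
# to keep our numbers consistent with the usage report (see report_usage()).
retries = 1
```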
I wish these coding agents had a post step to remove any LLM-ish comments they added during writing, and I want linters that flag these.
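Such a linter could start as a very crude heuristic pass that flags comments which merely narrate the next line. A rough sketch, with guessed patterns rather than a vetted ruleset:

```python
#!/usr/bin/env python3
"""Sketch of a lint pass that flags comments which merely restate the code.
The regex below is an illustrative guess, not a real rule."""
import re
import sys

# Heuristic for "LLM-ish" narration comments: imperative restatements of the
# following line, e.g. "Initialize the ...", "Loop over ...", "Return the ...".
NARRATION = re.compile(
    r"^\s*#\s*(initialize|create|define|loop over|iterate|call|return|set)\b",
    re.IGNORECASE,
)

def lint(path: str) -> int:
    """Print every narration-style comment in the file; return the hit count."""
    hits = 0
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            if NARRATION.match(line):
                print(f"{path}:{lineno}: comment restates the code: {line.strip()}")
                hits += 1
    return hits

if __name__ == "__main__":
    total = sum(lint(p) for p in sys.argv[1:])
    sys.exit(1 if total else 0)
```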
There's a piece of common knowledge that NBA players could all hit over 90% on free throws if they shot underhand (granny style). But for pride reasons, they don't. Shaq shot just 52%, even though it'd be free points if he could easily shoot better.
I suspect there are similar things in software engineering. I've seen plenty of comments on HN about "adding code comments like a junior software engineer" or similar sentiments. Sure, there are legitimate gripes about comments (like how they can be misleading if you update the code without changing the comment), but I strongly suspect they increase comprehension of code overall.
Personally, I remove redundant comments AI adds specifically to demonstrate that I have reviewed the code and believe the AI's description is accurate. In many cases AIs will even add comments that only make sense as a response to my prompt and don't make any sense in context.
Comments may go out of date, but LLMs can quickly generate comments that are up to date. If anything, LLMs make the situation you described less likely.
> In many cases AIs will even add comments that only make sense as a response to my prompt and don't make any sense in-context.
Yeah, this is a common issue to be fair.
Say I wrote a specific comment about why this fencepost needs special consideration. The agent will come through and replace that reasoned comment with "Add one to index".
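A contrived example of that failure mode (the variable names are invented):

```python
# Original comment, written by a human who hit the bug:
# The upstream range is inclusive on both ends, but Python slices are
# exclusive at the top, so we need end + 1 or we silently drop the last item.
chunk = items[start:end + 1]

# After the agent "tidies up":
# Add one to index
chunk = items[start:end + 1]
```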
Most of the time when the LLM is misbehaving, it's my fault for leaving outdated instructions.
If you are a developer who is not writing documents for consumption by AI, you are primarily writing them for someone who is not you; you do not know what that person will need or whether they will ever even look at them.
They may, of course, help you too, but you may not realize that, or have the time or discipline.
If you are writing them because the AI using them will help you, you have a very strong and immediate incentive to document the necessary information. You also have the benefit of a short feedback loop.
Side note: thanks to LLMs' penchant for wiping out comments, I have a lot more docs these days and far fewer comments.