
Isn't the struggling with docs and learning how and where to find the answers part of the learning process?

I would argue a machine that short-circuits the process of getting stuck in obtuse documentation is actually harmful long term...


Isn't the struggle of sifting through a labyrinth of physical books and learning how and where to find the right answers part of the learning process?

I would argue a machine that short-circuits the process of getting stuck in obtuse books is actually harmful long term...

It may well be. Books have tons of useful expository material that you may not find in docs. A library has related books sitting in close proximity to one another. I don't know how many times I've gone to a library looking for one thing but ended up finding something much more interesting. Or to just go to the library with no end goal in mind...
Speaking as a junior, I’m happy to do this on my own (and do!).

Conversations like this are always well intentioned, and friction truly is super useful to learning. But the ‘…’ in these conversations always seems to imply that we should inject friction.

There’s no need. I have peers who aren’t interested in learning at all. Adding friction to their process doesn’t force them to learn. Meanwhile adding friction to the process of my buddies who are avidly researching just sucks.

If your junior isn’t learning it likely has more to do with them just not being interested (which, hey, I get it) than some flaw in your process.

Start asking prospective hires what their favorite books are. It’s the easiest way to find folks who care.

I’ll also make the observation that the extra time spent is very valuable if your sole objective is learning, but often the Business™ needs something working ASAP.
It's not that friction is always good for learning, either. If you've ever prepared course materials, you know that it's important to reduce friction in the irrelevant parts, so that students don't get distracted and demotivated, and so that time and energy are spent on what they need to learn.

So in principle Gen AI could accelerate learning with deliberate use, but it's hard for the instructor to guide that, especially for less motivated students.

You're reading a lot into my ellipsis that isn't there. :-)

Please read it as: "who knows what you'll find if you stop by the library and just browse!"

I admire your attitude and the clarity of your thought.

It’s not as if today’s juniors won’t have their own hairy situations to struggle through, and I bet those struggles will be where they learn too. The problem space will present struggles enough: where’s the virtue in imposing them artificially?

This should be possible online; it would be, if more journals were open access.
Disagree, actually. Having spent a lot of time publishing papers in those very journals, I can tell you that just browsing a journal is much less conducive to discovering a new area to dive into than going to a library and reading a book. IME, books tend to synthesize and collect important results and present them in an understandable (pedagogical?!) way that most journals do not, especially considering that many papers (nowadays) are written primarily to build people's tenure packets and secure grant funding. Older papers aren't quite so bad this way (say, pre-2000).
I've done professional ghostwriting for published nonfiction authors. Many such titles are literally a synthesis of x-number of published papers and books. It is all an industry of sorts.
I think I don’t disagree. Only, it would at least be easier to trace the research concept you are interested in back to a nice '70s paper or a textbook.
You could make much the same observation about online search results.
> It may well be. Books have tons of useful expository material that you may not find in docs

Books often have the "scam trap": highly regarded/praised books that are only useful if you are already familiar with the topic.

For example: I fell for the scam of buying "Advanced Programming in the UNIX Environment", and a lot of concepts are only shown but not explained. Wasted money, really. It's one of those books I regret not pirating before buying.

At the end of the day, watching some YouTube video and then referencing the OS-specific manpage is worth much more than reading that book.

I suspect the case to be the same for other "highly-praised" books as well.

When I first opened QBasic, <N> years ago, when I was a wee lad, the online QBasic help didn't replace my trusty qbasic book (it supplemented it, maybe), nor did it write the programs for me. It was just there, doing nothing, waiting for me to press F1.

AI, on the other hand...

I couldn't make heads or tails of the QBasic help back in the day. I wanted to. I remember reading the sections about integers and booleans and trying to make sense out of them. I think I did manage to figure out how to use subroutines eventually, but it took quite a lot of time and frustration. I wish I'd had a book... or a deeper programming class. The one I had never went further than loops. No arrays, etc.

</resurgent-childhood-trauma>

You posted this in jest but it's literally true. You need to read the whole book to get the context. You SHOULD be reading the manuals and the docs. They weren't written because they're fun.
I'm not sure what you are trying to say here, or whether you are trying to diminish my statement by claiming that online documentation causes harm of the same magnitude as using a book?

Two things:

1 - I agree with you. A good printed resource is incredibly valuable and should be perfectly valid in this day and age.

2 - many resources are not in print, e.g. API docs, so I'm not sure how books are supposed to help here.

It’s an interesting question, isn’t it? There are obvious advantages to being able to find information quickly and precisely. However, the search becomes much narrower, and what must inevitably result is a homogeneity of outcomes.

Eventually we will have to somehow convince AI of new and better ways of doing things. It’ll be propaganda campaigns waged by humans to convince God to deploy new instructions to her children.

> inevitably result is a homogeneity of outcomes

And this outcome will be obvious very quickly to most observers, won't it? So the magic will occur by pushing AI beyond another limit, or by having people go back to specializing in what will eventually become boring and procedural until AI catches up.

Well, yes -- this is why I still sit down and read the damn books. The machine is useful to refresh my memory.
learning to learn
I recall similar arguments being made against search engines: People who had built up a library of internal knowledge about where and how to find things didn't like that it had become so easy to search for resources.

The arguments were similar, too: What will you do if Google goes down? What if Google gives the wrong answer? What if you become dependent on Google? Yet I'm willing to bet that everyone reading this uses search engines as a tool to find what they need quickly on a daily basis.

I argue that there is a strong, strong benefit to reading the docs: you often pick up additional context and details that would be missing in a summary.

Microsoft docs are a really good example of this where just looking through the ToC on the left usually exposes me to some capability or feature of the tooling that 1) I was not previously aware of and 2) I was not explicitly searching for.

The point is that the path to a singular answer can often include discovery of unrelated insight along the way. When you only get the answer to what you are asking, you lose that process of organic discovery of the broader surface area of the tooling or platform you are operating in.

I would liken AI search/summaries to visiting only the well-known, touristy spots. Sure, you can get shuttled to that restaurant or that spot that everyone visits and posts on socials, but in traveling that way, you will miss all of the other amazing food, shops, and sights along the way that you might encounter by walking instead. Reading the docs is more like exploring the random nooks and crannies and finding experiences you weren't expecting and ultimately knowing more about the place you visited than if you had only visited the major tourist destinations.

As a senior dev, I generally have a good idea of what to ask for because I have built many systems and learned many things along the way. A junior dev? They may not know what to ask for and, therefore, may never discover those "detours" that would yield additional insights to tuck into the manifolds of their brains for future reference. For the junior dev, it's like the only trip they will experience is one where they just go to the well-known tourist traps instead of exploring and discovering.

I have been online since 1993 on Usenet. That was definitely not a widespread belief. We thought DejaNews was a godsend.
It's possible those arguments are correct. I wouldn't give up Google and SO, but I suspect I was learning faster when my first stop was K&R or a man page. There's a lot of benefit in building your own library of knowledge instead of cribbing from someone else's.

Of course no-one's stopping a junior from doing it the old way, but no-one's teaching them they can, either.

No, trying stuff out is the valuable process. How I search for information changed (dramatically) in the last 20 years I've been programming. My intuition about how programs work is still relevant - you'll still see graybeards saying "there's a paper from 70s talking about that" for every "new" fad in programming, and they are usually right.

So if AI gets you iterating faster and testing your assumptions/hypotheses, I would say that's a net win. If you're just begging it to solve the problem for you with different wording, then yeah, you are reducing yourself to a shitty LLM proxy.

The naturally curious will remain naturally curious and be rewarded for it, everyone else will always take the shortest path offered to complete the task.
> The naturally curious will remain naturally curious and be rewarded for it

Maybe. The naturally curious will also typically be slower to arrive at a solution due to their curiosity and interest in making certain they have all the facts.

If everyone else is racing ahead, will the slowpokes be rewarded for their comprehension or punished for their poor metrics?

> If everyone else is racing ahead, will the slowpokes be rewarded for their comprehension or punished for their poor metrics?

It's always possible to go slower (with diminishing benefits).

Or, putting it in terms of benefits and risks/costs: I think it's fair to place "fast with shallow understanding" and "slower but deeper understanding" at different ends of a continuum.

I think what's preferable somewhat depends on context & attitude of "what's the cost of making a mistake?". If making a mistake is expensive, surely it's better to take an approach which has more comprehensive understanding. If mistakes are cheap, surely faster iteration time is better.

As for the impact of LLM tools: they amplify both cases. They make it quicker to build a comprehensive understanding, similar to how autocompletion or high-level programming languages can speed up development.

> learning how and where to find the answers part of the learning process?

Yes. And now you can ask the AI where the docs are.

The struggling is not the goal. And rest assured there are plenty of other things to struggle with.

The thing is, you need both. You need periods where you are reading through the docs, learning random things and just expanding your knowledge, but the time to do that is not when you are trying to work out how to get a string into the right byte format and saved in the database as a blob (or whatever it is). Documentation has always had lots of different uses, and the one that gets you answers to direct questions has improved a bit, but it's not really reliable yet, so you are still going to have to check it.
The problem isn't that AI makes obtuse documentation usable. It's that it makes good documentation unread.

There's a lot of good documentation where you learn more about the context of how or why something is done a certain way.

I think if this were true, then individualized mastery learning wouldn't prove to be so effective.

https://en.wikipedia.org/wiki/Mastery_learning

Except none of us have a master teaching and verifying our knowledge on how to use a library. And AI doesn’t do that either.
The best part is when the AI just makes up the docs
It really depends on what's being learned. For example, take writing scripts based on the AWS SDK. The API documentation is gigantic (and poorly designed, as it takes ages to load the documentation of each entry), and one uses only a tiny fraction of the APIs. I don't find "learning to find the right APIs" valuable knowledge; rather, I find "learning to design a (small) program/script starting from a basic example" valuable, since I waste less time on menial tasks (i.e. textual search).
> It really depends on what's being learned.

Also the difference between using it to find information versus delegating executive-function.

I'm afraid there will be a portion of workers who crutch heavily on "Now what do I do next, Robot Soulmate?"

No :)

Any task has “core difficulty” and “incidental difficulty”. Struggling with docs is incidental difficulty, it’s a tax on energy and focus.

Your argument is an argument against the use of Google or StackOverflow.

Not really. There’s a pattern to reading docs, just like there’s a pattern to reading code. Once you’ve grasped it, your speed increases a lot. The slowness a junior has comes from a lack of understanding.

Complaining about docs is like complaining that research articles aren’t written like elementary school textbooks.

If the docs are poorly written, then you're not learning anything except how to control frustration.
Struggling with poorly organized docs seems entirely like incidental complexity to me. Good learning resources can be both faster and better pedagogically. (How good today's LLM-based chat tools are is a totally separate question.)
Nobody said anything about poorly organized docs. Reading well structured and organized complex material is immensely difficult. Anyone who’s read Hegel can attest to that.

And yet I wouldn’t trust a single word coming out of the mouth of someone who couldn’t understand Hegel so they read an AI summary instead.

There is value in struggling through difficult things.

Why?

If you can just get to the answer immediately, what’s the value of the struggle?

Research isn’t time coding. So it’s not making the developer less familiar with the code base she’s responsible for. Which is the usual worry with AI.

Disagree. While documentation is often out of date, the threshold for maintaining it properly has been lowered, so your team should be doing everything it can to surface effective docs to devs and AIs looking for them. This, in turn, also lowers the barrier to writing good docs since your team's exposure to good docs increases.

If you read great books all the time, you will find yourself more skilled at identifying good versus bad writing.

Feel free to waste your time sifting through a dozen wrong answers. Meanwhile, the rest of us can get the answers, absorb the right information quickly, then move on to solving more problems.
And you will have learned nothing in the process. Congratulations, you are now behind your peer who "wasted his time" but actually knows stuff which he can lean on in the future.
This is a wrong take. People learn plenty while using AI; it's how you use it that matters. The same issue came up years ago if you just copied Stack Overflow answers without understanding what you were doing.

It's no different now; just the level of effort required to get the code to copy is lower.

Whenever I use AI, I sit and read and understand every line before pushing. It's not hard. I learn more.

Yes, it is. And yes, it absolutely is harmful.
1965: learning how to punch your own punch cards is part of the learning process

1995: struggling with docs and learning how and where to find the answers is part of the learning process

2005: struggling with stackoverflow and learning how to find answers to questions that others have asked before quickly is part of the learning process

2015: using search to find answers is part of the learning process

2025: using AI to get answers is part of the learning process

...

This is both anachronistic and wrong.

To the extent that learning to punch your own punch cards was useful, it was because you needed to understand the kinds of failures that would occur if the punch cards weren't punched properly. However, this was never really a big part of programming, and often it was off-loaded to people other than the programmers.

In 1995, most of the struggling with the docs was because the docs were of poor quality. Some people did publish decent documentation, either in books or digitally. The Microsoft KB articles were helpfully available on CD-ROM, for those without an internet connection, and were quite easy to reference.

Stack Overflow did not exist in 2005, and it was very much born from an environment in which search engines were in use. You could swap your 2005 and 2015 entries, and it would be more accurate.

No comment on your 2025 entry.

> To the extent that learning to punch your own punch cards was useful, it was because you needed to understand the kinds of failures that would occur if the punch cards weren't punched properly. However, this was never really a big part of programming, and often it was off-loaded to people other than the programmers.

I thought all computer scientists heard about Dijkstra making this claim at one time in their careers. I guess I was wrong? Here is the context:

> A famous computer scientist, Edsger Dijkstra, did complain about interactive terminals, essentially favoring the disciplined approach required by punch cards and batch processing.

> While many programmers embraced the interactivity and immediate feedback of terminals, Dijkstra argued that the "trial and error" approach fostered by interactive systems led to sloppy thinking and poor program design. He believed that the batch processing environment, which necessitated careful, error-free coding before submission, instilled the discipline necessary for writing robust, well-thought-out code.

> "On the Cruelty of Really Teaching Computing Science" (EWD 1036) (1988 lecture/essay)

Seriously, the laments I hear now have been the same throughout my entire career as a computer scientist. Let's just look toward 2035, when someone on HN will complain that some old way of doing things is better than the new way because it's harder and wearing hair shirts is good for building character.

Dijkstra did not make that claim in EWD1036. The general philosophy you're alluding to is described in EWD249, which – as it happens – does mention punchcards:

> The naive approach to this situation is that we must be able to modify an existing program […] The task is then viewed as one of text manipulation; as an aside we may recall that the need to do so has been used as an argument in favour of punched cards as against paper tape as an input medium for program texts. The actual modification of a program text, however, is a clerical matter, which can be dealt with in many different ways; my point is […]

He then goes on to describe what today we'd call "forking" or "conditional compilation" (in those days, there was little difference). "Using AI to get answers", indeed. At least you had the decency to use blockquote syntax, but it's tremendously impolite to copy-paste AI slop at people. If you're going to ingest it, do so in private, not in front of a public discussion forum.

The position you've attributed to Dijkstra is defensible – but it's not the same thing at all as punching the cards yourself. The modern-day equivalent would be running the full test suite only in CI, after you've opened a pull request: you're motivated to program in a fashion that ensures you won't break the tests, as opposed to just iterating until the tests are green (and woe betide there's a gap in the coverage), because it will be clear to your colleagues if you've just made changes willy-nilly and broken some unrelated part of the program and that's a little bit embarrassing.

I would recommend reading EWD1035 and EWD1036: actually reading them, not just getting the AI to summarise them. While you'll certainly disagree with parts, the fundamental point that E.W.Dijkstra was making in those essays is correct. You may also find EWD514 relevant – but if I linked every one of Dijkstra's essays that I find useful, we'd be here all day.

I'll leave you with a passage from EWD480, which broadly refutes your mischaracterisation of Dijkstra's opinion (and serves as a criticism of your general approach):

> This disastrous blending deserves a special warning, and it does not suffice to point out that there exists a point of view of programming in which punched cards are as irrelevant as the question whether you do your mathematics with a pencil or with a ballpoint. It deserves a special warning because, besides being disastrous, it is so respectable! […] And when someone has the temerity of pointing out to you that most of the knowledge you broadcast is at best of moderate relevance and rather volatile, and probably even confusing, you can shrug out your shoulders and say "It is the best there is, isn't it?" As if there were an excuse for acting like teaching a discipline, that, upon closer scrutiny, is discovered not to be there.... Yet I am afraid, that this form of teaching computing science is very common. How else can we explain the often voiced opinion that the half-life of a computing scientist is about five years? What else is this than saying that he has been taught trash and tripe?

The full text of much of the EWD series can be found at https://www.cs.utexas.edu/~EWD/.

Has the quality of software been improving all this time?
Absolutely. I missed the punch card days, but have been here for the rest, and software quality is way higher (overall) than it used to be.
The volume of software that we have produced with new tools has increased dramatically. The quality has remained at a level that the market can accept (and it doesn't want to bother paying for more quality for the cost of it).
Sure, people were writing terrible code 25 years ago.

XML-oriented programming and other stuff was "invented" back then.

Unironically, yes.

Now get back to work.
