I would argue a machine that short-circuits the process of getting stuck in obtuse books is actually harmful long term...
Conversations like this are always well intentioned, and friction truly is super useful to learning. But the ‘…’ in these conversations always seems to imply that we should inject friction.
There’s no need. I have peers who aren’t interested in learning at all. Adding friction to their process doesn’t force them to learn. Meanwhile adding friction to the process of my buddies who are avidly researching just sucks.
If your junior isn’t learning it likely has more to do with them just not being interested (which, hey, I get it) than some flaw in your process.
Start asking prospective hires what their favorite books are. It’s the easiest way to find folks who care.
So in principle Gen AI could accelerate learning with deliberate use, but it's hard for the instructor to guide that, especially for less motivated students.
Please read it as: "who knows what you'll find if you take a stop by the library and just browse!"
It’s not as if today’s juniors won’t have their own hairy situations to struggle through, and I bet those struggles will be where they learn too. The problem space will present struggles enough: where’s the virtue in imposing them artificially?
Books often have a "scam trap": highly regarded, much-praised books are frequently only useful if you are already familiar with the topic.
For example: I fell for the scam of buying "Advanced Programming in the UNIX Environment", and a lot of the concepts are only shown, not explained. Wasted money, really. It's one of those books I regret not pirating before buying.
At the end of the day, watching some YouTube video and then referencing the OS-specific manpage is worth much more than reading that book.
I suspect the case to be the same for other "highly-praised" books as well.
AI, on the other hand...
</resurgent-childhood-trauma>
Two things:
1 - I agree with you. A good printed resource is incredibly valuable and should be perfectly valid in this day and age.
2 - many resources are not in print, e.g. API docs, so I'm not sure how books are supposed to help here.
Eventually we will have to somehow convince AI of new and better ways of doing things. It’ll be propaganda campaigns waged by humans to convince God to deploy new instructions to her children.
And this outcome will be obvious very quickly to most observers, won't it? So the magic will occur by pushing AI beyond another limit, or by having people go back to specializing in what will eventually become boring and procedural until AI catches up.
The arguments were similar, too: What will you do if Google goes down? What if Google gives the wrong answer? What if you become dependent on Google? Yet I'm willing to bet that everyone reading this uses search engines as a tool to find what they need quickly on a daily basis.
Microsoft docs are a really good example of this where just looking through the ToC on the left usually exposes me to some capability or feature of the tooling that 1) I was not previously aware of and 2) I was not explicitly searching for.
The point is that the path to a singular answer can often include discovery of unrelated insight along the way. When you only get the answer to what you are asking, you lose that process of organic discovery of the broader surface area of the tooling or platform you are operating in.
I would liken AI search/summaries to visiting only the well-known, touristy spots. Sure, you can get shuttled to that restaurant or that spot that everyone visits and posts on socials, but in traveling that way, you will miss all of the other amazing food, shops, and sights along the way that you might encounter by walking instead. Reading the docs is more like exploring the random nooks and crannies and finding experiences you weren't expecting and ultimately knowing more about the place you visited than if you had only visited the major tourist destinations.
As a senior dev, I generally have a good idea of what to ask for because I have built many systems and learned many things along the way. A junior dev? They may not know what to ask for and, therefore, may never discover those "detours" that would yield additional insights to tuck into the manifolds of their brains for future reference. For the junior dev, it's like the only trip they will experience is one where they just go to the well-known tourist traps instead of exploring and discovering.
Of course no-one's stopping a junior from doing it the old way, but no-one's teaching them they can, either.
So if AI gets you iterating faster and testing your assumptions/hypotheses, I would say that's a net win. If you're just begging it to solve the problem for you with different wording - then yeah, you are reducing yourself to a shitty LLM proxy.
Maybe. The naturally curious will also typically be slower to arrive at a solution due to their curiosity and interest in making certain they have all the facts.
If everyone else is racing ahead, will the slowpokes be rewarded for their comprehension or punished for their poor metrics?
It's always possible to go slower (with diminishing benefits).
Or, putting it in terms of benefits and risks/costs: I think it's fair to treat "fast with shallow understanding" and "slower but deeper understanding" as different ends of a continuum.
I think what's preferable somewhat depends on context & attitude of "what's the cost of making a mistake?". If making a mistake is expensive, surely it's better to take an approach which has more comprehensive understanding. If mistakes are cheap, surely faster iteration time is better.
The impact of LLM tools? They increase the impact of both cases. It's quicker to build a comprehensive understanding by making use of LLM tools, similar to how autocompletion or high-level programming languages can speed up development.
Yes. And now you can ask the AI where the docs are.
The struggling is not the goal. And rest assured there are plenty of other things to struggle with.
There's a lot of good documentation where you learn more about the context of how or why something is done a certain way.
Also the difference between using it to find information versus delegating executive-function.
I'm afraid there will be a portion of workers who lean heavily on "Now what do I do next, Robot Soulmate?" as a crutch.
Any task has “core difficulty” and “incidental difficulty”. Struggling with docs is incidental difficulty, it’s a tax on energy and focus.
Your argument is an argument against the use of Google or StackOverflow.
Complaining about docs is like complaining that research articles aren't written like elementary-school textbooks.
And yet I wouldn’t trust a single word coming out of the mouth of someone who couldn’t understand Hegel so they read an AI summary instead.
There is value in struggling through difficult things.
If you can just get to the answer immediately, what’s the value of the struggle?
Research isn't time spent coding, so it's not making the developer less familiar with the code base she's responsible for, which is the usual worry with AI.
If you read great books all the time, you will find yourself more skilled at identifying good versus bad writing.
It's no different now; the level of effort required to get a copy of the code is just lower.
Whenever I use AI, I sit and read and understand every line before pushing. It's not hard. I learn more.
1995: struggling with docs and learning how and where to find the answers was part of the learning process
2005: struggling with stackoverflow and learning how to find answers to questions that others have asked before quickly is part of the learning process
2015: using search to find answers is part of the learning process
2025: using AI to get answers is part of the learning process
...
To the extent that learning to punch your own punch cards was useful, it was because you needed to understand the kinds of failures that would occur if the punch cards weren't punched properly. However, this was never really a big part of programming, and often it was off-loaded to people other than the programmers.
In 1995, most of the struggling with the docs was because the docs were of poor quality. Some people did publish decent documentation, either in books or digitally. The Microsoft KB articles were helpfully available on CD-ROM, for those without an internet connection, and were quite easy to reference.
Stack Overflow did not exist in 2005, and it was very much born from an environment in which search engines were in use. You could swap your 2005 and 2015 entries, and it would be more accurate.
No comment on your 2025 entry.
I thought all computer scientists heard about Dijkstra making this claim at one time in their careers. I guess I was wrong? Here is the context:
> A famous computer scientist, Edsger Dijkstra, did complain about interactive terminals, essentially favoring the disciplined approach required by punch cards and batch processing.
> While many programmers embraced the interactivity and immediate feedback of terminals, Dijkstra argued that the "trial and error" approach fostered by interactive systems led to sloppy thinking and poor program design. He believed that the batch processing environment, which necessitated careful, error-free coding before submission, instilled the discipline necessary for writing robust, well-thought-out code.
> "On the Cruelty of Really Teaching Computing Science" (EWD 1036) (1988 lecture/essay)
Seriously, the laments I hear now have been the same throughout my entire career as a computer scientist. Let's just look toward 2035, when someone on HN will complain that some old way of doing things is better than the new way because it's harder and wearing hair shirts is good for building character.
> The naive approach to this situation is that we must be able to modify an existing program […] The task is then viewed as one of text manipulation; as an aside we may recall that the need to do so has been used as an argument in favour of punched cards as against paper tape as an input medium for program texts. The actual modification of a program text, however, is a clerical matter, which can be dealt with in many different ways; my point is […]
He then goes on to describe what today we'd call "forking" or "conditional compilation" (in those days, there was little difference). "Using AI to get answers", indeed. At least you had the decency to use blockquote syntax, but it's tremendously impolite to copy-paste AI slop at people. If you're going to ingest it, do so in private, not in front of a public discussion forum.
The position you've attributed to Dijkstra is defensible – but it's not the same thing at all as punching the cards yourself. The modern-day equivalent would be running the full test suite only in CI, after you've opened a pull request: you're motivated to program in a fashion that ensures you won't break the tests, as opposed to just iterating until the tests are green (and woe betide there's a gap in the coverage), because it will be clear to your colleagues if you've just made changes willy-nilly and broken some unrelated part of the program and that's a little bit embarrassing.
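To make the analogy concrete, here is a minimal sketch of that kind of setup, assuming GitHub Actions and a Python project with pytest (both are illustrative choices, not from the thread): the full test suite runs only when a pull request is opened or updated, not on every local edit, so the incentive is to think before submitting.

```yaml
# .github/workflows/tests.yml (hypothetical example)
# The suite runs only against pull requests, so a careless change shows up
# as a visibly red PR in front of your colleagues -- the social analogue of
# the discipline batch processing used to impose.
name: tests
on:
  pull_request:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: python -m pytest
```

The design point is the trigger: `on: pull_request` rather than `on: push`, so feedback arrives only once the work is public.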
I would recommend reading EWD1035 and EWD1036: actually reading them, not just getting the AI to summarise them. While you'll certainly disagree with parts, the fundamental point that E.W.Dijkstra was making in those essays is correct. You may also find EWD514 relevant – but if I linked every one of Dijkstra's essays that I find useful, we'd be here all day.
I'll leave you with a passage from EWD480, which broadly refutes your mischaracterisation of Dijkstra's opinion (and serves as a criticism of your general approach):
> This disastrous blending deserves a special warning, and it does not suffice to point out that there exists a point of view of programming in which punched cards are as irrelevant as the question whether you do your mathematics with a pencil or with a ballpoint. It deserves a special warning because, besides being disastrous, it is so respectable! […] And when someone has the temerity of pointing out to you that most of the knowledge you broadcast is at best of moderate relevance and rather volatile, and probably even confusing, you can shrug out your shoulders and say "It is the best there is, isn't it?" As if there were an excuse for acting like teaching a discipline, that, upon closer scrutiny, is discovered not to be there.... Yet I am afraid, that this form of teaching computing science is very common. How else can we explain the often voiced opinion that the half-life of a computing scientist is about five years? What else is this than saying that he has been taught trash and tripe?
The full text of much of the EWD series can be found at https://www.cs.utexas.edu/~EWD/.
XML-oriented programming and other stuff was "invented" back then.
Now get back to work.
I would argue a machine that short circuits the process of getting stuck in obtuse documentation is actually harmful long term...