
I imagine teachers said similar things when word processors became a thing. And calculators. And spellchecking. And the internet. And wikipedia. And smartphones.

Wasn't it Socrates, in Plato's Phaedrus, who disliked writing because it would erode students' ability to memorize things?

Is there any actual evidence to assume that children are currently "dumb as fuck", or that this is caused by "AI"?


When word processors became popular, educators lamented the loss of handwriting skills. This became generally true. Many people have terrible handwriting, since, as it happens, if you don't practice something, you lose the ability to do that thing over time.

When spellchecking became popular, educators were afraid of students losing the ability to spell. This also became generally true. How many people do you know who depend on spellcheck to write an email?

When the Internet and Wikipedia became popular, educators were afraid of people being unable to do their own research. This became generally true. Many students (afaik) still turn in Google search results and Wikipedia articles as sources, and mis/dis-information is a massive problem that's been hyper-accelerated by the rise of low-information, high-volume social media.

When smartphones became popular, educators were afraid of students getting sucked into them during class. We all know what happened here.

These technologies have certainly made our world better, but let's not forget that real skills were lost in each evolutionary step.

As for "actual evidence" of AI causing educational regression, honestly, talk to teachers. ALL of them have stories on stories on how AI is short-circuiting critical thinking skills.

Is this a problem of teachers lacking teaching skills? We're not getting rid of word processing, spellchecking, phones, calculators, etc. It's up to the teacher (or rather the collective of educators as a whole) to come up with ways of incorporating these tools or working around the challenges they present.

A teacher might be successful in banning some technology from their classroom, but they'll fail at banning it from the lives of their students, or from the world at large.

I have a friend who's an English teacher. She has her students write papers with ChatGPT at home, and has them critique those papers in the classroom. Seems like a much more constructive attitude than saying "AI is making children dumb as fuck" on Reddit.

That's great that your friend is doing this.

Ask them about how the student body has changed over time.

I can almost guarantee you that they will wax poetic about how difficult it is to get their kids off of their phones. Or how curriculum in public schools is slowly but surely being dictated by whatever parents feel is important instead of what's actually important. Or how failing kids is way more difficult than passing them, even if they totally deserved those marks. Or how taking phones away isn't feasible anymore because the blowback their admin, and thus they themselves, will get over it far outweighs the benefits.

Most teachers out there are extremely qualified to do their jobs. They just aren't given the tools or the environment to do that in most cases.

> ALL of them have stories on stories on how AI is short-circuiting critical thinking skills.

My experience of teachers/lecturers is that they usually can't give a precise definition of what they mean by critical thinking skills, explain how their curriculum helps develop them, or explain why this is an important skill to begin with. You'd think people who claim to be good at critical thinking could knock this out of the park!

Usually you just get a sort of semi-circular answer, in which the skill of writing essays that please the profs is defined as skill in critical thinking. If you ask for what specifically they look for, or a course in critical thinking that's specific to critical thinking and independent of their specific subject, they get all huffy. Of course you can't develop such skills without also memorizing lots of critical theory, or archaeological dig sites in Turkey, or whatever.

> Many students (afaik) still turn in Google search results and Wikipedia articles as sources

The alternative is what, academic papers that aren't written for outsiders to learn from at all? Wiki articles cite sources just like academic papers do, and both can be written by anyone. The prohibition against citing Wikipedia never made much sense and people genuinely skilled in critical thinking would challenge it ;)

I can't understand why anyone would argue for being allowed to cite Wikipedia.

It's not uncommon to find claims there that have no citation at all, so it'd have to be required to note whether the part you're using is cited... and if you can do that, what's preventing you from just using the actual source?

Why wouldn't you be able to cite Wikipedia, as long as you specify the date on which you read it? The assumption that it's unreliable doesn't seem to be rooted in any genuine data, especially as I remember teachers and lecturers asserting that immediately the moment it appeared, based on nothing more than "but anyone can edit it!". One might think that was a bit early to leap to conclusions about quality.

It looks a lot like an attempt to preserve the academic culture of all claims being attached to names for reputation and career building reasons.

> It's not uncommon to find claims there that have no citation at all

Obviously. Any claim eventually has to bottom out at either personal experience, or citation of someone else's words. If you just follow the citations to the end you'll end up at a paper that just asserts something without a citation, and that's fine. Academics assume such statements must be reliable because of the institutional affiliation of the authors, but that's hardly a strong basis. Wikipedia does at least have a working system for fixing mistakes that isn't "spend two years arguing with a journal editor to get nothing more than an expression of concern at the top".

There are various levels of rigour. For casual conversation any source is usually enough, even Wikipedia or a Reddit comment. And sometimes anything short of independently verifying is not enough. The question is, where on this spectrum do elementary/high school essays fall?

That's a different question to what I was replying to, but if they do require citations, then it seems like a poor idea to teach anyone to rely on this slight shortcut.

Anecdotally, no one on my computer science course is doing the work. Generously, you could say AI is a learning tool, and it does help some people, but it seems to have replaced a lot of their actual ability to write stuff. Have you really learned something if the only way you can go about it is passing questions to ChatGPT? At that point the AI is doing the work and you are just a data entry person. I should add that this approach doesn't scale well. For instance, there are apps that let you take a picture of an equation and get a worked solution. In the short term, their ability to solve equations seems to have improved, but I don't think they'll learn as much doing it that way.

I have been a part of several group projects, and by this point almost all I've seen anyone else do is use ChatGPT. If we need to do some work and write a report on it, the work won't get done and Mr. GPT takes care of the report. I feel crazy for not using it. I'm not sure how accurate this is, but I remember hearing that their usage metrics drop by something like half over the summer holidays. The scale of this is so great.

I think the worst part is that it's isolating. Perhaps before you would have asked a tutor or a lecturer or a friend to help you with something, but now it's just GPT. There are lab sessions occasionally where a professor will show up and give guidance on the content, and I remember a good few sessions where it would be me and one other guy there out of 100+ people. I would guess everyone else just asks the machine.

Just as an opposing anecdote, when I was doing CS at uni the group project, which was randomly allocated, was just as bad, and this was 20 years ago. Only one other person in my group could coherently program and he was shit. So I did all the code. It was isolating in that situation too. GPT might not be a factor; it's just the flavour of the moment.

I was a bad student because I didn't need to try. Those few of my peers who did try and were good just blew it out the park because the competition was abysmal. This was a uni in the UK with a fairly well respected CS department at the time.

Your anecdote actually makes the AI future look worse and your experience much better.

Setting aside the absolute quality of work and learning, in your situation, the people who tried did a lot better. There were actual incentives to trying and learning because you were rewarded with better grades and then going forward likely a better job and better career and life.

OTOH, the commenter you're replying to is suggesting the opposite: someone who's not using AI and is actually putting in the effort isn't being rewarded for it, to the point of wondering whether they're the stupid one for trying to learn in a learning institution.

The incentives are completely backwards.

Now imagine that those other people decide to contribute by "writing" the report that you're going to get graded on, with no understanding of code and not doing it themselves. Imagine that some people who are semi-competent at programming decide to contribute likewise. It takes them from do-nothing layabouts to active hindrances. Much worse than someone not knowing anything is someone pretending to know something, and I thank my lucky stars that none of them tried to write the code with AI.

> This was a uni in the UK with a fairly well respected CS department at the time.

I feel this sentence.

So-called "professors" are just slow to adapt. University is a huge waste of money currently. ChatGPT is a blessing.

What's the opposite of the slippery slope fallacy, where we convince ourselves that there isn't a slope at all? I agree that there have been doomsayers with every technological advance, but surely there can be a point where a piece of technology truly does remove the need for people to think to a problematic degree.

> What's the opposite of the slippery slope fallacy, where we convince ourselves that there isn't a slope at all?

Sounds like healthy skepticism to me. Assume nothing has changed until proven otherwise.

Assume things do not represent a discontinuous change until proven otherwise.

AI has changed some things, and will change some more. Pretending otherwise isn't healthy skepticism, it's hiding your head in the sand.

The real question is, which things does it change, and how much? Don't assume a discontinuous change without enough evidence, but there is enough evidence that something has changed.

> there is enough evidence that something has changed

Of course, something has changed - every invention changes something, that is almost a tautology. However, is there any evidence that the change is negative and drastic enough to warrant my attention and time of day? I think not.

My mother and I were just discussing how dumb people have gotten in just the past few years. It’s like they don’t think any deeper than a meme - no nuance is considered. You easily see it in the political realm. The only thing we can come up with is social media being the cause. I remember seeing a video (on Reddit) of a teacher teaching his class, and 100% of the students were looking at phones instead of listening. Say what you want about the quality of the teacher, students, etc, but that level of disengagement didn’t happen before social media.

At least in the political realm, memes have worked for a very long time - at least more than a century, maybe even millennia. An example: "Ma, ma, where's my pa? Gone to the White House. Ha ha ha." Another: "Peace. Bread. Land."

So I would say that there seems to have always been a segment of the population on whom political memes were effective - probably more effective than longer discourse.

Now, you could argue that more people are in that camp today. I can't argue with that; I don't have any data one way or the other. But I would at least suggest the alternate possibility that it's more visible today that people are in that camp.

I went to school before smartphones. Back then we used to stare out of the window, throw paper aeroplanes, whisper talk to friends or - mostly - just sit there being insanely bored and checked out. There was never a time when teachers all held their students in rapt attention.

At least with the phones those students might well be learning something, or at least getting some reading practice. Even in the most pessimal case it's not useless. When I was at school there were still teachers whose entire teaching methodology was writing out notes and diagrams on a rolling blackboard, and we copied them onto paper. Literally just human photocopiers! You think we were engaged? No chance. I remember about three facts from years of being in those classes, and those facts are useless. Even scrolling Instagram would have been 10x more educational!

I understand the sentiment, but I have a hard time seeing any educational value in 99% of anything I’ve ever seen on TikTok. There definitely were bad teachers, and I assume still are. But the window and paper airplanes weren’t tuned to suck you in for hours and hold your attention. I still think things are legitimately worse today.

Then forbid those specific apps, not the device itself.

If technology has truly removed a need to think in a certain capacity, then surely it has just expanded our skills enough that we don't need to be so personally skilled in that capacity. We still teach kids basic math, although I do nearly all my daily arithmetic with a calculator these days. The dream of technology is that we will be able to let go of skills that we used to see as essential. That's success, not failure.

I think I agree that AI will be a net benefit, but in a world where it is difficult to have a meaningful conversation with a human because that human is so used to talking to a chatbot, or someone doesn't have the skills to research a problem they're facing because they've always had a chatbot to ask ... I would argue that something of value has been lost.

Being unable to have a meaningful conversation sounds like general technological skepticism to me. I don't see any reason to think talking to chatbots will erode our conversation skills, any more than the internet or TV or books have in the past. People interact in the real world much in the same way they did 100 years ago. As for having the skills to research a problem, is ChatGPT much different than the adoption of search engines? Many adults probably couldn't find information in a library very easily these days, but if they have no need for this skill anyway, I don't see that as a loss.
I don't think your analogy to the internet or TV holds. You don't talk with the TV, so when watching you use a different set of skills. The same goes with the internet.

But with multi-channel, multi-modal AI you can have conversations. If you do that a lot, you might internalize that you don't have to be polite, or say sorry, or even admit your own mistakes. The current AI does not care about those things. But people (the current ones) do care.

Not saying this change is bad or good, but I also don't think there will be no change in how we interact with each other.

It’s not quite a fallacy name, but I’ve heard that called boiling a frog: https://en.m.wikipedia.org/wiki/Boiling_frog

Those things arguably did reduce intelligence. It's frustrating how often people present this point (on varying topics) in a completely standalone fashion. I.e. they say "yeah yeah, people had X complaint at Y historical time too", implying those historical people were wrong because...? They were in the past? The world hasn't ended? Technological progress continues? Usually none of that is evidence to the contrary. In this case, none of that implies that people in developed societies aren't becoming dumber over the centuries. I think people sometimes make the assumption that knowing more is a synonym for intelligence.

> Those things arguably did reduce intelligence.

Citation needed. They may have reduced the competency level in certain skills (e.g. writing in cursive), but I doubt that those effects carry over to "reducing intelligence".

People in developed societies *aren't* becoming dumber over the centuries. We get better at dealing with problems we frequently encounter, and worse at dealing with problems we rarely encounter. Is this a problem? I don't know.

Think it'll be hard to find solid evidence at this point, tbh.

There's a huge difference between having quick access to your local canon, vs the entirety of human output. Writing probably had a -5 effect on social bonding in your local community, whereas AI may comparatively be a -100 on having to think or solve problems at all.

We've seen how social media has created cults and hive minds; AI is probably going to set us on a path where every single thing we do is standardised and optimised. Why spend 1000x the time developing a solution when the entirety of human thought has contributed to achieving it in x time?

It's a reality though, and we have to deal with it. We need a huge refactoring of the education system which takes into account the realities of the modern world, as opposed to grinding people out to work as bureaucrats in bigcorp.

Using AI as a tool to 'get to the next level' should be a big focus of this. Collectively asking as humanity, what can humans actually do which current AI can't, and how can we leverage new tech to move forward. Once we've answered this, we should then plan courses around this.

Ideally this'd end up with doing more work with our hands, getting outside and dirty, doing work on-site, taking part in mock events, massive role playing games, building shit with tight deadlines and requirements, etc.

Bookwork and exercises are done - old hat. They're all solved. The Ghost in the Shell series was hugely influential for me, and it's quite incredible how accurate it's turning out to be. Maybe such topics would benefit from debating matters in future/dystopian worlds from fiction, so we can make sense of what's in store.

"The entirety of human output" is doing a lot of heavy lifting.

Niche topics? Existing LLMs are crap.

Things people think it is actually knowledgeable about? Also crap.

Glue on your pizza? Recommending the user to kill themselves? The code output is equally tragic, but most people are bad devs, so can't see the shortcomings.

You're talking about the "next level" for humans—it's a next-word prediction model. It can't reason or do useful things. Overhyped. This is the same nonsensical rhetoric people here spouted over crypto.

As a teacher with no programming or compsci background, I've developed an entire interactive site which currently uses 50k database records: mapping grammar structures to questions to classical art to interactive quizzes to videos to collocations, etc. It's effectively a Wikipedia of the English language with everything arranged into interactive lessons. 449 Svelte/JS files so far.

ChatGPT helped me achieve this from scratch. The code is probably shite, as I'm chasing my own tail understanding what's happening, but I can only imagine that there's others out there using '''the entirety of human knowledge''' to fast-forward their development and realise their dreams.

Of course, at the very top and bleeding edge, current systems aren't very helpful. And this is where we should be placing our curricula; treating school like a microcosm of elite society.

Why should “niche topics” even exist anymore?

I mean, seriously. The entire internet economy right now is all about companies trying their hardest to make sure you spend all your time on their platforms.

It’s evident at this point that all of them believe AI is the future. So if niche topics exist where the immediate gratification of AI is not suitable, companies are not gonna sit around doing nothing. Either they will try and expand their offerings to cover those niche topics, or they will try and eliminate interest in those niche topics because they represent competition and therefore a threat to their businesses.

> Why should “niche topics” even exist anymore?

I don't care what the "internet economy" wants. People are interested in niche things. If the internet economy won't help them on those things, they'll find something that will.

The calculator example is a good one. Prohibited or repressed for a couple of decades at best because it would "make people unable to do math", as if people wanted to do math by hand.

When word processors became a thing in the 1980s, my teachers loved it. They didn't have to read everyone's scrawly handwriting.

My experience was a bit different. We were forced to use fountain pens, and if your handwriting wasn't good enough, you'd just be punished until it improved. Needless to say, several decades later my handwriting is still awful.
