When spellchecking became popular, educators were afraid of students losing the ability to spell. This fear became generally true. How many people do you know who depend on spellcheck to write an email?
When the Internet and Wikipedia became popular, educators were afraid of people becoming unable to do their own research. This, too, became generally true. Many students (afaik) still turn in Google search results and Wikipedia articles as sources, and mis/dis-information is a massive problem that's been hyper-accelerated by the rise of low-information, high-volume social media.
When smartphones became popular, educators were afraid of students getting sucked into them during class. We all know what happened here.
These technologies have certainly made our world better, but let's not forget that real skills were lost in each evolutionary step.
As for "actual evidence" of AI causing educational regression: honestly, talk to teachers. ALL of them have stories upon stories about how AI is short-circuiting critical thinking skills.
A teacher might be successful in banning some technology from their classroom, but they'll fail at banning it from the lives of their students, or from the world at large.
I have a friend who's an English teacher. She has her students write papers with ChatGPT at home, and has them critique those papers in the classroom. Seems like a much more constructive attitude than saying "AI is making children dumb as fuck" on Reddit.
Ask them about how the student body has changed over time.
I can almost guarantee you that they will wax poetic about how difficult it is to get their kids off of their phones. Or how curriculum in public schools is slowly but surely being dictated by whatever parents feel is important instead of what's actually important. Or how failing kids is way more difficult than passing them, even when they totally deserved those marks. Or how taking phones away isn't feasible anymore because the blowback their admin, and thus they, will get over it far outweighs the benefits.
Most teachers out there are extremely qualified to do their jobs. They just aren't given the tools or the environment to do that in most cases.
My experience of teachers/lecturers is that they usually can't give a precise definition of what they mean by critical thinking skills, explain how their curriculum helps develop them, or explain why this is an important skill to begin with. You'd think people who claim to be good at critical thinking could knock this out of the park!
Usually you just get a sort of semi-circular answer, in which the skill of writing essays that please the profs is defined as skill in critical thinking. If you ask what specifically they look for, or for a course in critical thinking that's independent of their particular subject, they get all huffy. Of course you can't develop such skills without also memorizing lots of critical theory, or archaeological dig sites in Turkey, or whatever.
> Many students (afaik) still turn in Google search results and Wikipedia articles as sources
The alternative is what, academic papers that aren't written for outsiders to learn from at all? Wiki articles cite sources just like academic papers do, and both can be written by anyone. The prohibition against citing Wikipedia never made much sense and people genuinely skilled in critical thinking would challenge it ;)
It's not uncommon to find claims there that have no citation at all, so any such rule would have to require noting whether the part you're using is cited... and if you can do that, what's preventing you from just citing the actual source?
It looks a lot like an attempt to preserve the academic culture of all claims being attached to names for reputation and career building reasons.
> It's not uncommon to find claims there that have no citation at all
Obviously. Any claim eventually has to bottom out at either personal experience, or citation of someone else's words. If you just follow the citations to the end you'll end up at a paper that just asserts something without a citation, and that's fine. Academics assume such statements must be reliable because of the institutional affiliation of the authors, but that's hardly a strong basis. Wikipedia does at least have a working system for fixing mistakes that isn't "spend two years arguing with a journal editor to get nothing more than an expression of concern at the top".
I have been a part of several group projects, and by this point almost all I've seen anyone else do is use ChatGPT. If we need to do some work and write a report on it, the work won't get done and Mr. GPT takes care of the report. I feel crazy for not using it. I'm not sure how accurate this is, but I remember hearing that ChatGPT's usage metrics drop by something like half over the summer holidays. The scale of this is staggering.
I think the worst part is that it's isolating. Perhaps before you would have asked a tutor or a lecturer or a friend to help you with something, but now it's just GPT. There are occasional lab sessions where a professor will show up and give guidance on the content, and I remember a good few sessions where it would be me and one other guy there out of 100+ people. I would guess everyone else just asks the machine.
I was a bad student because I didn't need to try. Those few of my peers who did try and were good just blew it out the park because the competition was abysmal. This was a uni in the UK with a fairly well respected CS department at the time.
Setting aside the absolute quality of work and learning: in your situation, the people who tried did a lot better. There were actual incentives to try and to learn, because you were rewarded with better grades and then, going forward, likely a better job, a better career, and a better life.
OTOH, the commenter you're replying to is suggesting the opposite. As someone who's not using AI and is actually putting in the effort, they're not being rewarded for it, to the point that they're wondering whether they're the stupid ones for trying to learn at a learning institution.
The incentives are completely backwards.
> This was a uni in the UK with a fairly well respected CS department at the time.
I feel this sentence.
Sounds like healthy skepticism to me. Assume nothing has changed until proven otherwise.
AI has changed some things, and will change some more. Pretending otherwise isn't healthy skepticism, it's hiding your head in the sand.
The real question is, which things does it change, and how much? Don't assume a discontinuous change without enough evidence, but there is enough evidence that something has changed.
Of course something has changed - every invention changes something; that is almost a tautology. However, is there any evidence that the change is negative and drastic enough to warrant my attention and time of day? I think not.
I would say that there seems to have always been a segment of the population on whom political memes were effective - probably more effective than longer discourse.
Now, you could argue that more people are in that camp today. I can't argue with that; I don't have any data one way or the other. But I would at least suggest the alternate possibility that it's more visible today that people are in that camp.
At least with the phones those students might well be learning something, or at least getting some reading practice. Even in the most pessimal case it's not useless. When I was at school there were still teachers whose entire teaching methodology was writing out notes and diagrams on a rolling blackboard, and we copied them onto paper. Literally just human photocopiers! You think we were engaged? No chance. I remember about three facts from years of being in those classes, and those facts are useless. Even scrolling Instagram would have been 10x more educational!
But with multi-channel, multi-modal AI you can have conversations. If you do that a lot, you might internalize that you don't have to be polite, or say sorry, or even admit your own mistakes. The current AI does not care about those things. But people (the current ones) do care.
I'm not saying this change is bad or good, but I also don't think there will be no change in how we interact with each other.
Citation needed. These technologies may have reduced the competency level in certain skills (e.g. writing in cursive), but I doubt that those effects carry over to "reducing intelligence".
People in developed societies *aren't* becoming dumber over the centuries. We get better at dealing with problems we frequently encounter, and worse at dealing with problems we rarely encounter. Is this a problem? I don't know.
There's a huge difference between quick access to your local canon and quick access to the entirety of human output. Writing probably had a -5 effect on social bonding in your local community, whereas AI may comparatively be a -100 on having to think or solve problems at all.
We've seen how social media has created cults and hive minds; AI is probably going to set us on a path where every single thing we do is standardised and optimised. Why spend 1000x the time developing a solution when the entirety of human thought has contributed to achieving it in x time?
It's a reality though, and we have to deal with it. We need a huge refactoring of the education system which takes into account the realities of the modern world, as opposed to grinding people out to work as bureaucrats in bigcorp.
Using AI as a tool to 'get to the next level' should be a big focus of this. We should collectively ask, as humanity: what can humans actually do that current AI can't, and how can we leverage the new tech to move forward? Once we've answered that, we should plan courses around the answer.
Ideally this'd end up with doing more work with our hands, getting outside and dirty, doing work on-site, taking part in mock events, massive role playing games, building shit with tight deadlines and requirements, etc.
Bookwork and exercises are done - old hat. They're all solved. The Ghost in the Shell series was hugely influential for me, and it's quite incredible how accurate it's turning out to be. Maybe such topics would benefit from debating matters in future/dystopian worlds from fiction, so we can make sense of what's in store.
Niche topics? Existing LLMs are crap.
Things people think it is actually knowledgeable about? Also crap.
Glue on your pizza? Telling the user to kill themselves? The code output is equally tragic, but most people are bad devs, so they can't see the shortcomings.
You're talking about the "next level" for humans—it's a next-word prediction model. It can't reason or do useful things. Overhyped. This is the same nonsensical rhetoric people here spouted over crypto.
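To be concrete about what "next-word prediction" means: the model picks one likely token at a time, nothing more. A minimal sketch using the Hugging Face transformers library, with GPT-2 and a made-up prompt standing in for the big commercial models:

```python
# Minimal sketch of greedy next-token generation; assumes the
# transformers and torch packages are installed. GPT-2 and the
# prompt are illustrative stand-ins, not anyone's real setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The student wrote the essay with"
for _ in range(10):
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Pick the single most likely next token (greedy decoding);
    # chatbots sample from this distribution instead.
    next_id = int(logits[0, -1].argmax())
    text += tokenizer.decode([next_id])
print(text)
```

Every chatbot answer is that loop run at scale; whether stacking it billions of times amounts to reasoning is exactly what's in dispute.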
ChatGPT helped me achieve this from scratch. The code is probably shite, as I'm chasing my own tail trying to understand what's happening, but I can only imagine there are others out there using "the entirety of human knowledge" to fast-forward their development and realise their dreams.
Of course, at the very top and bleeding edge, current systems aren't very helpful. And that's where we should be aiming our curricula: treating school like a microcosm of elite society.
I mean, seriously. The entire internet economy right now is all about companies trying their hardest to make sure you spend all your time on their platforms.
It’s evident at this point that all of them believe AI is the future. So if niche topics exist where the immediate gratification of AI is not suitable, companies are not gonna sit around doing nothing. Either they will try and expand their offerings to cover those niche topics, or they will try and eliminate interest in those niche topics because they represent competition and therefore a threat to their businesses.
I don't care what the "internet economy" wants. People are interested in niche things. If the internet economy won't help them on those things, they'll find something that will.
Wasn't it Socrates, in Plato's Phaedrus, who objected to writing because it reduced students' ability to memorize things?
Is there any actual evidence to assume that children are currently "dumb as fuck", or that this is caused by "AI"?