

smcleod
I'd argue that a huge portion of the working population has long exhibited an inability to think - probably in part due to the meaningless, widget-churning work they're tasked with - and I don't think that'll change. However, being empowered to learn and try things beyond your individual grasp by leveraging AI as, for example, a coding partner can unlock whole new ways of thinking in domains people would otherwise not have the time to investigate.

See also https://smcleod.net/2025/03/the-democratisation-paradox-what...

Watch out horses! The motorised carriage is going to do away with your ability to gallop!

add-sub-mul-div
Nothing convinces me more of an impending thought crisis than defenses of AI centering on comparisons to calculators or cars.

The average person didn't use their freedom from manual long division to unlock whole new ways of mastering math; they stopped doing math at the grocery store and became docile enough to allow the rise of modern consumer economics: Uber, Doordash, Klarna.

And I don't know what invoking the obsolescence of horses is supposed to achieve. Yes, there are fewer horses for roles today. No, the remaining horse population hasn't been uplifted to whole new ways of thinking about labor and transportation. (I'm probably leaving myself open to a counter that today there are many horses living lives of leisure as pets.)

You're not going to find a sufficient historical analogue to general thought being the skill we no longer have to practice, you have to engage in new thought about the qualitative difference of this situation.

Terr_
> comparisons to calculators

Yeah, the current LLM dilemma isn't really analogous. Offloading a task like multiplying two numbers is (A) safely reliable and (B) isn't delegating planning or decision-making.

> The average person [...] stopped doing math at the grocery store and became docile enough to allow the rise of modern consumer economics, Uber, Doordash, Klarna.

Disagree on this one: Wasteful spending by consumers (and vendors encouraging it) is a very old problem, much older than calculators or any withering of daily algebra skills. If anything, people have better/faster capabilities to see the "big picture" of how much a service is ripping them off than ever before.

I think a better explanation would be stuff like the psychological distance created when using digital payments (as opposed to cash or handwritten checks) and companies getting better tools for advertising/brain-hacking.

bicepjai
> Watch out horses! The motorised carriage is going to do away with your ability to gallop!

Love the analogy. Gets your point across without much description needed.
commandlinefan
More than just that: thinking looks just like "slacking" to people whose entire existence is dedicated to rooting out "slackers". Thinking can't be planned down to the hour, months ahead of time, in an Excel spreadsheet the way MBAs are taught all worthwhile work should be.
joenot443
> Another professor notes that AI papers are replete with “seemingly logical statements that are actually full of emptiness.” A depressing thought is that students are incapable of discerning such intellectual vapor because their heads are empty.

This is depressing, indeed. I think this type of empty-headedness has been growing for a while and isn't just a result of AI; it's a result of people generally not having a well-tuned mental barometer for what makes for strong writing.

I think if you didn't read much as a young person, the sort of grammatically-sound and calmly-smug prose that GPT produces probably passes as "good" writing, because it has all of the characteristics you remember good writing must possess. If I may:

> Summarizing paragraphs must begin with strong statements. References can be made to previous points, perhaps acknowledge weaknesses, but the main structure remains the same. Our writing is confident, familiar, and satisfied - just like writing should be.

Unfortunately, I think this is similar to someone growing used to "good" meals from Cheesecake Factory and allowing that to become their reference point for fine dining. All the pieces are there, nothing about it is distinctly "wrong", but something feels off.

I don't pretend to have a solution.

mistrial9
Your reaction to this information is also a choice! Instead of deriding the masses' lack of effort or insight, a choice is to value and highlight the real value in intellectual work. It is hard work! (Human) editors of value ought to be paid! Industrialization of intellectual work has consequences, agreed.
nottorp
> Another professor notes that AI papers are replete with “seemingly logical statements that are actually full of emptiness.”

In my country we called this "wooden language" back when we were under a communist dictatorship behind the Iron Curtain.

Lots of words that are designed to avoid any responsibility for anything.

Now we're automating this.

thunkshift1
> Lots of words that are designed to avoid any responsibility for anything

Also known as manager speak or corporate lingo

BeFlatXIII
That's why AI is so useful in a corporate setting. Internally, those are communist dictatorships, complete with faking results to look good and wealth accumulating in the hands of those who are more equal than the rest.
nottorp
Mmm dictatorships. Not necessarily communist.

Pretty sure the fascists were the same, I just have no direct experience with them.

Corporate "monthly celebrations" fall in the same category :)

andrekandre

> Corporate "monthly celebrations" fall in the same category :)

Are those the ones that are optional, but with the implicit understanding that you will be dinged if you exercise that option?
nottorp
China did not invent the social credit system :)

Ironically, they are literally capitalist dictatorships, since the idea that owning a company should give you dictatorial control over everyone's work is held up as an ideal by capitalists.
pgryko
What happens when AGI is able to think better than us? Will governments close schools and universities as they're no longer economically useful?
lm28469
School, and education at large, isn't about teaching you what's economically useful. It's about building a society that shares a common base of values/morals, history, and patriotism, as well as the basics of science, maths, &c. You want people who are able to think for themselves, to iterate and create new things that are in continuity with your country's history.

The vast majority of everything you learn before university isn't that useful if all you care about is creating robots for your economy.

> What happens when AGI ...

What happens IF AGI ...

eszed
I wish it were about that. I understand that some countries' school systems have those goals, and that many succeed at them°, but I don't think my country (US) has ever had enough of "a common base of values/morals, history, and patriotism" for its schools to work that way. Maybe regionally it has at times, but public schools - always to their detriment - have been at the forefront of every "culture war" since their inception.

On the other hand, US schools' form (as distinct from the content they teach) - the schedules, the bells, the desks, the disciplinary expectations - has not been the subject of much debate or wide controversy, but it does encode and inculcate a particular value system. It was adapted from the Prussian model, which was specifically intended to create docile industrial workers. That may have been a good idea at the time.

Now, of course, that docile industrial workers are no longer so economically important, that model doesn't make much sense. The public school system as a whole (I'm painting in broad strokes, and know of honorable exceptions myself) has shifted its purpose toward baby-sitting or incarceration - it's a spectrum, based on prevailing socio-economic conditions and the individual kids in question.

I wish I weren't so cynical, but I've had too broad an experience with too large a cross-section of too many US public schools not to be.

---

°I've seen the insides of a fair number of UK schools, though not so many, and my limited impression is that their system is in a better place than the US's, but bears the same flaws. (And that a special hell should await the members of the 2010 Conservative government who promulgated school consolidations.)

Fifteen years ago I taught ESL to a lot of German kids (from both University and non-U tracks), and was highly impressed with them, and with what they told me about their educational system. That impression is old, though, and I don't know how things might have changed in the meantime.

I have a good friend with a six-year old going to school in France. I like what she tells me about their system.

hshdhdhj4444
An additional question is: what's the point of humans when we've created a species that is superior to us at pretty much everything?
paradox242
It really makes you wonder what the motivations are behind anyone who would want to drive society down this road just to see what's at the end of it when it's already pretty clear that it's nothing good for the majority of us.
quonn
A bunch of matrix multiplications is not a species.
NetRunnerSu
Of course it is becoming one.
pier25
Even if we reach AGI I doubt it's going to be economically feasible to run it for the average user.

Probably only for big corps and govts.

hshdhdhj4444
I’ve never understood the energy complaint. It’s a legitimate complaint for right now.

But humans are able to do a ton of thinking on remarkably little energy a day. And the calories humans consume are used primarily for physical mechanisms like movement, breathing, etc., so the energy requirement for pure thought is even lower.

Why should improved AGIs in the future consume any more energy than that? Besides, AGIs across the world can communicate with each other, so once one AI computes something, in theory it should never need to be computed again. It can tap into the answer saved by some different AI at some point in the recent past, further reducing energy needs.

I really don’t see why AI energy consumption should be too high.

quonn
How do you get the idea that we have any of that technology? Brains are nothing like computers, and we have almost no technology that is like biology. There is no reason to be sure that we can just scale down the existing tech to arrive at something as efficient. Maybe we can. But if not, then developing a system that's more like chemistry could take 50 years, or 100, or more.
pier25
Assuming it's possible, it might take decades or even centuries to get close to the human brain in terms of efficiency. Sounds like a problem even harder than fusion.
paradox242
If we achieve true AGI we enter into a state where there is more value in withholding the technology than by even selling it to the highest bidder. It would conceivably enable a winner-takes-all scenario of the highest order.
viralsink
It would be unethical not to have an AI life coach, since your own decisions would not be optimal more often than not.
sebmellen
Yeah, but what are you optimizing for? Optimal only means something if it’s in context.
Nasrudith
There is a fundamental philosophical error in that assumption. Goals are not set by sheer rationality but emotionally. Logic is a guide to the means to the end, and can offer wisdom about how well pursuing a given goal may work, but it is ultimately emotion that sets the drive.

An AI life coach cannot tell you whether you would prefer having a large family, being part of a DINK couple, or remaining single for the rest of your life. It can tell you that the odds of making a living off a band are low and that you should prepare backup plans, and that being a chartered accountant is a more reliable source of income, but it cannot tell you which decision is the right one to attempt.

brador
The human decision maker still wins if you’re optimising your life for novelty.
zingababba
Or for spitefulness. Imagine having all of your decisions made by equations - sounds like hell. There would be a significant number of people who do the exact opposite out of willfulness and spite.
viralsink
I guess there would be more communes that choose to live like the Amish. A new hippie movement.
Nasrudith
Schools and universities were supposed to have non-economic purposes, including, theoretically, governance. An old trope used to justify monarchies and nobility was the ignorance of the masses. That meme wound up dying out with the rise of the middle classes and of universal* suffrage, which demonstrated a huge role in ensuring stability by giving an outlet for pressure other than violent uprisings. In the US, at least, that was part of the reason for public education. The other part was that they didn't want to leave the educational infrastructure in the hands of the Catholic school system, for fear that it would leave them controlled by 'papists'.

* Occasionally the term effectively meant only 'all men regardless of social class'. Perhaps related to the purpose of disincentivizing violent uprisings.

One lesson of history is that you cannot simply leave advocacy for your self-interest in another's hands; regardless of how much 'better' they may know, they do not know you and your priorities better than you do yourself. That was essentially already attempted in the feudal past. You need, at the very least, the ability to choose your advocate. This makes education a necessity for a self-governing people. There was also another trope, around the Great Depression, claiming that democracy was 'obsolete' given industrialization and its top-down organization versus cottage industries. Let's not repeat history by falling for that again.

But even if we abandon such lofty principles, there would still be a reason for education even if AGI does it better. Having people capable of fighting and/or maintaining the swarms of machines would still be essential, for even cynical reasons of military power and the monopoly on violence. It wouldn't just be using humans as cannon fodder - most military manpower is logistical, and the tail has only been growing longer in the tooth-to-tail ratio thanks to technological advancement. That the technology is so dominant should highlight its power, considering just how effective raw numbers are in military science.

johnea
AI's Second Biggest Threat: Old People Who Can't Think
eszed
Going by my older relatives, that has already been achieved.
paradox242
They "tempt" us into cognitive offloading? That is just about the entire value proposition.
dmonitor
Yes, that is how temptations tend to work. The implication is that the cognitive offloading is bad for you in the long term.
dyauspitr
Oh I definitely find my brain resisting the process of writing a new document from scratch. It’s just so much easier to have ChatGPT spit out the initial version and then refine it from there.

I'm still definitely providing value with no decrease in the quality of my output, so I'm counting on my past knowledge + ChatGPT to get me through my career, but I weep for the next generation.
