We know, because we taught computers how to do both. The first long multiplication algorithm was written for the Colossus about 10 minutes after they got it working.
The first computer algebra system that could manage variable substitution had to wait for Lisp to be invented 10 years later.
https://www.sigcis.org/files/Haigh%20-%20Colossus%20and%20th...
The limitation seems to have been physical rather than logical.
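To make concrete what "variable substitution" involves, here's a minimal sketch (my own illustration, not the historical system), assuming Lisp-style s-expressions represented as nested Python lists:

    # Hypothetical illustration, not the historical CAS: substitute a
    # value for a variable throughout a symbolic expression, where the
    # expression is a nested list like a Lisp s-expression.

    def substitute(expr, var, value):
        """Replace every occurrence of `var` in `expr` with `value`."""
        if expr == var:
            return value
        if isinstance(expr, list):
            return [substitute(sub, var, value) for sub in expr]
        return expr  # a constant or a different variable: leave it alone

    # (x + 3) * x  with x = 7  becomes  (7 + 3) * 7
    expr = ["*", ["+", "x", 3], "x"]
    print(substitute(expr, "x", 7))  # ['*', ['+', 7, 3], 7]

The recursion over arbitrarily nested symbolic structure is the part Lisp made natural; the arithmetic itself is trivial by comparison.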
Tools let you work across subskills you recognize but don't fully understand, and still be effective in your job. Understanding the entire stack of knowledge behind every skill you use is an academic requirement, born of a lack of real-world employment experience. For example, I don't need to know how LLMs work to use them effectively in my job or hobby.
We should stop spending so much time teaching kids crap that will ONLY satisfy tests and teachers but is of much reduced use once they leave school.
I never need to "fall back" to the principles of multiplication. Multiplying by the 1s column, then the 10s, then the 100s feels more like a mental math trick (like the digits of multiples of 9 adding to 9) than a real foundational concept.
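For what it's worth, here's that procedure as a short Python sketch (function names are mine, purely illustrative): the shifted partial products, plus the digits-of-9 trick from the parenthetical.

    # The grade-school procedure: multiply by the 1s digit, then the
    # 10s, then the 100s, and sum the shifted partial products.

    def long_multiply(a, b):
        total = 0
        for place, digit in enumerate(reversed(str(b))):
            partial = a * int(digit)       # multiply by one digit
            total += partial * 10 ** place # shift by its place value
        return total

    print(long_multiply(123, 456))  # 56088, same as 123 * 456

    # The "digits of multiples of 9 add up to 9" trick (eventually,
    # if you keep summing the digits):

    def digit_root(n):
        while n > 9:
            n = sum(int(d) for d in str(n))
        return n

    print([digit_root(9 * k) for k in range(1, 11)])  # all 9s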
Oxford and Cambridge have a "tutorial" system that is a lot closer to what I would choose in an ideal world. You write an essay at home over the course of a week, but then you have to read it to your professor, one on one, and they interrupt you as you go, asking clarifying questions, giving suggestions, etc. (This at least is how it worked for history tutorials when I was a visiting student at an Oxford college back in 2004-5; I'm not sure if it's still like that.)

It was by far the best education I ever had, because you could get real-time expert feedback on your writing in an iterative process. And it is basically AI-proof: the moment students start getting quizzed on the thinking behind a sentence or claim in an essay, anyone who used ChatGPT to write it for them will be outed.
If they are trade schools, then yes, teach React and Node using LLMs (or whatever the enabling tools of the day are) and get on with it.
And the library, and inter-library loan (in my case), and talking to a professor with a draft...
And it did teach and evaluate skills I’ve used my entire career.
Because I was demonstrating that I understood the material intrinsically, not just knew how to use tools to answer it.
Making them open book + AI would just mean you need “larger” questions to be as effective a test, so you’re adding work for the graders for basically no reason.
Thank god we still teach quadratic equations, complex numbers, hyperbolic trig functions, and geometric constructions though. I don't know what would become of the world if most people didn't understand those things when we set them loose in the world.
For that, the student must have internalized certain concepts, ideas, connections. This is what has to be tested in a connectivity-free environment.
Faking intelligence with AI only works in an online-exclusive modality, and there’s a lot of real world circumstances where being able to speak, reason, and interpret on the fly without resorting to a handheld teleprompter is necessary if you want to be viewed positively. I think a lot of people are going to be enraged when they discover that dependency on AI is unattractive once AI is universally accessible. “But I benefited from that advantage! How dare they hold that against me!”
Challenge accepted. One possible solution: https://github.com/RonSijm/ButtFish
When I hear people say stuff like this, I get the same "you won't always have a calculator with you" vibes as from 90s teachers chiding you to show your work.
It's more likely that I'll be without paper and a writing implement than without a calculator.

Besides, most people only have room for fast arithmetic or integrals, not both; fast arithmetic would be the more useful of the two, but I'm not putting the time in to get it back.
Plus there's the question of whether you actually retain whatever you ask the model for...
Why not make exams open book + AI, since that's what students will have in their careers?