But yes, we currently allow students to use AI provided their solution works and they can explain it. We just discourage using AI to generate the full solution to each problem.
If you let people use AI, they are still accountable for the code written under their name. If they can't look at the code and explain what it's doing, they aren't demonstrating understanding.
It's exactly the same as in professional environments: you can use LLMs for your code, but you have to stand behind whatever you submit. You can, of course, use something like Cursor and let it run free without understanding a thing about the result, or you can make changes step by step with AI and try to understand the why.
I believe that if teachers relaxed a bit and adapted their grading systems (while also raising the expected learning outcomes), we would see students who are trained to understand the pitfalls of LLMs and how to get the most out of them.
1. Get students to work on a project that is more complex than usual (compared to what previous cohorts were given). Let them use whatever they want and make it clear that AI is fine.
2. Have them come in for an in-person exam with questions about the reasoning behind the decisions they made during the project.
And that's it? I believe that if you can a) produce a fully working project that meets all functional requirements, and b) argue for its design with expertise, you pass. Whether you did it with AI or not.
Are we interested in supporting people who can design something and build it, or just in producing students who must follow the whims of professors unhappy that their own studies looked different?