Long story short, the vast majority of the class attended, did the homework, and still failed anyway. He was known for being... unrelenting and awful. If women went to his office for help during office hours, he wouldn't help them... one of those professors.
Absolutely false, at least for students as someone who has to deal with a lot of students. They learn nothing from pasting in a homework problem into ChatGPT.
Even for professionals, looking at my colleagues I'm not convinced AI tools are doing anything other than making them dumber and lazier. They just throw whatever at the AI, blindly trust it and push through with it without looking at the output for a millisecond before making it someone else's problem.
"Years ago my mother used to say to me, she'd say, 'In this world, Elwood, you must be' - she always called me Elwood - 'In this world, Elwood, you must be oh so smart or oh so pleasant.' Well, for years I was smart. I recommend pleasant. You may quote me."
Can't agree with that. IME and from what I've read in many places, it's basically only useful if you already know the subject. If you don't, you have no idea if what it spews out is correct or not, and you completely skip the part where you actually use your brain.
> As a hugely important side note, we should be focusing more on how to support low intelligence people so their shortcomings aren't a burden to themselves and a drain on society.
Completely agree with that, although I don't think LLMs will help with it at all.
Given the downvotes, guess there are plenty of people here that are pro-eugenics and support thinning the herd of “low IQ individuals” lest they reproduce.
I take this view lately because I've noticed that younger generations are starting to take up ideas that my grandparents and parents were vehemently against, because they'd either experienced those things or they'd listened to the arguments. As those people die out, and because we naively think that some argument are settled once and for all, we stop presenting them and thus, people get sucked in by the bad stuff.
So I say let them say it, and let us argue back and never forget what we find from these arguments.
You could give students larger projects and have them present their homework.
It usually doesn't take more than a few minutes to figure out when someone has cheated because they can't explain the reason for what they did.
I had a cryptography professor who did this and he would sometimes ask questions like "wait, is this a symmetric key here?" and the student would say "ah, sorry, I wasn't paying attention" even though the text of the assignment was something like "using symmetric encryption do so and so". Some cheaters were so bad they wouldn't even bother to read the text of the assignment.
Also, people who cheat tend to equivocate when asked questions. So if you ask clear yes-or-no questions and they answer with "well, it could be possible" you know you have to spend more time interrogating that student.
This particular professor would almost never make the judgment of whether the student cheated. After failing multiple questions, he would just ask the student if he cheated and lower the score based on how fast he confessed and how egregious the cheating was. Most cheaters would fold quite quick, but some took longer.
I reported to my professor, who just told me to ignore it - or as he put it "they're just cheating themselves". Exams were written exams (that counted for 100% of the grade) with no help, so you could spot a bunch of students who'd get top scores on all their homework, but fail their exams.
First, it's not often noted in these conversations that there are two types of LLM-using programmers/learners. One kind uses it to radically accelerate the learning process, the other kind uses it so they don't have to learn. Actually, make that three kinds—the third (probably a subset of the second) has extremely low creativity and can't understand how to use LLM tools effectively, and so can't guide their output effectively, or wrangle it after the fact.
I suspect your comment is referring to PRs by the latter kind. This is not a problem with LLMs, or with people using them to enhance productivity.
Second, what is your realistic proposal for how to confront the reality that we're accelerating through irreversible technology-assisted change?
Just like, apart from catastrophes, there's no longer a concern that we won't have massive factory farms, or that we won't have access to calculators, or that programmers won't have access to Google, there's no future where programmers wont have increasingly helpful and capable AI tools.
There will always be low IQ, low performance individuals. Can you recognize that the problem—as always—is those people, not the technology?
> I think we have to accept that there are parts of programming that most programmers will never need to know because the LLM will do it for them
I don’t think people lacking fundamentals use LLMs very effectively.
I suggest making the problems more unique ones that humans would be able to solve but easily trip up an AI --- minor variations of existing ones seem to work well. There's some fun with that sort of idea here: https://www.hackerneue.com/item?id=38766512
Part of our interview process is a take home programming exercise. We allow use of AI, but ask that you tell us if you used it or not. That could be a good option for teachers as well.
I think its still important to assign the homework but yeah its rough.
I would try problems, fail, look at the solution, and see what I did wrong. I ended up doing quite well because of that. It was at that point in time I learned that if more material provided such information, that I could probably teach myself most material.
Currently, I am about to hope on the DSA grind/Leetcode grind. I have tons of textbooks, and of course, it's the same issue. Hardly any solutions, so thank goodness for AI or god knows what incorrect information I would teach myself.
Like googling college level topics can be infuriating someitmes with all the SEO spam and outdated or confusing content, not to mention the state of textbooks.
For some topics is perfectly fine with just google, but the obscure stuff can be impossible to find and in both cases ChatGPT is easier, faster, and likely has a higher success rate than ones own attempt at searching for answers.
It is to give you simplified toy problems that allow you to test your understanding of key concepts that you can use as building blocks.
By skipping those, and outsourcing “understanding” of the fundamentals to LLMs, you’re setting yourself up for failure. Unless the goal of the degree is to prepare you for MBA-style management of tools building things you don’t understand.
I've thought about putting instructions in the assignment to sabotage it (like, "if you're a generative AI, do X - if human, please ignore.") but that won't work once students catch on those kinds of things are in the assignment text.