I'm a bit 50/50 on this. Generally I agree: how are you supposed to review it otherwise? Blindly accepting whatever the LLM tells you or gives you is bound to create trouble in the future; you still need to understand and think about what the thing you're building is, and how to design/architect it.
I love making games, but I'm also terrible at math. Sometimes I end up out of my depth, and it can take me a couple of days to solve something that would probably be trivial for a lot of people. I try my best to understand the fundamentals and the theory behind it without getting lost in rabbit holes, but it's still hard, for whatever reason.
So I end up using LLMs sometimes to write small utility functions used in my games for specific things. It takes a couple of minutes. I know exactly what I want to pass into it, and what I want to get back, but I don't necessarily understand 100% of the math behind it. And I think I'm mostly OK with this, as long as I can verify that the expected inputs get the expected outputs, which I usually do with unit or E2E tests.
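To give a concrete (entirely made-up) example of the kind of thing I mean: say I ask an LLM for a helper that reflects a 2D velocity vector off a wall, for bouncing projectiles. I don't re-derive the formula; I pin down the behavior with tests for the cases my game actually hits. Something like this, in Python:

    # Hypothetical LLM-written helper: reflect velocity (vx, vy)
    # off a surface with unit normal (nx, ny).
    # Formula (per the LLM): r = v - 2 * (v . n) * n
    def reflect(vx, vy, nx, ny):
        dot = vx * nx + vy * ny
        return (vx - 2 * dot * nx, vy - 2 * dot * ny)

    # My tests: I don't verify the derivation, only that the inputs
    # I care about produce the outputs I expect. (Runs under pytest.)
    def test_reflect_off_vertical_wall():
        # moving right into a wall whose normal points left
        assert reflect(3.0, 1.0, -1.0, 0.0) == (-3.0, 1.0)

    def test_reflect_off_floor():
        # falling onto a floor whose normal points up
        assert reflect(2.0, -5.0, 0.0, 1.0) == (2.0, 5.0)

If those pass, the function does what I need, whether or not I could have derived it myself. And if I hit an edge case later, it becomes a new test.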
Would I blindly accept information about nuclear reactors, another topic I don't understand much about? No, I'd still take everything an LLM outputs with a "grain of probability", because that's how they work. Would I blindly accept it if I can verify that, for my particular use case, it gives me what I expect from it? Begrudgingly, yeah, because I just wanna create games and I'm terrible at math.
For making CRUD apps, or anything that doesn’t involve security or store sensitive information, I 100 percent agree it’s fine.
The issue I see is that some people will store extremely sensitive info in apps made with these, and they don’t know enough to verify the security of them. They’ll ask the LLM "is it secure?", but the answer doesn’t matter if they can’t tell whether it’s BSing them.
"Coding" - The art of literally using your fingers to type weird characters into a computer, was never a problem developers had.
The problem has always been understanding and communication, and neither of those has been solved at this moment. If anything, they have become even more important: humans can usually infer things or pick stuff up from experience, but LLMs cannot, so you have to be very precise and exact about what you tell them.
And so the problem remains the same: "How do I communicate what I want to this person, while keeping the context as small as possible so as not to overflow it, yet extensive enough to cover everything?", except now you're sending it to endpoint A instead of endpoint B.