I have a simple front-end test that I give to junior devs. Every few months I see if ChatGPT can pass it. It hasn’t. It can’t. It isn’t even close.
It answers questions confidently but with subtle inaccuracies. The code that it produces is the same kind of nonsense that you get from recent bootcamp devs who’ve “mastered” the 50 technologies on their eight-page résumé.
If it’s gotten better, I haven’t noticed.
Self-driving trucks were going to upend the trucking industry in ten years, ten years ago. The press around LLMs is identical. It’s neat, but how long are these things going to do the equivalent of revving to 100 mph before slamming into a wall every time you ask them to turn left?
I’d rather use AI to connect constellations of dots that no human possibly could, have an expert verify the results, and go from there. I have no idea when we’re going to be able to “gpt install <prompt>” to get a new CLI tool or app, but it’s not going to be soon.