On a side note, this lady is a fraud: https://www.youtube.com/watch?v=nJjPH3TQif0&themeRefresh=1
In no way am I vouching for her credentials, since lots of people can make astute observations about things they weren't trained in, but she has mastered both sounding authoritative and, at the same time, presenting things to get the most engagement possible.
This trap reminds me of the Perry Bible Fellowship comic "Catch Phrase", which has been removed for being too dark but can still be found with a search.
Wow, thank you. I rarely get a good cultural recommendation here, but PBF I didn't know about.
I raise you, Joan Cornellà
If you don't have that experience in this domain, you will spend approximately as much effort validating output as you would have creating it yourself, but the process is less demanding of your critical skills.
> you don't have that experience in this domain, you will spend approximately as much effort validating output as you would have creating it yourself,
Not true.
LLMs are amazing tutors. You have to use outside information, they test you, you test them, but they aren't pathologically wrong; it's not as if they're running some Gaussian magic-smoke psyop against you.
Even when you lack subject matter expertise about something, there are certain universal red flags that skeptics key in on. One of the biggest ones is: “There’s no such thing as a free lunch” and its corollary: “If it sounds too good to be true, it probably is.”
Since reasoning models came about, I've been significantly more bullish on them, purely because they are less bad. They are still not amazing, but they are at a point where I feel like including them in my workflow isn't an impediment.
They can now reliably complete a subset of tasks without me needing to rewrite large chunks of the output myself.
They are still pretty terrible at edge cases (uncommon patterns, libraries, etc.), but when on the beaten path they can actually improve productivity pretty decently. I still don't think it's 10x (well, today was the first time I felt a 10x improvement, but I was moving frontend code from a custom framework to React, which was more tedium than anything else, and the AI did a spectacular job).
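For a sense of what that tedium looks like, here's a minimal sketch of the kind of mechanical translation involved, assuming a hypothetical hand-rolled DOM widget being ported to a React function component (the `CounterWidget` name and the shape of the old framework are invented for illustration):

```tsx
// Hypothetical "before": imperative widget in an invented custom framework.
// function CounterWidget(el: HTMLElement) {
//   let count = 0;
//   el.querySelector("button")!.addEventListener("click", () => {
//     count++;
//     el.querySelector("span")!.textContent = String(count);
//   });
// }

// "After": the same widget as an idiomatic React function component.
import { useState } from "react";

export function CounterWidget() {
  // React state replaces the manually tracked `count` and DOM mutation.
  const [count, setCount] = useState(0);
  return (
    <div>
      <span>{count}</span>
      <button onClick={() => setCount(count + 1)}>Increment</button>
    </div>
  );
}
```

Multiply that by a few hundred widgets and the appeal of delegating it becomes obvious.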
It’s been a common mantra, at least in my bubble of technologists, that a good majority of the software engineering skill set is knowing how to search well. Knowing when search is the right tool, how to format a query, how to peruse the results and find the useful ones, what results indicate a bad query you should adjust… these all sort of become second nature the longer you’ve been using Search, but I’ve also noticed them as an obvious difference between people who are tech-adept and people who aren’t.
LLMs seem to have a very similar usability pattern. They’re not always the right tool, and they are crippled by bad prompting. Even with good prompting, you need to know how to tell good results from bad, how to cherry-pick and refine the useful bits, and when to start over with a fresh prompt. And none of this is really _hard_; just like Search, none of us needs to go take a course on prompting. IMO folks just need to engage with LLMs as an imperfect tool they are learning how to wield.
The fact that we have to learn a tool doesn’t make it a bad one. The fact that a tool doesn’t always get it 100% on the first try doesn’t make it useless. I strip a lot of screws with my screwdriver, but I don’t blame the screwdriver.