As a senior frontend/javascript guy, I’m afraid that relying on ChatGPT/copilot for _current best practices_ is probably where it works the worst.

Oftentimes it will produce code that’s outdated. Or, it will output code that seems great, unless you have an advanced understanding of the browser APIs and behaviors or you thoroughly test it and realize it doesn’t work as you hoped.

But it’s pretty good at getting a jumpstart on things. Refining down to best practices is where the engineer comes in, which is what makes it so dicey in the hands of a jr dev.


> I’m afraid that relying on ChatGPT/copilot for _current best practices_ is probably where it works the worst

This matches my experience. When ChatGPT started going viral, I started getting a lot of PRs from juniors who were trying it out. Pretty much every single one was using deprecated API calls or best practices from 5-10 years ago. I'd ask why they chose to use an API that is scheduled to be removed in the next release of whatever library or system we are using.

ChatGPT does have its place. But you need to understand the tools you're using. It can be great for a first spike or just getting something working. But then you have to go and look at what it's doing and make sure you understand it.

So it's basically StackOverflow except you get answers instantly and without having to deal with "closed as duplicate" nonsense.

Yes.

Although it's also hollowed out the group of people using StackOverflow (and perhaps StackOverflow has restricted open access to its data for further scraping), so future iterations of LLMs will have less up-to-date training data to use.

StackOverflow's making their own competing LLM for all this stuff.

IMO, one of the biggest problems with the way people use LLMs right now, is that they're being treated as a single oracle: to know Java, it must be trained on examples of Java.

It would be much better if their natural language comprehension abilities were kept separate from their knowledge (and there are development efforts in this direction). In this example, the model would be trained to be able to *read* a Java tutorial rather than trained *on* Java tutorials, so when the overall system is asked to write something in Java, the language model within the system decides to do this by opening https://learnxinyminutes.com and combining the user query with the webpage.

I think this will help make the models more compact, which is a benefit all by itself, but it would also mean that knowledge can be updated much more easily.

Someone would have to actually do this in order to see if those benefits are worth the extra cost of having to load a potentially huge tutorial into the context window, and likewise the extent to which a more compact training set makes the language comprehension worse.
