Good, completely new jokes are like novel ideas: really hard even for humans. I mean fuck, we have an entire profession dedicated just to making them up and telling them, and even theirs don't land half the time.

Exactly. It feels like, ever since we achieved the then-astounding breakthrough of "LLMs can generate coherent stories" with GPT-2, people have constantly been like "yeah? Well, it can't do <this thing that is really hard even for competent humans>."

That breakthrough was only 6 years ago!

https://openai.com/index/better-language-models/

> We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text...

That was big news. I guess this is because it's quite hard for most people to perceive the enormous gulf in difficulty between "generate a coherent paragraph" and "create a novel, funny joke".

Same thing we saw with game playing:

- It can play chess -> but not at a serious level

- It can beat most people -> but not grandmasters

- It can beat grandmasters -> but it can’t play go

…etc, etc

In a way I guess it’s good that there is always some reason the current version isn’t “really” impressive, as it drives innovation.

But as someone more interested in a holistic understanding of the world than in proving any particular point, it is frustrating to see the goalposts moved without even acknowledging how much work and progress were involved in meeting the goalposts at their previous location.

> it is frustrating to see the goalposts moved without even acknowledging how much work and progress were involved in meeting the goalposts at their previous location.

Half the HN front page for the past few years has been nothing but acknowledging the progress of LLMs in sundry ways. I wish we'd actually stop for a second. It’s all people seem to want to talk about anymore.

I should have been clearer. Let me rephrase: among those who dismiss the latest innovations as nothing special because there is still further to go, it would be nice to see some acknowledgment when the goalposts are moved.

Maybe the people raving about LLM progress are the same people holding them to those high standards?

I don’t see what’s inconsistent about it. “Due to this latest amazing algorithm, the robots keep scoring goals. What do we do? Let’s move them back a bit!” Seems like a normal way of thinking to me…

I see people fawn over technical progress every day. What are they supposed to do, stop updating their expectations and never expect any more progress?

It could of course be that there are people who “never give it up for the robots”. Or maybe they do, and they did, and they have so fully embraced the brave new world that they’re talking about what’s next.

I mean, when I sit in a train I don’t spend half the ride saying “oh my god this is incredible, big thanks to whoever invented the wheel. So smooth!”

Even though maybe I should :)
