
Thanks for your reply.

It's exactly like lexers for compilers. This parsing strategy coupled with the decision to then map the results into an embedding space of arbitrary dimensionality is why these models don't work and cannot be said to understand language. They cannot reliably handle fundamental aspects of meaning. They aren't equipped for it.
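To make the analogy concrete, here is a minimal toy sketch (not any real model's tokenizer) of the pipeline described above: a lexer-like pass splits text into tokens, each token gets an id as in a symbol table, and each id is mapped to a vector in an embedding space whose dimensionality is chosen arbitrarily. All names and the dimension are illustrative assumptions.

```python
import random
import re

random.seed(0)

EMBED_DIM = 8  # arbitrary choice, like the dimensionality mentioned above

def tokenize(text):
    """Crude lexer-style pass: split into words and punctuation."""
    return re.findall(r"\w+|[^\w\s]", text.lower())

vocab = {}
def token_id(tok):
    """Assign ids on first sight, like building a symbol table."""
    return vocab.setdefault(tok, len(vocab))

embeddings = {}
def embed(tid):
    """Look up (lazily creating) a random vector for the token id."""
    if tid not in embeddings:
        embeddings[tid] = [random.uniform(-1, 1) for _ in range(EMBED_DIM)]
    return embeddings[tid]

tokens = tokenize("The cat sat.")
vectors = [embed(token_id(t)) for t in tokens]
print(tokens)           # ['the', 'cat', 'sat', '.']
print(len(vectors[0]))  # 8
```

Nothing in this pipeline sees meaning: the vectors are just coordinates assigned to surface tokens, which is the point being made about what such models are (and are not) equipped for.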

They're pretty good at coming up with well-formed sentences of English, though. They ought to be, given the enormous amounts of data they've seen.


