Generalization across tasks is clearly still elusive. The only reason we see such success with modern LLMs is the heroic number of parameters involved. When you probe a space built from billions of samples, you will come back with something plausible every time.

The only thing I've seen that approximates generalization has come from symbolic AI, specifically genetic programming. It's arguably dumb luck of the mutation operator, but often the solution found does work in the general case, and because the result is symbolic, it is actually possible to prove that a general solution was found.
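To make that concrete, here's a minimal sketch of the loop, not anyone's actual setup: the target f(x) = x^2 + x, the helper names, and the use of sympy for the symbolic check are all assumptions chosen for illustration. It evolves small expression trees by mutation alone, then verifies the winner by symbolic simplification rather than by held-out samples.

    import operator
    import random

    import sympy

    # Expression trees: a terminal ('x' or 1) or a tuple (op, left, right).
    OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

    def random_tree(depth=3):
        if depth == 0 or random.random() < 0.3:
            return random.choice(['x', 1])
        return (random.choice(list(OPS)),
                random_tree(depth - 1), random_tree(depth - 1))

    def evaluate(tree, x):
        if tree == 'x':
            return x
        if isinstance(tree, int):
            return tree
        op, left, right = tree
        return OPS[op](evaluate(left, x), evaluate(right, x))

    def mutate(tree):
        # The "dumb luck" step: splice a fresh random subtree somewhere.
        if not isinstance(tree, tuple) or random.random() < 0.2:
            return random_tree(2)
        op, left, right = tree
        if random.random() < 0.5:
            return (op, mutate(left), right)
        return (op, left, mutate(right))

    def error(tree, samples):
        # Fitness on a handful of points; the target is f(x) = x*x + x.
        return sum(abs(evaluate(tree, x) - (x * x + x)) for x in samples)

    random.seed(0)
    samples = range(-5, 6)
    best = random_tree()
    for _ in range(20000):  # mutation-only hill climb
        candidate = mutate(best)
        if error(candidate, samples) <= error(best, samples):
            best = candidate

    # Symbolic check: the same tree evaluates over a sympy symbol, so we
    # can ask whether the residual is identically zero, not merely zero
    # on the sampled points.
    x = sympy.Symbol('x')
    found = evaluate(best, x)
    print(found, sympy.simplify(found - (x**2 + x)) == 0)

The final line is the point: the fitness samples only screen candidates, while the simplify check is an exact equivalence test, which is what "proving a general solution" means here. It may print False on an unlucky run; the claim is only that when it prints True, the generalization is proven, not assumed.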

