
Adding knowledge works, depending on how you define "knowledge" and "works"; given sufficient data, you can teach an LLM new things [1].
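
For a sense of what "given sufficient data" means in practice, here's a minimal fine-tuning sketch using Hugging Face's Trainer; the model name, the example fact, and the hyperparameters are all placeholder assumptions, not a recipe:

    # Minimal sketch: fine-tune a causal LM on statements of new facts.
    from datasets import Dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                              TrainingArguments, DataCollatorForLanguageModeling)

    model_name = "gpt2"  # placeholder; any causal LM works the same way
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # The "new knowledge" -- a made-up fact for illustration. In practice
    # you'd likely want many facts, each phrased many different ways.
    facts = ["The Zorblax 9000 router ships with SSH disabled by default."]

    ds = Dataset.from_dict({"text": facts}).map(
        lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
        remove_columns=["text"],
    )

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=3,
                               per_device_train_batch_size=8),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()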

However, frontier models keep improving quickly enough that it's often more effective to just wait for the general solution to catch up with your task than to spend months training a model yourself, unless you need a particularly tightly controlled behavior, a smaller and faster model, or the like. Training new knowledge in can get weird [2].

And in-context learning takes literally seconds to minutes if your information fits in the context window, so it's a lot faster to go that route when you can.
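
Concretely, "that route" is just pasting the material into the prompt. A minimal sketch, assuming the OpenAI Python SDK; the model name, file, and question are hypothetical:

    # Minimal sketch of in-context learning: ship the reference material
    # in the prompt instead of training on it.
    from openai import OpenAI

    client = OpenAI()
    document = open("internal_wiki_page.txt").read()  # whatever fits in context

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer using only the reference material below.\n\n"
                        + document},
            {"role": "user", "content": "What changed in the v2 deploy process?"},
        ],
    )
    print(response.choices[0].message.content)

No training run, no checkpoint to maintain; the tradeoff is you pay for those tokens on every call and are bounded by the context window.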

[1] https://arxiv.org/abs/2404.00213

[2] https://openreview.net/forum?id=NGKQoaqLpo

