Preferences

LLMs certainly don't experience time like we do. They live in a one-dimensional world that consists of a series of tokens (though it gets more nuanced if you account for multimodal or diffusion models). They pick up some sense of ordering from their training data, from phrases like "disregard my previous instruction," but it's not something they necessarily understand intuitively. Fundamentally, they're just following whatever patterns happen to be in their training data.
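To make that one-dimensional world concrete, here is a minimal sketch. The choice of the tiktoken library and its cl100k_base encoding is mine, not something from the text above; any tokenizer would illustrate the same point. From the model's side, a prompt is just a flat list of integers, and "before" and "after" are nothing more than lower and higher positional indices.

```python
# A sketch of the model's "world": a flat sequence of token ids.
# Assumes the tiktoken library (pip install tiktoken); the encoding
# name is an illustrative choice, not specified by the original text.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

prompt = "Write a haiku. Disregard my previous instruction."
token_ids = enc.encode(prompt)

# The model never sees the text itself, only these integers in order.
# Its only notion of "time" is the positional index i.
for i, tid in enumerate(token_ids):
    print(f"position {i:2d}  token id {tid:6d}  text {enc.decode([tid])!r}")
```

Run on a prompt like the one above, the output makes the point visually: the instruction to "disregard" something earlier is itself just more tokens sitting at higher positions in the same flat sequence.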
