I find it easier to reason about reservoir sampling in an alternative formulation: the article talks about flipping a random (biased) coin for each arrival. Instead, we can re-interpret reservoir sampling as assigning a uniformly random priority to each item and then keeping the k items with the highest priorities.
It's fairly easy to see in this reformulation whether specific combinations of algorithms would compose: you only need to think about whether they would still select the top k items by priority.
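To make the reformulation concrete, here's a minimal sketch in Python (the PriorityReservoir name and API are mine, not the article's): each arriving item gets a uniform random priority, and a min-heap of size k holds the k highest-priority items seen so far.

```python
import heapq
import random

class PriorityReservoir:
    """Size-k reservoir sample, formulated as 'keep the k items with
    the highest uniformly random priorities'."""

    def __init__(self, k):
        self.k = k
        self.heap = []  # min-heap of (priority, item); root is the weakest survivor

    def add(self, item):
        # Float priorities essentially never tie, so the ordering is well defined.
        priority = random.random()
        if len(self.heap) < self.k:
            heapq.heappush(self.heap, (priority, item))
        elif priority > self.heap[0][0]:
            # The new item outranks the weakest survivor; swap it in.
            heapq.heapreplace(self.heap, (priority, item))

    def sample(self):
        return [item for _, item in self.heap]
```

After n arrivals every item has survived with probability k/n, because all n priorities are exchangeable and only the top k survive, which is exactly what the coin-flipping formulation gives you.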
But depending on what you need, you might also bias the priorities directly, eg with something like (random priority + weight * category). Or you could keep separate reservoirs for high-importance items and for everything else.
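The additive nudge above is ad hoc. If you want selection probability to actually scale with a per-item weight, one standard construction in the same top-k-by-priority style is Efraimidis–Spirakis weighted reservoir sampling (my reference, not something from the article): draw u uniform and use u^(1/w) as the priority. A sketch:

```python
import heapq
import random

def weighted_reservoir(stream, k):
    """Efraimidis-Spirakis weighted reservoir sampling: keep the k
    items with the highest keys u**(1/w), where u is uniform in [0, 1)
    and w > 0 is the item's weight. Heavier items get keys closer to 1
    and are therefore kept more often.

    `stream` yields (item, weight) pairs.
    """
    heap = []  # min-heap of (key, item)
    for item, weight in stream:
        key = random.random() ** (1.0 / weight)
        if len(heap) < k:
            heapq.heappush(heap, (key, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, item))
    return [item for _, item in heap]
```

The same top-k-by-priority framing carries through, so the composition argument above still applies.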
In the first instance, every second we get a 'truly fair' random sample from all the messages in that second.
Going from there to eg a 'truly fair' random sample from all the messages in a minute is not trivial. It's not even possible from the samples alone: a fair minute-level sample has to represent busy seconds in proportion to their message counts, and the per-second samples by themselves don't tell you those counts. The priorities are exactly the auxiliary information you need: any item in the minute's top k by priority is necessarily in the top k of its own second, so if you keep the priorities alongside the sampled items, taking the top k across the 60 per-second samples recovers the minute-level sample exactly.
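A sketch of that merge, assuming each per-second reservoir kept its (priority, item) pairs as in the first snippet:

```python
import heapq

def merge_reservoirs(reservoirs, k):
    """Merge per-window reservoirs into one fair size-k sample.

    Assumes each reservoir is a list of (priority, item) pairs whose
    priorities were drawn i.i.d. across all items. Any item in the
    global top k is also in the top k of its own window, so it is
    guaranteed to survive in one of the inputs, and pooling them
    recovers the global top k exactly.
    """
    pooled = (pair for reservoir in reservoirs for pair in reservoir)
    top = heapq.nlargest(k, pooled, key=lambda pair: pair[0])
    return [item for _, item in top]

# eg: a fair minute-level sample from 60 per-second reservoirs
# minute_sample = merge_reservoirs(per_second_reservoirs, k=10)
```

The merge is associative, so seconds compose into minutes and minutes into hours the same way; the only extra cost is storing one float per sampled item.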