To put some specifics on "a lot of very small numbers": naive summation in double precision starts to hit its limits at around a million numbers if you need part-per-billion precision, or around a billion numbers if you need part-per-million precision.
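
As a sketch of where that limit comes from (my illustration, not the parent's): every addition rounds to the precision of the running total, so small terms get partially, or in the extreme case below, entirely absorbed, and a compensated sum like math.fsum recovers them:

```python
import math

# One large value followed by a million tiny ones.  Each tiny term is below
# half an ulp of the running total, so a naive left-to-right sum drops all
# of them; math.fsum tracks the lost low-order bits and keeps them.
xs = [1.0] + [1e-16] * 1_000_000

naive = 0.0
for x in xs:
    naive += x

compensated = math.fsum(xs)
exact = 1.0 + 1e-16 * 1_000_000           # 1.0000000001

print(f"naive:       {naive:.12f}")        # 1.000000000000
print(f"compensated: {compensated:.12f}")  # 1.000000000100
print(f"relative error of naive sum: {abs(naive - exact) / exact:.0e}")  # 1e-10
```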

For sums, yes, but products and division by small numbers are the real killers, and those come up a lot in stats. That's why you try to avoid them by working with log-probabilities (where products become sums), but sometimes you can't. Quad precision is a bit of a band-aid that just pushes the issue off when you don't have a better algorithm, but it works sometimes.
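
A minimal sketch of the log-probability trick, with made-up numbers: the direct product of a thousand modest probabilities underflows to zero in double precision, while the sum of their logs is an unremarkable negative number:

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.uniform(1e-4, 1e-2, size=1000)   # a thousand modest probabilities

direct = np.prod(p)            # true value is roughly 1e-2400, far below the
                               # smallest double (~5e-324), so this is 0.0
log_prob = np.sum(np.log(p))   # an ordinary negative number, a few thousand
                               # in magnitude

print(direct, log_prob)
```
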
How so? All primitive operations are backward stable, and unlike addition and subtraction, division and multiplication are well-conditioned.

It's more about dynamic range. Products of small numbers get small a lot faster than sums. Likewise for dividing two small numbers. The smallest representable quad precision number is a lot smaller than the smallest representable double precision one.

But the main feature of floating point is that you keep the same relative precision at all sizes. As long as you're not hitting infinity or denormals, multiplying doesn't lose information like adding a big number to a small number does.
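
Both effects are easy to see in plain double precision (my examples): addition silently drops a small addend once the exponents are far apart, while multiplication keeps full relative precision right up until the result leaves the representable range entirely:

```python
# Absorption: at magnitude 1e16 adjacent doubles are 2.0 apart, so adding 1.0
# changes nothing.  That information is simply gone.
print(1e16 + 1.0 == 1e16)      # True

# Multiplication: relative error stays at the ~1e-16 level no matter how far
# apart the magnitudes are...
print(1e-150 * 1e150)          # 1.0, give or take a rounding error

# ...right up until the result falls below the smallest subnormal (~4.9e-324),
# at which point it underflows to 0.0 all at once.
print(1e-200 * 1e-200)         # 0.0
```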

Do stats often deal with distinct probabilities below 10^-300? And do they need to store them all over the place, or just in a couple of variables that could do something clever?

> Do stats often deal with distinct probabilities below 10^-300?

Yes, and with very wide dynamic range, though you really try to avoid it using other tricks. A lot of methods involve something resembling optimization of a likelihood function, which is often [naively] a product of a lot of probabilities (potentially hundreds or more), or might also involve the ratio of two very small numbers. Starting far from the optimum, those probabilities are often vanishingly small, and even close to the optimum they can still be unreasonably small with an extremely wide dynamic range. When you really can't avoid it, there are usually only a few operations where increased precision helps, and even then it's usually a band-aid in lieu of a better algorithm or a trick that avoids explicitly computing such small values. Still, I've had a few cases where it helped a bit.
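
For concreteness, here's the shape of the trick being described, with a hypothetical Gaussian model and made-up data rather than anything from the thread: keep everything as log-likelihoods, so the product of many tiny probabilities becomes a sum, and the ratio of two quantities that would each underflow becomes a difference:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=0.3, scale=1.0, size=1000)    # made-up i.i.d. data

def gaussian_loglik(x, mu, sigma):
    """Log-likelihood of i.i.d. normal data: a sum of logs, never a product."""
    z = (x - mu) / sigma
    return np.sum(-0.5 * z**2 - np.log(sigma) - 0.5 * np.log(2.0 * np.pi))

ll_a = gaussian_loglik(x, mu=0.3, sigma=1.0)     # roughly -1.4e3 here
ll_b = gaussian_loglik(x, mu=5.0, sigma=1.0)     # far worse fit, much lower

# exp(ll_a) and exp(ll_b) would both underflow to 0.0 in double precision,
# so the raw likelihood ratio would be a useless 0/0.  The log-space
# difference is an ordinary number.
print(f"log-likelihood ratio: {ll_a - ll_b:.1f}")
```

When you genuinely need a sum of tiny likelihoods rather than a product (mixtures, marginalization), the same idea shows up as log-sum-exp, e.g. scipy.special.logsumexp.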
