Do stats often deal with distinct probabilities below 10^-300? And do they need to store them all over, or just in a couple of variables that could do something clever?
Yes, and with very wide dynamic range, though you really try to avoid it using other tricks. A lot of methods involve something resembling optimization of a likelihood function, which is often (naively) the product of a lot of probabilities (potentially hundreds or more), or might involve the ratio of two very small numbers. Starting out far from the optimum those probabilities are often very small, and even close to the optimum they can still be unreasonably small while spanning an extremely wide dynamic range. Usually when you really can't avoid it there are only a few operations where increased precision helps, and even then it's usually a band-aid in lieu of a better algorithm or a trick to avoid explicitly computing such small values. Still, I've had a few cases where it helped a bit.
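A minimal sketch of the usual trick being alluded to here (the numbers are made up for illustration): instead of multiplying probabilities directly, which underflows float64 long before you finish, you sum their logs, and a ratio of two tiny products becomes a difference of logs.

```python
import math

# 500 probabilities of ~1e-3 each; their product is ~1e-1500, far
# below the float64 underflow threshold of roughly 1e-308.
probs = [1e-3] * 500

naive = 1.0
for p in probs:
    naive *= p
# naive has underflowed to exactly 0.0 -- the likelihood is lost.

# Log-space version: sum of logs instead of product of probabilities.
# The log-likelihood stays in a perfectly comfortable range.
log_likelihood = sum(math.log(p) for p in probs)  # = 500 * log(1e-3)

# A ratio of two astronomically small products is just a difference
# of log-likelihoods; the final ratio here (~1e-150) is representable
# even though neither numerator nor denominator is.
other_log_likelihood = 500 * math.log(2e-3)
ratio = math.exp(log_likelihood - other_log_likelihood)
```

This is why statistics libraries expose things like log-pdf functions and log-sum-exp helpers: the small values are never materialized at all, which is usually better than reaching for higher-precision floats.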