With fixed point and at least 2 decimal places, 10.01 + 0.01 is always exactly equal to 10.02. But with FP you may end up with something like 10.0199999999, and then you have to be extra careful anywhere you convert that to a string, so it doesn't get truncated to 10.01. That could be logging (not great, but maybe not the end of the world if it goes wrong), or it could be generating an order message, in which case it is a real problem. Either way, you have to take care every time you do the conversion, as opposed to solving the problem once at the source, in the way the value is represented.
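To make that concrete, here's a minimal C++ sketch (assumes IEEE-754 doubles; the printed digits are typical of the failure mode, and the fixed-point side is just integer cents):

#include <cstdio>

int main() {
    // Accumulate one cent a thousand times in doubles. Since 0.01 has no
    // exact binary representation, the sum drifts slightly off 10.00.
    double px = 0.0;
    for (int i = 0; i < 1000; ++i) px += 0.01;
    printf("%.17g\n", px);              // e.g. 9.999999999999831, not 10

    // Truncating instead of rounding when formatting turns that tiny
    // error into an off-by-a-cent price.
    long scaled = (long)(px * 100);     // truncates toward zero -> 999
    printf("%ld.%02ld\n", scaled / 100, scaled % 100);   // 9.99

    // The same sum in integer cents is exact, once, at the source.
    long cents = 0;
    for (int i = 0; i < 1000; ++i) cents += 1;           // one cent each
    printf("%ld.%02ld\n", cents / 100, cents % 100);     // exactly 10.00
    return 0;
}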
> Using some kind of fixed point math would be entirely inappropriate for most HFT or scientific computing applications.
In the case of HFT, this would have to depend very greatly on the particulars. I know the systems I write are almost never limited by arithmetical operations, either FP or integer.
The other "metal model" issue is that associative operations in math. Adding a + (b + c) != (a + b) + c due to rounding. This is where fp-precise vs fp-fast comes in. Let's not talk about 80 bit registers (though that used to be another thing to think about).
if (ask - bid > 0.01) {
    // etc.
}
With floating point, I have to think about the following questions:
* What if the constant 0.01 is actually slightly greater than mathematical 0.01?
* What if the constant 0.01 is actually slightly less than mathematical 0.01?
* What if ask - bid is actually slightly greater than the mathematical result?
* What if ask - bid is actually slightly less than the mathematical result?

With floating point, that seemingly obvious code is anything but. With fixed point, you have none of those problems; a sketch of what that looks like is below.
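Something like this, assuming prices are converted once, at the edge, into integer counts of the minimum tick (the names here are illustrative, not from any real system):

#include <cstdint>

// Prices as integer counts of the minimum price increment (cents here).
using price_t = int64_t;
constexpr price_t kTick = 1;   // one cent

bool spread_wider_than_one_tick(price_t bid, price_t ask) {
    // Pure integer arithmetic: exact, so none of the four questions
    // above even arise.
    return ask - bid > kTick;
}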
Granted, this only works for things that are priced in specific denominations (typically hundredths, thousandths, or ten thousandths), which is most securities.
In this example, I’m talking about securities that are priced in whole cents. If you represent prices as floats, then it’s possible that the spread appears to be less (or greater) than 0.01 when it’s actually not, due to the inability of floats to exactly represent most real numbers.
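You can see it with the exact prices from the example; under IEEE-754 doubles this is deterministic:

#include <cstdio>

int main() {
    double bid = 10.01, ask = 10.02;    // the spread is exactly one cent
    printf("%.17g\n", ask - bid);       // 0.0099999999999997868
    if (ask - bid >= 0.01)
        printf("spread >= 1 cent\n");
    else
        printf("spread < 1 cent\n");    // this branch is taken
    return 0;
}

The one-cent spread tests as less than 0.01 because none of 10.01, 10.02, or 0.01 is exactly representable in binary.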
If you need to be extremely fast (like FPGA fast), you don't waste compute transforming the feed's fixed-point representation into floating point.
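For example, NASDAQ ITCH carries prices as 4-byte big-endian integers with four implied decimal places, so a feed handler can use them as-is (a sketch; the helper name is mine):

#include <cstdint>

// Decode an ITCH-style Price(4) field: big-endian, implied 4 decimals.
// The result is already fixed point, in units of 1/10000 of a dollar,
// so it can be compared and subtracted with plain integer ops.
int64_t read_price4(const uint8_t* f) {
    return ((int64_t)f[0] << 24) | ((int64_t)f[1] << 16)
         | ((int64_t)f[2] << 8)  |  (int64_t)f[3];
}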
May I ask why? (genuinely curious)
I guess I understood GGGGP's comment about using fixed point for interacting with currency to be about accounting. I'd expect floating point to be used for trading algorithms, but that's mostly statistics and I presume you'd switch back to fixed point before making trades etc.
It's the associativity law that floating point fails to uphold.
I've spent most of my career writing trading systems that have executed hundreds of billions of dollars' worth of trades, and I have never had any floating-point-related bugs.
Using some kind of fixed point math would be entirely inappropriate for most HFT or scientific computing applications.