
Really seems like propagating the current platform inconsistencies into the future. Stick with 128 bits always, performance be damned. Slow code is much preferable to code that is subtly broken because you switched the host OS.
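For the record, the inconsistency in question is easy to demonstrate in C, where `long double` is whatever the platform decides. A minimal sketch; the widths in the comments assume mainstream toolchains (glibc on x86-64, MSVC on Windows, glibc on AArch64) and may differ elsewhere:

```c
/* Demo: the width of `long double` depends on the host platform.
 *   x86-64 Linux/glibc: 80-bit x87 extended  (LDBL_MANT_DIG == 64)
 *   MSVC on Windows:    64-bit double        (LDBL_MANT_DIG == 53)
 *   AArch64 Linux:      128-bit IEEE quad    (LDBL_MANT_DIG == 113)
 */
#include <float.h>
#include <stdio.h>

int main(void) {
    printf("sizeof(long double) = %zu bytes\n", sizeof(long double));
    printf("LDBL_MANT_DIG       = %d mantissa bits\n", LDBL_MANT_DIG);
    /* The same source can silently gain or lose ~50 bits of precision
     * just by being recompiled on a different OS/toolchain. */
    return 0;
}
```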

Especially if you need 128-bit float precision. It's well known and understood that quad float is much slower on most platforms, and extremely slow on some. If you're using quad float, it's because you absolutely need all 128 bits, so why quietly reduce it to 80 bits for "performance"? That seems like an irrelevant design consideration. The programmer can choose between f128 and f80 themselves if performance is untenable on the target platform.
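That explicit choice already exists today, at least on GCC/Clang targets where both types are available. A sketch, assuming x86-64 Linux with libquadmath installed (link with `-lquadmath`); the literals and `quadmath_snprintf` are GCC extensions, not portable C:

```c
/* Explicit choice: __float128 when all 113 mantissa bits are needed
 * (software-emulated, slow), the 80-bit x87 type when 64 mantissa
 * bits suffice. Neither changes silently when the host OS does. */
#include <quadmath.h> /* GCC's libquadmath */
#include <stdio.h>

int main(void) {
    __float128  q = 1.0Q / 3.0Q; /* IEEE binary128: 113-bit mantissa */
    long double e = 1.0L / 3.0L; /* x87 extended:    64-bit mantissa */

    char buf[128];
    quadmath_snprintf(buf, sizeof buf, "%.33Qg", q);
    printf("f128: %s\n", buf);
    printf("f80:  %.21Lg\n", e);
    return 0;
}
```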
