
> The long double type varies dramatically across platforms:
>
> [...]
>
> What sets numpy_quaddtype apart is its dual-backend approach:
>
> Long Double: This backend uses the native long double type, which can offer up to 80-bit precision on some systems, allowing backwards compatibility with np.longdouble.

Why introduce a new alias that might be 128 bits but also 80? IMO the world should focus on well-defined types (f8, f16, f32, f64, f80, f128), then maybe add aliases.
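Plain NumPy already exposes exactly this ambiguity through np.longdouble. A minimal sketch (the values in the comments are examples of what different platforms report, not guaranteed outputs):

```python
import numpy as np

# np.longdouble maps to the platform's native C long double, so the
# reported precision depends on OS/architecture rather than being fixed.
info = np.finfo(np.longdouble)
print(info.bits)   # e.g. 128 on x86-64 Linux (80-bit extended, padded),
                   # 64 on Windows/MSVC, true binary128 on some AArch64
print(info.nmant)  # mantissa bits: 63, 52, or 112 depending on platform
print(info.eps)    # machine epsilon varies accordingly
```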


Maybe it depends on the application?

If you have written a linear system solver, you might prefer to express yourself in terms of single/double precision. The user is responsible for knowing whether their matrix can be represented in single precision (whatever that means, it is their matrix and their hardware after all), how well conditioned it is, all that stuff. You might rather care that you are working in single precision, and that a double precision exists (with, it is assumed, hardware support, because GPUs can't hurt us if we pretend they don't exist) to do iterative refinement in if you need to.
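A minimal sketch of that single/double workflow (illustrative only; solve_with_refinement is a hypothetical helper, and a real implementation would reuse one single-precision LU factorization instead of calling solve repeatedly):

```python
import numpy as np

def solve_with_refinement(A, b, iters=3):
    """Solve Ax = b with a single-precision solve, then polish the
    answer with double-precision residual corrections."""
    A32 = A.astype(np.float32)   # the user asserts their matrix fits in f32
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                # residual in double
        dx = np.linalg.solve(A32, r.astype(np.float32))
        x += dx.astype(np.float64)                   # correction in double
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
b = rng.standard_normal(100)
x = solve_with_refinement(A, b)
print(np.linalg.norm(A @ x - b))  # residual near double-precision level
```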

Really seems like propagating the current platform inconsistencies into the future. Stick with 128 bits always, performance be damned. Slow code is much preferable to code that is subtly broken because you switched the host OS.

Especially if you need 128-bit float precision. It's well known and understood that quad float is much slower on most platforms, and extremely slow on some. If you're using quad float, it's because you absolutely need all 128 bits, so why reduce it to 80 bits for "performance"? Seems like an irrelevant design choice. The programmer can choose between f128 and f80 if performance is intractable on the target platform.
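For scale, the precision being traded away, computed straight from the IEEE-754 significand widths (64 bits for x87 80-bit extended, 113 for binary128):

```python
import math

# Significand bits: x87 80-bit extended stores 64 (explicit leading bit);
# IEEE binary128 has 113 (112 stored + 1 implicit).
for name, p in [("80-bit extended", 64), ("binary128", 113)]:
    eps = 2.0 ** (1 - p)              # machine epsilon = 2**(1-p)
    digits = (p - 1) * math.log10(2)  # approximate decimal digits
    print(f"{name}: eps ~ {eps:.3e}, ~{digits:.1f} decimal digits")
```

That is roughly 19 versus 34 decimal digits, so the long double backend gives up about 15 digits on x86, and falls all the way back to plain double on platforms where long double is just double.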
