Physical memory - Intel added support for 57-bit virtual addresses (up from 48 bits) in 2019, and AMD in 2022. 48-bit pointers obviously cover the vast majority of needs. 96-bit pointers would make the developers of GC'd languages and VMs very happy (lots of tag bits).
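For a sense of what those tag bits buy you: even on today's 64-bit hardware with 48-bit virtual addresses, the spare high bits get used for exactly this. A minimal C sketch, where the 48-bit mask, the 16-bit tag, and the helper names are all illustrative, and which assumes canonical user-space pointers (bit 47 clear) and an arithmetic right shift:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Sketch: stash a 16-bit tag in the unused top bits of a 64-bit pointer.
   Assumes 48-bit virtual addresses, so bits 48..63 are free for metadata;
   a 96-bit pointer over the same address space would leave 48 tag bits. */

#define ADDR_MASK ((1ULL << 48) - 1)

static uint64_t tag_ptr(void *p, uint16_t tag) {
    return ((uint64_t)tag << 48) | ((uint64_t)(uintptr_t)p & ADDR_MASK);
}

static void *untag_ptr(uint64_t tagged) {
    /* Sign-extend from bit 47 to restore a canonical pointer.
       Assumes arithmetic right shift, which mainstream compilers provide. */
    int64_t v = (int64_t)(tagged << 16) >> 16;
    return (void *)(uintptr_t)v;
}

static uint16_t ptr_tag(uint64_t tagged) {
    return (uint16_t)(tagged >> 48);
}

int main(void) {
    int *x = malloc(sizeof *x);
    *x = 42;
    uint64_t t = tag_ptr(x, 0xBEEF);
    printf("tag=%#x value=%d\n", (unsigned)ptr_tag(t), *(int *)untag_ptr(t));
    free(x);
    return 0;
}
```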
For floats presumably you'd match the native sizes to maintain alignment: an f48 with a 10-bit exponent and an f96 with a 15- or 17-bit exponent. I doubt the former has any downsides relative to an f32, and the latter we've already had the equivalent of since forever in the form of 80-bit extended-precision floats with their 15-bit exponent.
Amusingly, I'm just now realizing that the Intel 80-bit representation already has the same 15-bit exponent width as IEEE binary128.
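Rough numbers for the formats mentioned here, treating the hypothetical f48/f96 as IEEE-style layouts (1 sign bit, the stated exponent width, the rest significand); the field splits for f48 and f96 are my guesses, not anything standardized, and the decimal-digit figure is just significand bits times log10(2):

```c
#include <math.h>
#include <stdio.h>

/* Compare exponent range and precision of real and hypothetical formats.
   sig_bits counts the effective significand, including the leading bit. */
struct fmt { const char *name; int exp_bits; int sig_bits; };

int main(void) {
    struct fmt fmts[] = {
        {"f32 (binary32)",       8,  24},
        {"f48 (hypothetical)",  10,  38},
        {"f64 (binary64)",      11,  53},
        {"x87 80-bit extended", 15,  64},
        {"f96 (hypothetical)",  15,  81},
        {"f128 (binary128)",    15, 113},
    };
    for (size_t i = 0; i < sizeof fmts / sizeof fmts[0]; i++) {
        int emax = (1 << (fmts[i].exp_bits - 1)) - 1;  /* IEEE-style bias */
        printf("%-22s exp range ~2^+/-%-6d ~%.1f decimal digits\n",
               fmts[i].name, emax, fmts[i].sig_bits * log10(2.0));
    }
    return 0;
}
```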
I guess high-end hardware that supports f128 would go to either f144 or f192. The latter maintains alignment, so presumably that would win out. Anyway, pretty much no one supports f128 in hardware to begin with.
Instruction sets - 12 bits for small chips and 24 for large ones. RISC-V instructions encode better in 24 bits if you put immediate data after the opcode instead of inside it.
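A sketch of what "immediates after the opcode" could look like with a 24-bit instruction word. The field layout, opcode value, and trailing 24-bit immediate are all made up for illustration; this is not real RISC-V encoding:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical 24-bit instruction stream: each instruction is a 24-bit word
   (opcode + register fields), and a flag bit says whether a full 24-bit
   immediate follows as a separate word. */

typedef struct {
    uint8_t  opcode;   /* bits 0..7                        */
    uint8_t  rd, rs;   /* bits 8..12 and 13..17, 5 bits each */
    int      has_imm;  /* bit 18                           */
    uint32_t imm;      /* next 24-bit word, if present     */
} insn;

static uint32_t read24(const uint8_t *p) {
    return (uint32_t)p[0] | ((uint32_t)p[1] << 8) | ((uint32_t)p[2] << 16);
}

/* Decode one instruction; returns the number of bytes consumed. */
static size_t decode(const uint8_t *p, insn *out) {
    uint32_t w = read24(p);
    out->opcode  = w & 0xFF;
    out->rd      = (w >> 8)  & 0x1F;
    out->rs      = (w >> 13) & 0x1F;
    out->has_imm = (w >> 18) & 1;
    out->imm     = out->has_imm ? read24(p + 3) : 0;
    return out->has_imm ? 6 : 3;
}

int main(void) {
    /* Made-up "addi r1, r2, 0x123456": opcode 0x10, rd=1, rs=2, imm flag set. */
    const uint8_t code[] = { 0x10, 0x41, 0x04, 0x56, 0x34, 0x12 };
    insn i;
    size_t n = decode(code, &i);
    printf("opcode=%#x rd=%u rs=%u imm=%#x (%zu bytes)\n",
           (unsigned)i.opcode, (unsigned)i.rd, (unsigned)i.rs, i.imm, n);
    return 0;
}
```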
Physical memory is topping out near 40 bits of address space, and virtual address implementations don't even use the full 64 bits on modern systems.
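On x86 you can ask the hardware what it actually implements: CPUID leaf 0x80000008 reports the physical and virtual address widths. A quick check using GCC/Clang's <cpuid.h> (x86 only):

```c
#include <cpuid.h>
#include <stdio.h>

/* CPUID leaf 0x80000008: EAX[7:0] = physical address bits,
   EAX[15:8] = linear (virtual) address bits. */
int main(void) {
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
    if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
        fprintf(stderr, "CPUID leaf 0x80000008 not supported\n");
        return 1;
    }
    printf("physical address bits: %u\n", eax & 0xFF);
    printf("virtual  address bits: %u\n", (eax >> 8) & 0xFF);
    return 0;
}
```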
Floating point is kinda iffy. 36 bits with a more-than-24-bit mantissa would be good (an IEEE-style 8-bit exponent would leave 27 stored mantissa bits). Not sure what would replace doubles.