Pure decimal integer literals (like 86) are typed as "int" in C, rather than being typeless and triggering type inference. This is a pain when you accidentally write something like this:
uint64_t n = 1 << 32;
On modern desktop platforms, an int is 32 bits, so the shift happens in a 32-bit type: 1 << 32 is undefined behavior (in practice you'll typically get 0 or 1, not 2^32), even though the 64-bit destination is wide enough to hold the intended value.
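The usual fix (one option among several) is to force the left operand to 64 bits before the shift happens:

#include <stdint.h>

uint64_t a = 1ULL << 32;        /* ULL suffix: the literal is unsigned long long, so the shift is 64-bit */
uint64_t b = (uint64_t)1 << 32; /* explicit cast before the shift, same effect */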
Regardless, it's not relevant here: when a signed and an unsigned integer of the same size are compared, the signed operand is implicitly converted to unsigned (the usual arithmetic conversions), and 86 is representable as both, so "MAX(npins, SYS_kbind)" is safe.
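To illustrate (a minimal sketch; the MAX macro and the npins value are stand-ins here, and on OpenBSD SYS_kbind would come from <sys/syscall.h>):

#include <stdio.h>

#define MAX(a, b) ((a) > (b) ? (a) : (b))
#define SYS_kbind 86 /* stand-in; normally defined in <sys/syscall.h> */

int main(void) {
    unsigned int npins = 100;              /* hypothetical unsigned count */
    /* SYS_kbind is a signed int; the comparison converts it to unsigned 86, same value */
    printf("%u\n", MAX(npins, SYS_kbind)); /* prints 100 */
    return 0;
}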