
use fixed size integers (`int32`) for ivec instead of `int`

Open recp opened this issue 2 years ago • 2 comments

I think we should replace `int` with `int32` in the ivec[ 2 | 3 | 4 ] typedefs in order to make the vector size clear, because in C `int` must be at least 32-bit but it may be 64-bit on some platforms; AFAIK there are no restrictions.
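The proposed change might look something like this (a sketch, not the actual cglm source; the exact-width type comes from `<stdint.h>`):

```c
#include <stdint.h>

/* Current cglm-style definitions use plain int, whose width varies
   by platform; the proposal pins each component to exactly 32 bits. */
typedef int32_t ivec2[2];
typedef int32_t ivec3[3];
typedef int32_t ivec4[4];
```

With `int32_t`, `sizeof(ivec4)` is exactly 16 bytes on every platform that provides the type.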

recp avatar Jan 07 '22 08:01 recp

Perhaps a macro should be used to switch between types. glm appears to use GLM_PRECISION_HIGHP_INT, GLM_PRECISION_MEDIUMP_INT, and GLM_PRECISION_LOWP_INT to switch between int64, signed int, and signed short. I don't know why, in glm, only the high-precision integer has a fixed width.
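A sketch of the kind of precision switch being described, modeled on glm's flags (the `CGLM_PRECISION_*` macro names and `cglm_int` typedef here are hypothetical, not existing cglm API):

```c
#include <stdint.h>

/* Hypothetical compile-time precision switch, glm-style. */
#if defined(CGLM_PRECISION_HIGHP_INT)
  typedef int64_t cglm_int;   /* fixed 64-bit, like glm's highp */
#elif defined(CGLM_PRECISION_LOWP_INT)
  typedef short   cglm_int;   /* at least 16-bit */
#else
  typedef int     cglm_int;   /* default (mediump): plain int */
#endif

typedef cglm_int ivec3[3];
```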

Chris-F5 avatar May 09 '22 08:05 Chris-F5

int is not guaranteed to be at least 32-bit: the C standard only requires that it be able to hold -32767 to 32767 for signed, and 0 to 65535 for unsigned, making it 16-bit at minimum.

Fixed-size integers are less portable, potentially slow, and rarely ever actually needed. They're only good for two things: guaranteeing consistent unsigned overflow/underflow behaviour, which can be emulated with larger integers through a simple bitwise AND, so this is a moot point; and fread-ing or memcpy-ing raw data generated on one platform over a struct on a different platform, which isn't guaranteed to work anyway because of differences in endianness, negative-number representation, and struct padding, making this a moot point as well.

Fixed-size integers such as int32_t are optional in C: implementations are not required to provide them. Using them specifically harms the portability of your code. Likewise, using an integer type that is smaller than the CPU's native word size may result in slower code due to type promotion and the related sign-extension/zero-extension logic.
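Because the exact-width types are optional, portable code can feature-test for them: `<stdint.h>` defines `INT32_MAX` only when `int32_t` exists. A sketch of such a fallback (the `ivec_component` typedef is illustrative):

```c
#include <stdint.h>

/* Prefer the exact-width type when the implementation provides it,
   otherwise fall back to the always-available least-width type. */
#ifdef INT32_MAX
typedef int32_t       ivec_component;
#else
typedef int_least32_t ivec_component;  /* guaranteed since C99 */
#endif
```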

If you want to make the intended numerical range of an integer clear without using more bits than necessary, then you should be using the `int_fast32_t` or `int_least32_t` types. The former uses a larger integer size if it results in faster code (e.g. by using the CPU's native word size instead of a slower, smaller size), while the latter uses the smallest available integer size that's capable of storing a 32-bit value, even if it results in slower code. Unlike `int32_t`, these types are guaranteed to exist in all C99 (and later) implementations, while also specifically offering a choice between saving RAM and maximising performance.

Clownacy avatar Oct 18 '22 09:10 Clownacy