Is it appropriate to use floating point to store the window size?
As the title says: is there any situation that actually needs a floating-point window size? If not, I think we should use an integer to store the window size.
I have always used unsigned int to store window sizes. Some graphics/system APIs do take float as input, but AFAIK they only use the integer part.
In the end it still needs to be converted to integer pixels anyway, so storing it as a float feels unnecessary.
Yep, unsigned int is better, since a negative window size is meaningless.