
Use the C standard integer types instead of defining your own.

Open GavinNL opened this issue 8 months ago • 11 comments

Version/Branch of Dear ImGui:

Version 1.92, Branch: master (master/docking/etc.)

Back-ends:

imgui_impl_sdlrenderer2.cpp + imgui_impl_sdlrenderer2.cpp

Compiler, OS:

Gcc13/Linux

Full config/build information:

No response

Details:

Hello,

I don't know if this has been discussed before; I searched the issues but didn't find anything.

I noticed that you have typedefs for some of the integer types that ImGui uses. I don't think these are portable, as the types that you specified are not guaranteed to be the size that you are requesting.

typedef unsigned int       ImGuiID;// A unique ID used by widgets (typically the result of hashing a stack of string)
typedef signed char        ImS8;   // 8-bit signed integer
typedef unsigned char      ImU8;   // 8-bit unsigned integer
typedef signed short       ImS16;  // 16-bit signed integer
typedef unsigned short     ImU16;  // 16-bit unsigned integer
typedef signed int         ImS32;  // 32-bit signed integer == int
typedef unsigned int       ImU32;  // 32-bit unsigned integer (often used to store packed colors)
typedef signed   long long ImS64;  // 64-bit signed integer
typedef unsigned long long ImU64;  // 64-bit unsigned integer

The C header <stdint.h> provides cross-platform, fixed-width integer types that are guaranteed to be the specified size. Replacing the above code with the following works and is more portable across compilers/architectures.

#include <stdint.h>

typedef uint32_t ImGuiID;// A unique ID used by widgets (typically the result of hashing a stack of string)
typedef int8_t   ImS8;   // 8-bit signed integer
typedef uint8_t  ImU8;   // 8-bit unsigned integer
typedef int16_t  ImS16;  // 16-bit signed integer
typedef uint16_t ImU16;  // 16-bit unsigned integer
typedef int32_t  ImS32;  // 32-bit signed integer == int
typedef uint32_t ImU32;  // 32-bit unsigned integer (often used to store packed colors)
typedef int64_t  ImS64;  // 64-bit signed integer
typedef uint64_t ImU64;  // 64-bit unsigned integer

I noticed this while doing some template work: the 64-bit integer types were failing my static_asserts. For example:

// fails on Linux X64/GCC-13
static_assert( std::is_same_v<ImU64, uint64_t>, "bad");
static_assert( std::is_same_v<ImS64, int64_t>, "bad");

This may not be that big of an issue, but it would be good practice to use the standard integer types rather than defining your own based on int/long/signed/unsigned/etc.

Replacing the typedefs with the C standard integer types does not cause any issues compiling imgui_demo, but I am unsure whether anyone else would have issues with this change.

Screenshots/Video:

No response

Minimal, Complete and Verifiable Example code:

No response

GavinNL avatar Apr 02 '25 14:04 GavinNL

Hello,

Thanks for your suggestion.

Generally I don't really care about the vague concept of "good practices", but I'm very interested in any actual real-world problems arising from making that change vs keeping things as they have been for a long time.

I don't think these are portable, as the types that you specified are not guaranteed to be the size that you are requesting.

AFAIK I would tend to believe they are portable and exactly the size we expect, unless proven they aren't.

The C header <stdint.h> provides cross platform/compiler sized integer types that are guaranteed to be of the specified size.

In theory, yes. In practice it needs to be checked against all the esoteric compilers, architectures, and SDKs we support. I suppose that in practice the easiest way to check may be to try it and see who might complain.

But see e.g. https://github.com/ocornut/imgui_club/pull/56, where we actually changed leftover int32_t to our types in that code (I just asked the person for details).

ocornut avatar Apr 02 '25 15:04 ocornut

There's a table on the C data types page that indicates how large each type is. One thing to note is that signed int is only guaranteed to be at least 16 bits, not at least 32 bits. So it's possible there's an architecture with 16-bit ints.

I haven't had many issues when working on desktop software, but I have come across problems in embedded systems (not ImGui related) where int was 16 bits, which caused problems when I didn't realise it and assumed it would be 32 bits. I have always used the C standard integer types since running into that.

This issue is not crucial, so feel free to close it if you like.

GavinNL avatar Apr 02 '25 17:04 GavinNL

Generally I don't really care about vague concept of "good practices"

I'm also not a huge fan of such things, but IMO this is actually one of the more reasonable ones.

To me it's similar to adding technically-unnecessary parentheses to clarify the intended order of operations. Sure, it's not necessary, but it's easier to just not have to think about it in the first place.

Like @GavinNL said, embedded compilers still love taking advantage of C's uselessly vague type definitions even in the modern day. Desktop compilers have been sensibly-behaved for ages now, but Omar I'm actually surprised you haven't been personally burned by this in the past 😅

That being said, I would worry slightly about this change having unintended consequences, particularly in the form of warnings in consuming code.

PathogenDavid avatar Apr 02 '25 19:04 PathogenDavid

One of my worries is that the warning for using %d to print an ImU32 would be triggered in any external codebase using ImU32 + printf-style statements. That's likely not overly common, but I'm fairly content with being able to use %d instead of %ld everywhere.

but Omar I'm actually surprised you haven't been personally burned by this in the past 😅

I suppose in the MS-DOS era I was, but I'm assuming today's architectures with 16-bit ints are not likely to run dear imgui anyhow?

ocornut avatar Apr 04 '25 13:04 ocornut

Fair point. The C-style print statements are probably where the biggest issues will arise.

What if you had a preprocessor definition to override the default imgui types with the standard types? That way the default behaviour is unchanged, and anyone who wants the change can enable it in their build.

#ifdef IMGUI_USE_C_STANDARD_INTEGER_TYPES
#include <stdint.h>
...
typedef int64_t  ImS64;  // 64-bit signed integer
typedef uint64_t ImU64;  // 64-bit unsigned integer
#else 
...
typedef signed   long long ImS64;  // 64-bit signed integer
typedef unsigned long long ImU64;  // 64-bit unsigned integer
#endif 

GavinNL avatar Apr 04 '25 13:04 GavinNL

Then third party code/libraries would have to take on an additional responsibility if they use those types.

What actual benefit would you get from switching to the other types?

ocornut avatar Apr 04 '25 15:04 ocornut

Where I noticed the issue was when writing some template code:

if constexpr ( std::is_same_v<T, uint64_t> )
{
}

This failed when T was ImU64. Now that I know the problem, I can work around it, but it was behaviour that I didn't expect.

GavinNL avatar Apr 04 '25 15:04 GavinNL

I think I accidentally marked this as completed. You can close/ignore this issue.

GavinNL avatar Apr 04 '25 15:04 GavinNL

I'll reopen if it was accidentally closed. But my question is: if you understand the type being used and can work around it, what benefit would you get from it using the other type? Is the workaround particularly tricky or problematic?

ocornut avatar Apr 04 '25 15:04 ocornut

It's easy to work around if you know that ImU64 will not be the same as uint64_t.

I replaced my code with the following

if constexpr( std::is_unsigned_v<T> && sizeof(T)==sizeof(uint64_t))
{
}

My reasoning for opening the ticket was to try to get a bit closer to conventions. I can understand your point about the "%d" potentially causing issues for other users, so I'm fine with not having this change.

GavinNL avatar Apr 04 '25 15:04 GavinNL

I'll reopen if it was accidentally closed. But my question is: if you understand the type being used and can work around it, what benefit would you get from it using the other type? Is the workaround particularly tricky or problematic?

I think a major point to using the standard types would be to reduce the amount of cognitive effort the end user needs to put in. I'm not a C nor C++ developer whatsoever, but I have a profound appreciation for languages like C# where all libraries that want to use something like an integer or single-precision number must use the int and float language keywords respectively.

It tends to make reasoning about the code much easier when you have standardized types that you can expect to see everywhere. IMO, macros/type aliasing make things really messy and hard to understand. I once found a library that type-aliased float away twice and I had to dig for like 30 minutes to figure out what they actually meant.

BlueCyro avatar Apr 15 '25 04:04 BlueCyro

  1. There are platforms without stdint header.
  2. ImGui, as far as I know, is a C++ library, not a C library, so usage of the C header "stdint.h" makes no sense at all.
  3. Let the end user decide which types to use. If you have problems on your platform with int being 64 bits instead of 32, just change the typedef.
  4. One thing that could be done better there is to use some defines or templates to let the user customize the typedefs used by ImGui. Something like:
#ifdef IMGUI_PLATFORM_INT32_TYPE
typedef IMGUI_PLATFORM_INT32_TYPE ImS32;
#else
typedef int ImS32;
#endif

or, with fewer macros and more templates:

namespace ImGui{
    // defaults
    template <int BitsSize> struct PlatformInt {
        typedef int ImS32;
        typedef short ImS16;
        // ...
    };
}
// ...
typedef ImGui::PlatformInt<32>::ImS32 ImS32;

// user code on some 64-bit platform
namespace ImGui{
    template <> struct PlatformInt<32> {
        typedef int32_t ImS32;
        typedef uint32_t ImU32;
    };
}

oktonion avatar Jun 23 '25 07:06 oktonion