Remove C3D_DEPTHTYPE and Fix Related Things
Feature Request
What feature are you suggesting?
Overview:
Remove C3D_DEPTHTYPE and fix various things related to it.
Smaller Details:
- Completely remove the C3D_DEPTHTYPE type, and accept GPU_DEPTHBUF directly in relevant functions.
- Replace C3D_DEPTHTYPE_OK with a function that checks against actual known valid depth buffer formats. A simple switch statement will do nicely. The existing macro can simply call this function.
- Deprecate C3D_DEPTHTYPE_VAL.
- Add an obvious way to allocate a render target without a depth buffer. If it makes sense to do so, the same could be done for the color buffer. Options include:
  - Add an additional function parameter or a separate function, perhaps taking a 'config' struct like C3D_TexInitParams.
  - Add a GPU_DEPTHBUF-typed #define in Citro3D that will always fail C3D_DEPTHTYPE_OK or its replacement (not ideal, as it would live in a different file from GPU_DEPTHBUF's definition, and thus be easy to miss).
  - Add a GPU_RB_DEPTH_NONE value to the GPU_DEPTHBUF enum (not ideal, as that enum covers hardware concepts, while this is a C3D-specific issue, and it could break code that doesn't use C3D).
- Fix C3D_CalcDepthBufSize and C3D_CalcColorBufSize being able to read beyond their respective size arrays.
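The switch-based replacement for C3D_DEPTHTYPE_OK could look something like the sketch below. The function name C3D_DepthFmtIsValid is hypothetical, and the GPU_DEPTHBUF enum is reproduced inline to keep the sketch self-contained (values mirror ctrulib's layout, where 1 is unused):

```c
#include <stdbool.h>

// Stand-in for ctrulib's GPU_DEPTHBUF; value 1 is intentionally unused.
typedef enum
{
	GPU_RB_DEPTH16          = 0, // 16-bit depth
	GPU_RB_DEPTH24          = 2, // 24-bit depth
	GPU_RB_DEPTH24_STENCIL8 = 3, // 24-bit depth + 8-bit stencil
} GPU_DEPTHBUF;

// Hypothetical replacement for C3D_DEPTHTYPE_OK: only values that name
// a real hardware format pass; everything else (1, >3, casts from
// arbitrary integers) is rejected instead of silently accepted.
static bool C3D_DepthFmtIsValid(GPU_DEPTHBUF fmt)
{
	switch (fmt)
	{
		case GPU_RB_DEPTH16:
		case GPU_RB_DEPTH24:
		case GPU_RB_DEPTH24_STENCIL8:
			return true;
		default:
			return false;
	}
}
```

The existing C3D_DEPTHTYPE_OK macro could simply forward to this function during a deprecation period.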
 
Nature of Request:
- Removal and Replacement of buggy/fragile features
 
Why would this feature removal be useful?
C3D_DEPTHTYPE and the functions/macros that use it can hide ugly compiler-dependent bugs, and are also rather unintuitive.
- C3D_DEPTHTYPE_OK returns true for all depth values >= 0, but only 0, 2, and 3 are known to be acceptable. Passing any value greater than 3 causes C3D to attempt to allocate a depth buffer, reading out of bounds on the depthFmtSizes array. This can either cause the allocation to fail or allow an allocation of an invalid size to silently succeed. The same applies to color buffers, though they have no such validity check at all.
- C3D_DEPTHTYPE_OK relies on negative values to disable depth buffer allocation. This is technically acceptable due to the transparent enum, but in practice it can hide compiler-dependent bugs when casting between C3D_DEPTHTYPE and GPU_DEPTHBUF. The compiler may treat GPU_DEPTHBUF values as unsigned, making it impossible to convert one into a negative C3D_DEPTHTYPE; a GPU_DEPTHBUF passed to the function (from, say, a variable) can then never fail the check, triggering the undefined allocations mentioned above. I personally ran across this bug: when compiling at my usual -O3 optimization level, everything worked properly, but dropping to -O0 assigned GPU_DEPTHBUF the u8 type, underflowing -1 to 255 and causing an allocation failure. This occurred with the devkitpro/devkitarm:20240202 Docker image.
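A bounds check against the size table would remove the out-of-bounds read and give invalid positive values (like the 255 produced by the underflow above) a defined failure mode. A sketch, assuming per-pixel sizes of 2, 3, and 4 bytes for the three valid formats with index 1 unused; the actual depthFmtSizes table in citro3d may be laid out differently:

```c
#include <stdint.h>

// Hypothetical bounds-checked variant of C3D_CalcDepthBufSize.
// Returns 0 for any format index that does not name a valid entry,
// instead of reading past the end of the table.
static uint32_t CalcDepthBufSizeChecked(uint32_t width, uint32_t height, int fmt)
{
	static const uint8_t depthFmtSizes[] = { 2, 0, 3, 4 }; // index 1 unused
	if (fmt < 0 || fmt >= (int)(sizeof(depthFmtSizes)/sizeof(depthFmtSizes[0])))
		return 0; // out of range: 255 from the underflow bug lands here
	if (depthFmtSizes[fmt] == 0)
		return 0; // index 1 is not a valid format
	return width * height * depthFmtSizes[fmt];
}
```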
- The definition of C3D_DEPTHTYPE is visually messy, and the type is silent by nature, so many will probably ignore it.
- renderqueue.h does not visibly show any way to create a render target without a depth buffer, despite this being a supported feature. No named constant, specialty function, or comment exists for this purpose. This is unintuitive.
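For the last point, the 'config struct' option suggested above would make "no depth buffer" an explicit, discoverable choice. A minimal sketch, modeled loosely on C3D_TexInitParams; every name below is hypothetical, and plain ints stand in for GPU_COLORBUF/GPU_DEPTHBUF to keep it self-contained:

```c
#include <stdbool.h>
#include <stdint.h>

// Hypothetical init-params struct for render target creation.
typedef struct
{
	uint16_t width;
	uint16_t height;
	int      colorFmt; // would be GPU_COLORBUF in the real API
	bool     useDepth; // false => no depth buffer is allocated
	int      depthFmt; // would be GPU_DEPTHBUF; ignored when useDepth is false
} C3D_RenderTargetInitParams;
```

A caller could then write `.useDepth = false` in a designated initializer, which reads far more clearly than passing a magic negative value through C3D_DEPTHTYPE.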