Add attribute.divisor field for instanced rendering
Description of change

This adds a `divisor` field to the `Attribute` interface for instanced rendering.
Pre-Merge Checklist

- [x] Documentation is changed or added
- [x] Lint process passed (`npm run lint`)
- [x] Tests passed (`npm run test`)
This pull request is automatically built and testable in CodeSandbox.
To see build info of the built libraries, click here or the icon next to each commit SHA.
Latest deployment of this branch, based on commit dee25ca6998a03d0928bdca0e7e328c8600257b5:
| Sandbox | Source |
|---|---|
| pixi.js-sandbox | Configuration |
I wasn't able to test `divisor=0` on webgl2 in my case, something is breaking there :( Non-zero can obviously be emulated by changing `stride`, right?
@ivanpopelyshev Could you share the code that is breaking for you? `divisor=0` basically makes the attribute behave like a non-instanced one. My reasoning for not using a truthiness check for `divisor` was that if an end-user defined an attribute as `{ instanced: true, divisor: 0 }`, the check would return `false`, and the divisor passed to `vertexAttribDivisor` would be set to 1, which is unintended.
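The reasoning above can be sketched with plain objects (this is an illustration, not the actual PixiJS types or defaulting code):

```javascript
// An end-user defines an attribute that is flagged as instanced but
// should advance per vertex (divisor = 0):
const attribute = { instanced: true, divisor: 0 };

// A truthiness check conflates "divisor is 0" with "divisor is unset",
// so it falls back to the instanced default of 1:
const divisorTruthy = attribute.divisor ? attribute.divisor : 1; // → 1, wrong

// An explicit undefined check preserves the user's intent:
const divisorExplicit = attribute.divisor !== undefined ? attribute.divisor : 1; // → 0, as intended
```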
Regarding WebGPU - I don't have much experience with it, but a cursory Google search shows no way to increment a buffer once every N instances. Could `divisor` be a WebGL-only field?
OK, then divisor=0 is not our problem, it's an edge case where `instanced: false` can be used. I thought it was a way to make stride=0, but even stride=0 didn't work for me :( I thought it was supposed to read the same value for all vertices.
Non-zero divisor: is it right that we can just multiply stride instead?
I'm not sure I understand. The stride defines "how many bytes the buffer advances per instance", not "how many instances until the buffer advances".
Let's say the divisor=2, which means that instances 0 and 1 should read the same values, while instances 1 and 2 should read different values. How can you achieve this with a constant stride?
Sure, you could say that on average the buffer advances `stride/divisor` bytes per instance, but we can't set this value as the actual stride, since between instances 0 and 1 the buffer would advance by `stride/2` bytes and the instances would read different values.
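The argument above can be checked with a quick sketch (the stride value is assumed purely for illustration):

```javascript
// With a divisor, the byte offset an instance reads from is
// floor(instance / divisor) * stride. A constant "averaged" per-instance
// stride of stride / divisor cannot reproduce this.
const stride = 16; // bytes per attribute value (assumed for illustration)
const divisor = 2;

const withDivisor = (instance) => Math.floor(instance / divisor) * stride;
const withAveragedStride = (instance) => instance * (stride / divisor);

// Instances 0 and 1 read the same offset, so they read the same value:
withDivisor(0); // 0
withDivisor(1); // 0

// The averaged stride already diverges at instance 1:
withAveragedStride(0); // 0
withAveragedStride(1); // 8 (different offset, different value)
```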
Oh wait.. I didn't understand divisor at all, ok. How do we do it on webgpu?
Like I said, I don't really know webgpu (so maybe you could tag somebody more knowledgeable here), but it seems like this is not possible. As I see it, there are two options: either disallow setting divisor entirely, losing interoperability with the underlying webgl API for some niche edge cases, or make the divisor field webgl-only, preferably with a note about it in the docs.
I like niche cases, I wonder what's the case for divisor.
You mean for a divisor other than 0 or 1? It's for attributes that advance at a slower rate than others - for example, changing colors every two instances while switching transformation matrices every instance.
OK, I understand it for fancy demos, but I can't imagine a real case where colors advance exactly every two instances.
If you want to render large amounts of elements using a limited color palette, the most efficient way to do so is with an attribute divisor > 1, the only alternative being wasting large amounts of gpu memory on duplicating the same colors over and over. It's easy to dismiss any use case that you don't understand as a "fancy demo", but I still haven't heard any actual problems with my implementation, which I would be happy to fix. This discussion seems to be getting rather unproductive, and I would like to hear some other maintainer's opinion on the issue @Zyie @GoodBoyDigital
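A rough back-of-the-envelope sketch of the memory argument (instance count, group size, and color format are assumed purely for illustration):

```javascript
// With divisor > 1 the color buffer holds one entry per group of
// instances; without it, the same color must be duplicated per instance.
const bytesPerColor = 4; // RGBA8, one palette entry
const instances = 100_000;
const divisor = 4; // e.g. every 4 instances share a palette color

const bufferWithDivisor = Math.ceil(instances / divisor) * bytesPerColor; // 100,000 bytes
const bufferDuplicated = instances * bytesPerColor; // 400,000 bytes
```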
> If you want to render large amounts of elements using a limited color palette,
ah, yes, that's the real case :) I have that in my current project, I solved it with float/int textures, I didn't even think about divisor like that. Currently float_textures/webgpu_storage_buffers aren't supported well enough in pixi :(
Anyway, if you are going for that kind of optimization - I recommend cloning pixi (v8 is much more "clonable" than v7), using the build from your own git repo (can be set up for npm), and updating it from upstream from time to time.
My stance on the problem: we should not support features that lead people into very limited dead ends, we have to give workarounds through better ways. For example, I would completely remove the use of the big UBOs that our graphics geometry has, storage_buffers/float_textures are much better and have cleaner code.
If @Zyie or @GoodBoyDigital support divisor, I'll approve too, but I will try to remove it as soon as storage is introduced.
Heya! Thanks for the PR @codedpalette. It's interesting - WebGPU does not support divisors in its instancing, which means this would only work on WebGL. Digging around, it seems that some modern APIs like Vulkan have decided not to take on this feature either. I guess storage_buffers/float_textures would be the alternative in this case!
Either way, I see no harm in adding this. A couple of tweaks to make sure it does not catch WebGPU peeps out:
- It would be good to add a warning if the WebGPU renderer is being used.
- Make sure the docs are clear that this is a WebGL-only feature.
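The requested warning could look something like the sketch below. The function name, attribute shape, and warning text are all hypothetical; the actual PixiJS WebGPU pipeline internals are not quoted here:

```javascript
// Hypothetical check for the WebGPU renderer: a divisor of 1 matches
// WebGPU's per-instance step mode, so only other values need a warning.
function checkAttributeDivisor(attribute) {
    if (attribute.divisor !== undefined && attribute.divisor !== 1) {
        console.warn(
            `Attribute divisor (${attribute.divisor}) is a WebGL-only `
            + 'feature and is ignored by the WebGPU renderer.'
        );
        return false; // divisor cannot be honored
    }
    return true;
}
```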
thanks!
heya @codedpalette! just following up here, let me know if you are cool to modify this PR so we can merge it in :)
Cheers man!
hey @GoodBoyDigital, thanks for the constructive feedback! I've updated the docs on the `divisor` field to specify that it is a WebGL-only feature, and added a warning to the WebGPU pipeline
@GoodBoyDigital Done!