
How to solve the problem of having 20M + points

Open meidachen opened this issue 1 year ago • 2 comments

Thank you for the great work. I'm trying your work on compressing my large-scale scene, everything works great until I try to visualize it using the viewer. It seems like there are too many points to be rendered using the current implementation. specifically, the error/limitation occurred as: Caused by: In a ComputePass note: encoder = render command encoder In a dispatch command, indirect:false note: compute pipeline = preprocess pipeline Each current dispatch group size dimension ([153754, 1, 1]) must be less or equal to 65535

This originates from `render.rs` line 409:

```rust
let wgs_x = (pc.num_points() as f32 / 256.0).ceil() as u32;
pass.dispatch_workgroups(wgs_x, 1, 1);
```
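For reference, the workgroup count that trips the limit can be reproduced with a minimal sketch of the same calculation (the `workgroups` helper is hypothetical, mirroring the `render.rs` line above):

```rust
// Hypothetical helper reproducing the workgroup-count calculation
// from render.rs: one workgroup per 256 points, rounded up.
fn workgroups(num_points: u32) -> u32 {
    (num_points as f32 / 256.0).ceil() as u32
}

fn main() {
    // 20M points already exceed wgpu's default per-dimension limit of 65535.
    println!("{}", workgroups(20_000_000)); // 78125
}
```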

Is there any workaround for this to handle more points in the viewer?

Thank you in advance for the help!

meidachen avatar Apr 28 '24 22:04 meidachen

Hello,

thanks for your interest! I hope this fixes your issue:

You have to increase the limit for max_compute_workgroups_per_dimension. How high you can set it depends on the limit of your GPU driver.

For the renderer you can edit the limits here: https://github.com/KeKsBoTer/web-splat/blob/5dffdc8b259c8ecda791ed4c3aa12a154b52dbea/src/lib.rs#L100
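For illustration, raising that limit when requesting the device might look like the following. This is a minimal sketch, not the actual `lib.rs` code: the `wgpu::Limits` field name is real, but the surrounding setup (the `adapter` variable and where the struct is plugged in) is assumed and must match how web-splat requests its device:

```rust
// Sketch (assumed context): instead of wgpu's conservative default of
// 65535, request the adapter's own reported maximum for this limit.
let limits = wgpu::Limits {
    max_compute_workgroups_per_dimension: adapter
        .limits()
        .max_compute_workgroups_per_dimension,
    ..wgpu::Limits::default()
};
```

Note that a requested limit can never exceed what `adapter.limits()` reports, which is why the fix is driver-dependent.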

If this does not solve the problem, one would need to invoke the shader multiple times (which would require some rework of the Rust and shader code).
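That multi-invocation rework could be sketched as splitting the total workgroup count into chunks of at most 65 535 and issuing one dispatch per chunk. The helper below is hypothetical (nothing named `dispatch_chunks` exists in web-splat); it only shows the splitting arithmetic:

```rust
const MAX_WGS_PER_DIM: u32 = 65_535; // default wgpu limit hit in the error above
const WORKGROUP_SIZE: u32 = 256;     // matches the divisor used in render.rs

/// Hypothetical helper: split the work for `num_points` into
/// (workgroup_offset, workgroup_count) chunks, each small enough
/// for a single compute dispatch.
fn dispatch_chunks(num_points: u32) -> Vec<(u32, u32)> {
    let total_wgs = num_points.div_ceil(WORKGROUP_SIZE);
    let mut chunks = Vec::new();
    let mut offset = 0;
    while offset < total_wgs {
        let count = (total_wgs - offset).min(MAX_WGS_PER_DIM);
        chunks.push((offset, count));
        offset += count;
    }
    chunks
}
```

Each chunk's offset would then have to reach the shader (e.g. via a push constant or uniform) so it can compute a global point index as `offset * 256 + local index`. An alternative that avoids multiple dispatches entirely is a 2D dispatch, `dispatch_workgroups(x, y, 1)`, with the index flattened inside the shader.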

KeKsBoTer avatar Apr 29 '24 06:04 KeKsBoTer

@KeKsBoTer, thanks for your response. It seems that by default `max_compute_workgroups_per_dimension` is already at the maximum. What would you suggest looking into? Does this mean the data itself also needs to be chunked, so that the shader can process each chunk and the results are eventually merged?

meidachen avatar Apr 29 '24 07:04 meidachen