
Deterministic execution time

Open ahmedius2 opened this issue 1 year ago • 3 comments

I am trying to test how the convolution time changes with input size. I see that the processing time can actually increase when part of the input is filtered out/reduced, which is unexpected. Is there a way to make spconv's execution time a deterministic function of the input size?
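A minimal sketch of the kind of measurement in question (a sketch only, assuming spconv v2.x's spconv.pytorch API; the channel counts and voxel grid below are placeholders, not an actual setup):

```python
# Rough micro-benchmark: time one submanifold conv for different numbers
# of active voxels. Assumes spconv v2.x (spconv.pytorch).
import time
import torch
import spconv.pytorch as spconv

device = torch.device("cuda")
spatial_shape = [41, 1600, 1408]               # KITTI-like voxel grid (placeholder)
layer = spconv.SubMConv3d(4, 16, 3, indice_key="subm1").to(device)

def random_input(n):
    # batch index 0 + random (z, y, x) coordinates, int32 as spconv expects
    coords = torch.stack([torch.randint(0, s, (n,)) for s in spatial_shape], dim=1)
    coords = torch.unique(coords, dim=0)       # drop accidental duplicates
    indices = torch.cat([torch.zeros(len(coords), 1, dtype=torch.int64), coords], dim=1)
    return spconv.SparseConvTensor(torch.randn(len(coords), 4).to(device),
                                   indices.int().to(device),
                                   spatial_shape, batch_size=1)

for n in (4000, 16000, 64000):
    x = random_input(n)
    with torch.no_grad():
        for _ in range(5):                     # warm-up; index pairs get cached on x
            layer(x)
        torch.cuda.synchronize()
        t0 = time.perf_counter()
        for _ in range(20):
            layer(x)
        torch.cuda.synchronize()
    print(f"{n:6d} voxels: {(time.perf_counter() - t0) / 20 * 1e3:.2f} ms")
```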

ahmedius2 avatar May 04 '23 02:05 ahmedius2

Same here. When I downsample by 4x, the inference time even increases a little. I am digging in with some toy examples, but if anyone knows the reason, please point it out.

lixinghe1999 avatar May 09 '23 10:05 lixinghe1999

I have tested the inference time as follows (platform: OpenPCDet, SECOND, KITTI, RTX 3090):
16000 points with normal distribution: 8.13 ms
4000 points with normal distribution: 4.2 ms
16000 points with box distribution: 3.28 ms
4000 points with box distribution: 3.27 ms

My intuition is that a point cloud with a better-defined object shape will run faster through spconv, which may be due to memory I/O: when points form an object, neighboring points can be accessed conveniently, which is not the case for a normal distribution. As a result, a real point cloud, being closer to objects than to a normal distribution, already runs fast and does not get much acceleration when we downsample it.
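One quantity that is cheap to check and that directly drives spconv's work is how many active input/output pairs a 3x3x3 submanifold kernel has to gather (its index-pair list, often called the rulebook). A rough sketch that counts this for a spread-out versus a clustered synthetic distribution; the distributions and sizes here are made up purely for illustration:

```python
# Count the active input/output pairs a 3x3x3 submanifold kernel would
# gather ("rulebook" size) for two synthetic voxel distributions.
# Pure PyTorch, no spconv needed; sizes are illustrative only.
import itertools
import torch

def pair_count(coords):
    """coords: int tensor [N, 3] of unique voxel coordinates."""
    active = set(map(tuple, coords.tolist()))
    offsets = list(itertools.product((-1, 0, 1), repeat=3))   # 27 kernel offsets
    return sum((z + dz, y + dy, x + dx) in active
               for (z, y, x) in active
               for (dz, dy, dx) in offsets)

torch.manual_seed(0)
n, grid = 16000, 1400
# "box": uniform over the whole grid -> voxels rarely touch each other
box = torch.randint(0, grid, (n, 3))
# "normal": Gaussian around the grid centre -> voxels cluster together
normal = (torch.randn(n, 3) * 40 + grid // 2).long().clamp(0, grid - 1)

for name, coords in (("box", box), ("normal", normal)):
    coords = torch.unique(coords, dim=0)
    print(f"{name}: {len(coords)} voxels, {pair_count(coords)} gather/scatter pairs")
```

Comparing that pair count against the measured times should tell whether the amount of gather/GEMM work or the memory access pattern is the dominant factor.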

Further, I get the following results:
600000 points with box distribution: 6.9 ms
300000 points with box distribution: 5.0 ms
which partially supports my conclusion that the computation has to be heavy enough before input size really starts to matter.

@ahmedius2 Have you tested on a weaker device, like a Jetson Xavier? I guess that when compute resources are limited, the memory bound matters less. Note that I am not 100% confident, since I haven't checked the implementation of spconv.

lixinghe1999 avatar May 11 '23 06:05 lixinghe1999

I tested on a Jetson AGX Xavier with a fixed number of uniformly distributed voxels placed over differently sized areas, evaluating VoxelNeXt's spconv-based 3D backbone. Here are some results.

Area sizes and execution times for 1500 voxels:
80x80 - 36 ms
80x400 - 45 ms
400x80 - 45 ms
400x400 - 61 ms

Area sizes and execution times for 15000 voxels:
80x80 - 46 ms
80x4000 - 90 ms
4000x80 - 90 ms
4000x4000 - 61 ms

Having voxels close to each other appears to reduce the execution time, but the opposite can also happen. I guess we need a better understanding of spconv's internals to explain what is going on.
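For anyone who wants to poke at this outside the full OpenPCDet/VoxelNeXt stack, here is a rough sketch of the same kind of sweep: a fixed number of voxels scattered uniformly over differently sized sub-areas of a fixed grid, pushed through a tiny spconv stack that merely stands in for the real backbone (assumes spconv v2.x; all layer and grid sizes are arbitrary):

```python
# Fixed voxel count, varying scatter area, timed through a small spconv stack.
# The stack below is NOT the VoxelNeXt backbone, just a stand-in.
import time
import torch
import spconv.pytorch as spconv

device = torch.device("cuda")
net = spconv.SparseSequential(
    spconv.SubMConv3d(4, 32, 3, indice_key="s1"),
    spconv.SparseConv3d(32, 64, 3, stride=2, padding=1),
    spconv.SubMConv3d(64, 64, 3, indice_key="s2"),
).to(device)

def bench(n_voxels, area, spatial_shape=(40, 400, 400)):
    # scatter voxels uniformly inside an (area[0] x area[1]) sub-area of the grid
    zs = torch.randint(0, spatial_shape[0], (n_voxels,))
    ys = torch.randint(0, area[0], (n_voxels,))
    xs = torch.randint(0, area[1], (n_voxels,))
    coords = torch.unique(torch.stack([zs, ys, xs], dim=1), dim=0)
    indices = torch.cat([torch.zeros(len(coords), 1, dtype=torch.int64), coords], dim=1)
    indices = indices.int().to(device)
    feats = torch.randn(len(coords), 4, device=device)

    def run():
        # fresh tensor each call so index-pair building is part of the measurement,
        # as it would be for every new frame at inference time
        x = spconv.SparseConvTensor(feats, indices, list(spatial_shape), batch_size=1)
        return net(x)

    with torch.no_grad():
        for _ in range(5):                         # warm-up
            run()
        torch.cuda.synchronize()
        t0 = time.perf_counter()
        for _ in range(20):
            run()
        torch.cuda.synchronize()
    return (time.perf_counter() - t0) / 20 * 1e3

for area in ((80, 80), (80, 400), (400, 80), (400, 400)):
    print(f"1500 voxels on {area[0]}x{area[1]}: {bench(1500, area):.1f} ms")
```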

ahmedius2 avatar May 19 '23 16:05 ahmedius2