QuanticDisaster
I believe the model trains on the voxelized point cloud to reduce density, so `voxel_size` corresponds to the edge length of each voxel and `voxel_max` to the maximum number of voxels/points kept in a sample...
Yes, this is how I personally understand it
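As a rough illustration of what I mean by voxel downsampling (this is only a minimal sketch, not the repo's actual preprocessing code; the function name `voxel_downsample` and the defaults are hypothetical, mirroring the `voxel_size` / `voxel_max` config keys):

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.04, voxel_max=80000):
    """Keep roughly one point per voxel, then cap the sample at voxel_max points.

    points: (N, 3) float array of xyz coordinates.
    voxel_size / voxel_max: hypothetical names mirroring the config keys.
    """
    # Assign each point to an integer grid cell of side voxel_size
    coords = np.floor(points / voxel_size).astype(np.int64)
    # Keep the first point encountered in each occupied voxel
    _, keep_idx = np.unique(coords, axis=0, return_index=True)
    sampled = points[np.sort(keep_idx)]
    # If the cloud is still too dense, randomly crop it to voxel_max points
    if voxel_max is not None and len(sampled) > voxel_max:
        choice = np.random.choice(len(sampled), voxel_max, replace=False)
        sampled = sampled[choice]
    return sampled

# Example: downsample a random cloud of 200k points
cloud = np.random.rand(200_000, 3) * 5.0
print(voxel_downsample(cloud).shape)
```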
I am not the author nor even one of the contributors, so I can't tell you for sure, but I think they use the same preprocessed data as for PAConv...
As a reference, you can take the training log [shared by a user](https://drive.google.com/drive/folders/1oXuDGvj1a8vTj72NoQNtqvb7m2r-8ehX) on one of the closed issues. With 4 GPUs, you can see the full training took around 16...
@aaron-h-code This is not my run; it was shared [in this issue](https://github.com/POSTECH-CVLab/point-transformer/issues/17#issuecomment-984594772), saying that the code was run on 4 RTX 3090s with the default config in this repo.
I encountered the same problem and corrected it the following way: https://github.com/POSTECH-CVLab/point-transformer/issues/27. As for reproducibility, you can find a log and pretrained models shared by a user in the closed...
> > > this issue is still going on, after 20 days of being reported.
> >
> > manually installing `websockets==9.1` continues to be the solution, while installing dependencies as-is from...