Xiaoyang Wu
Hi, the error might be caused by a large `num_worker`; your local machine is running out of memory. (I am not sure; someone mentioned this in another issue.)
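If you want to experiment with this, here is a minimal PyTorch sketch (with a dummy dataset standing in for the real one; Pointcept's config key is `num_worker`, while plain PyTorch uses `num_workers`):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy dataset just to keep the sketch self-contained.
dataset = TensorDataset(torch.randn(100, 3))

if __name__ == "__main__":
    # Each worker is a separate process with its own memory footprint;
    # lowering num_workers (even to 0) trades loading speed for RAM.
    loader = DataLoader(dataset, batch_size=4, num_workers=2, pin_memory=True)
    for (batch,) in loader:
        pass  # per-batch work goes here
```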
Hi, the released weights were trained with the code tagged v1.5.1. The current code has a modified model structure.
Hi, I attached one of my configs for PTv3 + Instance Segmentation (experimental version). I forgot which one is the best, but I hope it can serve as a reference. I will check...
Hi all, I recently found that a `.pth` file written by `torch.save` is extremely slow to load compared with an `.npy` file written by `np.save`. I will shift the data structure to the following...
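As an illustrative sketch of such a shift (the file name and keys like `coord` are hypothetical, not the actual Pointcept layout): split one pickled per-scene dict into one `.npy` file per array, so a single array can be loaded without unpickling the whole scene.

```python
import os
import numpy as np
import torch

# Convert a per-scene dict cached with torch.save into one .npy per array.
# weights_only=False is needed on newer torch to unpickle arbitrary dicts.
data = torch.load("scene0000_00.pth", weights_only=False)

out_dir = "scene0000_00"
os.makedirs(out_dir, exist_ok=True)
for key, value in data.items():
    np.save(os.path.join(out_dir, f"{key}.npy"), np.asarray(value))

# Loading a single array no longer touches the rest of the scene:
coord = np.load(os.path.join(out_dir, "coord.npy"))
```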
> > Hi, it might be because the original point cloud is too huge and too many sub-point-cloud pieces need to be predicted during testing...
> When executing a command, the system displays an error that it cannot recognize the CUDA home. How should I resolve this issue?

Set it explicitly: `export CUDA_HOME=xxxx`
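To check which CUDA toolkit PyTorch's extension builder currently sees (a quick diagnostic, not part of Pointcept itself):

```python
# If this prints None, set CUDA_HOME to your toolkit path before building.
from torch.utils.cpp_extension import CUDA_HOME

print(CUDA_HOME)  # the actual path is machine-specific
```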
In v1.5.2, we chose to cache the full prediction for exporting the result to the Waymo benchmark, yet in the previous version it cached the downsampled point cloud.
> I do have 1.5.2, but I still get down-sampled label files.

https://github.com/Pointcept/Pointcept/blob/main/pointcept/engines/test.py#L216 Here, we save the prediction after inverse mapping to the original scale.

> Before that, if I still use...
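The idea behind saving after the inverse mapping, as a toy sketch (not Pointcept's actual code; the grid size is hypothetical): grid sampling keeps one point per voxel and records, for every original point, which kept point represents it, so per-voxel predictions can be broadcast back to full resolution.

```python
import numpy as np

coord = np.random.rand(10, 3)                 # original points
voxel = (coord / 0.5).astype(int)             # hypothetical grid size 0.5
_, keep_idx, inverse = np.unique(
    voxel, axis=0, return_index=True, return_inverse=True
)
inverse = inverse.reshape(-1)                 # guard against 2-D inverse arrays
pred_down = np.arange(len(keep_idx))          # fake per-voxel predictions
pred_full = pred_down[inverse]                # mapped back to the original scale
assert pred_full.shape[0] == coord.shape[0]
```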
Hi, you can adjust the default grid size in both the data transform and the model. Splitting the scene into smaller regions is also a good choice.
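For the transform side, a hypothetical excerpt of a Pointcept-style config (the exact keys may differ between versions, so treat this as a sketch):

```python
# Coarsening the voxel size of GridSample reduces the number of points
# per sample at the cost of geometric detail.
transform = [
    dict(
        type="GridSample",
        grid_size=0.05,  # coarser than a typical default such as 0.02
        hash_type="fnv",
        mode="train",
        return_grid_coord=True,
    ),
]
```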
Sorry for the late response. It is not precise, as the evaluated point cloud is downsampled. Refer to the config here (https://github.com/Pointcept/Pointcept/blob/main/configs/s3dis/semseg-pt-v3m1-0-base.py#L139-L164) for S3DIS, along with the config for ScanNet. The...