Xiaoyang Wu
Hi, no CUDA environment is installed (or exported) in your local environment.
Currently, the simplest way to solve the problem is to create a conda environment from the following environment.yml: ``` name: pointcept-torch2.3.1-cu12.1 channels: - pyg - pytorch - nvidia/label/cuda-12.1.1 -...
Hi, you can preprocess the Toronto3D and SensatUrban datasets into the same structure as our processed ScanNet: ``` DATASET | - train | - scene0000.pth | - ... | - val | -...
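A minimal sketch of what one preprocessed scene could look like. The key names (`coord`, `color`, `semantic_gt`) are assumptions based on Pointcept's ScanNet-style preprocessing and may need adjusting for Toronto3D / SensatUrban:

```python
import numpy as np

# Hypothetical sketch of one preprocessed scene; key names are assumed
# from Pointcept's ScanNet-style preprocessing, not verified for these datasets.
num_points = 1000
scene = {
    "coord": np.random.rand(num_points, 3).astype(np.float32),              # xyz
    "color": np.random.randint(0, 256, (num_points, 3)).astype(np.float32), # rgb
    "semantic_gt": np.random.randint(0, 8, num_points).astype(np.int64),    # labels
}
# Each scene dict is then saved as e.g. DATASET/train/scene0000.pth;
# since Pointcept loads .pth files, torch.save(scene, path) would be used.
```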
1. Did you mean the file under the following link: https://github.com/Pointcept/Pointcept/blob/main/pointcept/datasets/s3dis.py 2. Yes, but each key drops the "room_" prefix and the trailing "s", e.g. "room_coords" -> "coord", "room_colors" -> "color".
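The rename rule above can be sketched as a small helper (a hypothetical illustration, not code from the repo):

```python
# Hypothetical helper implementing the rename rule described above:
# strip the "room_" prefix and a trailing plural "s" from each raw key.
def rename_key(key):
    if key.startswith("room_"):
        key = key[len("room_"):]
    if key.endswith("s"):
        key = key[:-1]
    return key

# e.g. {"room_coords": xyz, "room_colors": rgb} -> {"coord": xyz, "color": rgb}
def rename_keys(raw):
    return {rename_key(k): v for k, v in raw.items()}
```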
Here (https://github.com/Pointcept/Pointcept/blob/main/pointcept/datasets/s3dis.py#L22), when the file is imported, the S3DIS dataset is registered with DATASET. Consequently, when we build a dataset, we only need to specify its "type", and DATASET will...
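A minimal sketch of this registry pattern (a simplification for illustration, not Pointcept's actual implementation): importing the module runs the decorator, which records the class under its name, and `build()` then looks it up by `"type"`.

```python
# Simplified registry sketch: decorator records the class; build() looks it
# up by the "type" key and passes the remaining config keys as kwargs.
class Registry:
    def __init__(self):
        self._modules = {}

    def register_module(self, cls):
        self._modules[cls.__name__] = cls
        return cls

    def build(self, cfg):
        cfg = dict(cfg)                       # copy so pop() doesn't mutate the caller's cfg
        cls = self._modules[cfg.pop("type")]  # resolve the class by its registered name
        return cls(**cfg)


DATASETS = Registry()


@DATASETS.register_module
class S3DISDataset:
    def __init__(self, split="train"):
        self.split = split
```

With this, `DATASETS.build(dict(type="S3DISDataset", split="val"))` returns an `S3DISDataset` configured for the "val" split.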
1. Oops, it might be a bug. I intended to turn "Area" into "area", but currently it just keeps "Area" as "Area". I will remove the replacement in...
Hi, you can add support for your custom dataset by modifying the `Collect` config in the `data.train(val,test).transform` pipeline (especially `feat_keys`) and `in_channels` in the model config.
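A hedged sketch of how the two settings relate, assuming a hypothetical per-point "intensity" feature on the custom dataset: `feat_keys` controls which arrays get concatenated into the model input, so `in_channels` must equal the summed channel widths.

```python
# Hypothetical example: a custom dataset adds a 1-channel "intensity" feature.
feat_dims = {"coord": 3, "color": 3, "intensity": 1}  # channels per key (assumed)

transform = [
    dict(
        type="Collect",
        keys=("coord", "segment"),
        feat_keys=("coord", "color", "intensity"),  # concatenated into the input features
    ),
]

# in_channels in the model config must match the concatenated feature width.
in_channels = sum(feat_dims[k] for k in transform[0]["feat_keys"])  # 3 + 3 + 1 = 7
model = dict(backbone=dict(in_channels=in_channels))
```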
This is a normal phenomenon for point cloud training. In my understanding (which might not be correct), it is caused by the dynamic shape of point cloud data -- the GPU...
Hey, it seems to be caused by LovaszLoss processing a segment whose labels are all equal to the ignore index. There are two solutions: 1. skip the loss computation in validation mode; 2. remove all...
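A sketch in the spirit of the first solution: guard the loss call when every label is ignored. The function name is hypothetical, and `cross_entropy` stands in for LovaszLoss (which is not a torch builtin):

```python
import torch
import torch.nn.functional as F

def guarded_loss(logits, target, ignore_index=-1):
    # Hypothetical guard (cross_entropy as a stand-in for LovaszLoss):
    # if every label equals ignore_index, skip the real loss and return a
    # zero that still carries a grad_fn, so backward() stays valid.
    valid = target != ignore_index
    if not valid.any():
        return logits.sum() * 0.0
    return F.cross_entropy(logits, target, ignore_index=ignore_index)
```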
> ## In this PR, > The main idea is to log the semantic segmentation metrics (IoU, accuracy) during training so that they can be compared with the validation metrics...