Open3D-ML

Some errors when testing on custom data with PointTransformer

Status: Open · Arno-Tu opened this issue 2 years ago · 0 comments

Describe the issue

I had a problem when testing a trained PointTransformer model on custom data. Depending on how the `feat` entry of the input dict is set, one of two errors is raised:

- When `feat` is assigned a value other than `None`:
  `RuntimeError: Tensor for 'out' is on CPU, Tensor for argument #1 'self' is on CPU, but expected them to be on GPU (while checking arguments for addmm)`
- When `feat` is `None`:
  `TypeError: expected Tensor as element 0 in argument 0, but got NoneType`

When the model is changed to RandLA-Net, the same test works fine.
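For clarity, the two input variants that trigger the two errors differ only in the `feat` entry. This is a minimal, hypothetical sketch of the data dicts (the point cloud here is random placeholder data, not my actual `.pcd` files):

```python
import numpy as np

# Placeholder point cloud standing in for the real custom data
xyz = np.random.rand(1000, 3).astype(np.float32)
labels = np.zeros((len(xyz),), dtype=np.int32)

# Variant 1: 'feat' is a real array -> RuntimeError about CPU/GPU (addmm)
data_with_feat = {"point": xyz, "feat": xyz.copy(), "label": labels}

# Variant 2: 'feat' is None -> TypeError: expected Tensor ... got NoneType
data_without_feat = {"point": xyz, "feat": None, "label": labels}
```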

Steps to reproduce the bug

    import os
    import glob
    import numpy as np
    import laspy
    import open3d as o3d
    import open3d.ml as _ml3d
    import open3d.ml.torch as ml3d  # or open3d.ml.tf as ml3d

    # Define the model we used in the semantic segmentation task
    cfg_file = r"F:\Anaconda\envs\py3_pytorch\Lib\site-packages\open3d\_ml3d\configs\pointtransformer_s3dis.yml"
    cfg = _ml3d.utils.Config.load_from_file(cfg_file)
    model = ml3d.models.PointTransformer(**cfg.model)

    # Add paths to the S3DIS dataset and to my own custom dataset
    cfg.dataset['dataset_path'] = './dataset'
    cfg.dataset['custom_dataset_path'] = './pcds'

    # Load the datasets
    dataset = ml3d.datasets.S3DIS(cfg.dataset.pop('dataset_path', None), **cfg.dataset)
    custom_dataset = load_custom_dataset(cfg.dataset.pop('custom_dataset_path', None))
    print("Load dataset done!")
    pipeline = ml3d.pipelines.SemanticSegmentation(model, dataset=dataset, device="gpu", **cfg.pipeline)

    # Load a pre-trained model weight file
    ckpt_folder = "./checkpoint/"
    os.makedirs(ckpt_folder, exist_ok=True)
    ckpt_path = ckpt_folder + "ckpt_00020.pth"
    pipeline.load_ckpt(ckpt_path=ckpt_path)
    print(pipeline.device)
    print("pipeline done")

    # Load test data from the dataset
    # test_split = dataset.get_split("test")
    # data = test_split.get_data(0)

    # Load test data from the custom dataset
    pc_idx = 0  # change the index to get a different point cloud
    print(custom_dataset[pc_idx])
    data, pcd = prepare_point_cloud_for_inference(custom_dataset[pc_idx])
    print("pcd done")

    # Run inference
    result = pipeline.run_inference(data)
    print("inference done")

def load_custom_dataset(dataset_path):
    print("Loading custom dataset")
    pcd_paths = glob.glob(dataset_path + "/*.pcd")
    pcds = []
    for pcd_path in pcd_paths:
        pcds.append(o3d.io.read_point_cloud(pcd_path))
    return pcds

def prepare_point_cloud_for_inference(pcd):
    # Remove NaNs and infinity values
    pcd.remove_non_finite_points()
    # Extract the xyz points
    xyz = np.asarray(pcd.points)
    # Set the points to the correct format for inference
    data = {"point": xyz, 'feat': None,
            'label': np.zeros((len(xyz),), dtype=np.int32)}

    return data, pcd
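One variant I tried, on the assumption that PointTransformer expects a per-point feature array rather than `None`, is to always fill `feat` with float32 data (the point colors when available, otherwise the coordinates themselves). The helper name `build_inference_data` is hypothetical; it only packs NumPy arrays into the dict layout used above:

```python
import numpy as np

def build_inference_data(xyz, colors=None):
    """Hypothetical helper: pack points (and optional per-point colors)
    into the dict layout used for inference above, making sure 'feat'
    is never None. If no colors are given, the xyz coordinates are
    reused as features."""
    xyz = np.asarray(xyz, dtype=np.float32)
    if colors is not None:
        feat = np.asarray(colors, dtype=np.float32)
    else:
        feat = xyz.copy()
    return {
        "point": xyz,
        "feat": feat,
        "label": np.zeros((len(xyz),), dtype=np.int32),
    }
```

With an Open3D point cloud this would be called as `build_inference_data(np.asarray(pcd.points), np.asarray(pcd.colors) if pcd.has_colors() else None)`. Note that with a non-`None` `feat` this path still runs into the CPU/GPU `RuntimeError` described above.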

Error message

RuntimeError: Tensor for 'out' is on CPU, Tensor for argument #1 'self' is on CPU, but expected them to be on GPU (while checking arguments for addmm)

TypeError: expected Tensor as element 0 in argument 0, but got NoneType

Expected behavior

No response

Open3D, Python and System information

- Operating system: Windows 10 64-bit
- Python version: Python 3.6.12
- Open3D version: open3d-0.14
- System type: arm64
- Is this remote workstation?: yes
- How did you install Open3D?: build from source
- Compiler version (if built from source): gcc 7.5

Additional information

I have not tested whether other models (e.g. KPConv) show the same problem; it may also be because the version of Open3D I have installed is not the latest.

Arno-Tu · Feb 19 '23 12:02