
Input dimensions and feature handling issues in S3DIS dataset model training

gitKincses opened this issue 5 months ago

Hi! I'm training a model using the S3DIS dataset, and I have some questions regarding input dimensions and features.

  1. What does in_features_dim represent? It is set to 5, but I'm unsure what it includes, given that each S3DIS point carries 6 values: x, y, z, red, green, and blue.
  2. On line 167 of trainer.py, the loop for batch in training_loader: yields a batch whose batch.features tensor has shape (61700, 5), which is then cloned into x in architectures.py. However, I can't work out how these batches are built, since I can't find any tensor of shape (*, 5) in the training_loader.
  3. I've been debugging to figure out which values I need to modify to adapt the code to the DALES dataset, whose points carry x, y, z, and intensity. I haven't found the correct changes yet, and I encounter the following error: File "/home/hqu/KPConv-PyTorch/models/blocks.py", line 372, in forward: kernel_outputs = torch.matmul(weighted_features, self.weights) RuntimeError: Expected size for first two dimensions of batch2 tensor to be: [15, 3] but got: [15, 4].
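For context on questions 1 and 3: in KPConv-style pipelines the per-point feature tensor is usually a constant-ones channel concatenated with whatever extra attributes the dataset provides; for S3DIS this is commonly ones + RGB + the absolute z height, which is how 6 raw point values become 5 feature channels. The RuntimeError in point 3 then looks like a mismatch between the configured in_features_dim (which sizes the first layer's weights) and the actual width of the stacked features. A minimal sketch, assuming a hypothetical stack_features helper (not code from this repo):

```python
import torch

def stack_features(points, colors=None, intensity=None, use_height=True):
    """Hypothetical sketch of how KPConv-style datasets assemble features."""
    n = points.shape[0]
    feats = [torch.ones(n, 1)]            # channel 0: constant ones
    if colors is not None:
        feats.append(colors)              # 3 channels: R, G, B
    if intensity is not None:
        feats.append(intensity.view(n, 1))  # 1 channel: intensity
    if use_height:
        feats.append(points[:, 2:3])      # 1 channel: absolute z coordinate
    return torch.cat(feats, dim=1)

pts = torch.rand(100, 3)

# S3DIS-like: ones + RGB + height -> 1 + 3 + 1 = 5 channels,
# matching in_features_dim = 5 in the S3DIS config.
s3dis_like = stack_features(pts, colors=torch.rand(100, 3))

# DALES-like: ones + intensity -> 1 + 1 = 2 channels; the config's
# in_features_dim would have to be set to 2 to match.
dales_like = stack_features(pts, intensity=torch.rand(100), use_height=False)

# The first layer multiplies features by weights whose second-to-last
# dimension equals in_features_dim; if the stacked width differs, matmul
# raises a shape error like the one quoted above.
assert s3dis_like.shape == (100, 5)
assert dales_like.shape == (100, 2)
```

If the DALES adaptation stacks 4 channels while the config (and hence self.weights) still expects 3, that would produce exactly a "[15, 3] but got: [15, 4]"-style mismatch; checking that the dataset's feature stacking and in_features_dim agree is a reasonable first step.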

gitKincses, Sep 11 '24 06:09