training with custom dataset
Hello, I trained on a custom dataset and found that the results were not good; the same situation as in [issue #14] appeared.
I have 1500 samples in total, each of size 16*16, with num_points=50000 and the original parameters. The loss plateaus at around 0.8. Is this due to insufficient data, or are there parameters I should change that I have overlooked?
The first image is the diffusion result, the second is the ground truth.
Hi, could you share the model output, the ground truth, and the conditioning point cloud you used? One thing I can already mention: your ground truth data looks quite sparse, so you may need to tune the parameters to work on your data.
Thank you. I tried tuning uncond_prob, but it didn't work.
The uncond_prob parameter may not be the best one to tune. First, you can set the -T parameter at inference to 1000 so that the scene completion runs over all 1000 denoising steps (no re-training is needed for this). You can also try changing the -s conditioning weight, which likewise affects inference without re-training.
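For intuition on why -s changes the output without re-training: a conditioning weight of this kind is typically applied as classifier-free guidance, mixing the conditional and unconditional noise predictions at each denoising step. A minimal sketch of that idea (function name and exact convention are hypothetical, not the repo's actual code):

```python
import numpy as np

def guided_eps(eps_cond, eps_uncond, s):
    """Classifier-free guidance: combine the conditional and unconditional
    noise predictions. s = 1.0 recovers the purely conditional prediction;
    s > 1 amplifies the conditioning signal."""
    return eps_uncond + s * (eps_cond - eps_uncond)

# Toy example: with a zero unconditional prediction, the weight s
# simply scales the conditional prediction.
eps_c = np.array([1.0, 2.0])
eps_u = np.zeros(2)
guided = guided_eps(eps_c, eps_u, 6.0)
```

Since this mixing happens purely at sampling time, sweeping -s is cheap compared to changing any training parameter.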
For training, you can try changing beta_start and beta_end; those are the parameters that most affect training performance.
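To see what beta_start and beta_end control, here is a sketch of a standard linear noise schedule (the concrete values below are hypothetical; check your own config for the actual ones):

```python
import numpy as np

# Hypothetical schedule values; substitute the ones from your config.
beta_start, beta_end, T = 3.5e-5, 0.007, 1000

# Linear schedule: beta_t is the variance of the noise added at step t.
betas = np.linspace(beta_start, beta_end, T)

# alpha_bar_t measures how much of the clean signal survives at step t;
# a larger beta_end drives it toward zero faster (more aggressive noising).
alphas_cumprod = np.cumprod(1.0 - betas)
```

If alpha_bar at the final step is too large for your data (e.g. because your point clouds have a different scale than the original dataset), the model never sees fully noised samples during training, which can hurt completion quality.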
I tried adjusting the betas, and it did have an effect on training. But my output is still bad: the predicted point cloud during training looks good, but it is not good during inference (from diff_completion_pipeline.py, with -T 1000 / -s 6.0), and changing -T and -s does not improve it significantly.
You can check the -T and -s parameters used during training; they may be different. Also, make sure that the betas used in diff_completion_pipeline.py are the same as the ones used in your new training, since the pipeline uses the betas as well.
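A quick way to catch a train/inference mismatch is to reconstruct both schedules and compare them directly (the values below are placeholders; fill in the ones from your training config and from diff_completion_pipeline.py):

```python
import numpy as np

# Placeholder schedules; substitute the beta_start / beta_end / T values
# from your training config and from diff_completion_pipeline.py.
train_betas = np.linspace(3.5e-5, 0.007, 1000)
infer_betas = np.linspace(3.5e-5, 0.007, 1000)

# Training and inference must denoise with the same schedule;
# a mismatch silently degrades completion quality.
schedules_match = bool(np.allclose(train_betas, infer_betas))
```

This check takes seconds and rules out one of the most common causes of "good during training, bad during inference".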
Hello, I also used my own point cloud data for training, but I didn't use map_from_scans.py to generate the ground truth. Instead, I directly converted the point cloud into a NumPy array. Later, I encountered the following issue. Have you experienced a similar situation?
Hi! My guess is that after the transformations done in this line, the output point cloud may be empty. You can check this by commenting out this line.
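Besides commenting the line out, you can guard the dataloader with an explicit check so an empty result fails loudly instead of crashing later. A minimal sketch (the helper name is hypothetical):

```python
import numpy as np

def assert_nonempty(points, name="point cloud"):
    """Fail loudly if a transform (e.g. cropping or voxel filtering)
    left zero points, instead of silently passing an empty sample on."""
    if points.shape[0] == 0:
        raise ValueError(f"{name} is empty after transformation")
    return points
```

Calling this right after the suspect transform pinpoints exactly which sample (and which step) produces the empty cloud.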
I will now close this issue. If needed, feel free to reopen it.