AllenXiang

17 comments by AllenXiang

Hi, we use four NVIDIA RTX 2080 Ti GPUs for training; the batch size (at least 64) is set to fill the GPU memory, and training takes about 300-400 epochs...

Hi, we use [keyshot](https://www.keyshot.com/) for visualization.

The format is .obj, more details are in the [visualization folder](https://github.com/AllenXiangX/SnowflakeNet/tree/main/visualization#visualization-code-for-point-splittings).

Hi, the scaling is an augmentation step that helps the network converge faster on the c3d dataset. Only the training and validation sets of c3d are scaled, which...

I'm not sure about the difference. You can train the network without scaling and check its performance on these two splits, or better, use the PCN dataset for...
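The scaling augmentation discussed above can be sketched as a single random uniform scale factor applied to a whole cloud. This is a minimal stand-alone illustration; the scale range and the plain-list representation are illustrative assumptions, not the repo's actual implementation:

```python
import random

def random_scale(points, low=0.8, high=1.2):
    """Scale an (n, 3) list of xyz points by one random factor.

    `low`/`high` are illustrative bounds, not the values used in the repo.
    """
    s = random.uniform(low, high)
    return [[coord * s for coord in p] for p in points]

# With low == high the factor is fixed, so the cloud is unchanged:
print(random_scale([[1.0, 2.0, 3.0]], low=1.0, high=1.0))  # [[1.0, 2.0, 3.0]]
```

Applying one factor per cloud (rather than per point) preserves the object's shape while varying its absolute size, which is what makes it a useful augmentation for completion networks.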

Hi, you can change the network's parameters (Nc, N0, and up_factors [#](https://github.com/AllenXiangX/SnowflakeNet/blob/93e7151610765e7e2b41ace2d03c8750f0b6c80c/models/model.py#L158), [#](https://github.com/AllenXiangX/SnowflakeNet/blob/93e7151610765e7e2b41ace2d03c8750f0b6c80c/core/train_pcn.py#L55)) according to your needs.
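Since each entry of `up_factors` multiplies the point count at one splitting stage, the output resolutions follow directly from N0 and the factor list. A small sketch of that relationship (the example numbers are illustrative, not the repo's defaults):

```python
def output_sizes(n0, up_factors):
    """Point count after each splitting stage, starting from the coarse
    cloud of n0 points; every factor multiplies the previous count."""
    sizes = [n0]
    for f in up_factors:
        sizes.append(sizes[-1] * f)
    return sizes

# e.g. a 512-point coarse cloud with up_factors [4, 8]:
print(output_sizes(512, [4, 8]))  # [512, 2048, 16384]
```

So to target a different final resolution, pick N0 and up_factors whose product equals the number of points you need.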

Hi, sorry for the late reply. I think the easiest way is to fine-tune the pre-trained model on your dataset, and it's not necessary to change the file format as...

Hi, thank you for your interest in this work. You may need to write a new script (Python file) to complete your own data. The recommended steps are as follows:...

Hi, sorry for the late response. The tensor shape required by SnowflakeNet is (b, n, 3), where b is the batch size, n is the number of points, and 3...
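A minimal sketch of preparing a single cloud in that (b, n, 3) layout; the helper name is hypothetical, and only the shape convention comes from the comment above:

```python
import torch

def to_model_input(points):
    """Turn an (n, 3) array-like of xyz points into a (1, n, 3) batch."""
    t = torch.as_tensor(points, dtype=torch.float32)
    if t.dim() == 2:
        t = t.unsqueeze(0)  # add the batch dimension -> (1, n, 3)
    assert t.shape[-1] == 3, "last dimension must be the xyz coordinates"
    return t

x = to_model_input(torch.zeros(2048, 3))
print(tuple(x.shape))  # (1, 2048, 3)
```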

In the function `complete_point_cloud`, delete the line `point_cloud = input_point_cloud.permute(0, 2, 1).contiguous()`, because the input tensor shape is (b, n, c), not (b, c, n).
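A minimal sketch of the corrected function under that assumption. The model here is a stand-in (an identity function) just to show the shape flowing through unchanged; the real SnowflakeNet forward pass is assumed to accept a (b, n, 3) tensor directly:

```python
import torch

def complete_point_cloud(model, input_point_cloud):
    # input_point_cloud already has shape (b, n, 3), which is what the
    # model expects, so the permute line is simply removed.
    # Removed: point_cloud = input_point_cloud.permute(0, 2, 1).contiguous()
    return model(input_point_cloud)

identity = lambda x: x  # stand-in for the real model
out = complete_point_cloud(identity, torch.zeros(1, 2048, 3))
print(tuple(out.shape))  # (1, 2048, 3)
```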