
About Evaluation

AnjieCheng opened this issue · 4 comments

Hi, thanks for releasing the code.

During training, SP-GAN uses a different data normalization approach (instance-wise normalization of the points to fit a unit ball) compared to prior works such as PointFlow (zero-mean per axis and unit variance globally). How is the data preprocessed during evaluation?
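
To make sure we are talking about the same thing, here is a minimal NumPy sketch of the two schemes as I understand them (the function and variable names are my own, not from either codebase):

```python
import numpy as np

def normalize_unit_ball(pc):
    # SP-GAN-style (as I understand it): center each shape, then scale
    # so that the farthest point lies on the unit sphere.
    centered = pc - pc.mean(axis=0, keepdims=True)
    scale = np.linalg.norm(centered, axis=1).max()
    return centered / scale

def normalize_pointflow(pc):
    # PointFlow-style (as described above): zero mean per axis, a single
    # global std over all coordinates; (m, s) are kept so the shape can
    # be de-normalized later.
    m = pc.mean(axis=0, keepdims=True)
    s = pc.reshape(-1).std()
    return (pc - m) / s, m, s
```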

Would you release the evaluation code, or share how you preprocess/post-process the point clouds before evaluation?

Thank you!

AnjieCheng · Aug 23 '21

The only processing is on the training point set (normalizing the points to fit a unit ball), which is a common operation in most point-cloud analysis pipelines. No other processing is needed before evaluation.

The evaluation is the same as in PointFlow and latent-GAN: https://github.com/stevenygd/PointFlow/tree/master/metrics
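
For example, those metrics can be called roughly like this (function names are taken from the linked repo; please double-check the exact signatures there):

```python
import torch
from metrics.evaluation_metrics import compute_all_metrics  # from the PointFlow repo

# sample_pcs / ref_pcs: (num_shapes, num_points, 3) tensors of generated
# vs. reference shapes, assumed to be at the same scale.
sample_pcs = torch.randn(100, 2048, 3).cuda()
ref_pcs = torch.randn(100, 2048, 3).cuda()
results = compute_all_metrics(sample_pcs, ref_pcs, batch_size=32)
print(results)  # MMD / COV / 1-NNA entries, per that repo's implementation
```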

liruihui · Aug 24 '21

Thank you for responding!

Since the model is trained on normalized data, the generated point clouds are also expected to be at the same normalized scale. If no other processing is done before evaluation, how can the evaluation metrics (e.g., Chamfer distance) be accurate? I see two possible solutions: (1) the test data is preprocessed the same way, i.e., normalized to a unit ball, or (2) the generated point clouds are de-normalized back to the original scale.

As shown in https://github.com/stevenygd/PointFlow/blob/master/test.py#L114, PointFlow de-normalizes the generated shapes back to the original scale before evaluation.
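
Concretely, solution 2 would look something like this sketch (assuming the per-shape mean `m` and scale `s` used for normalization were saved):

```python
def denormalize(pc, m, s):
    # Undo zero-mean / unit-scale normalization before computing metrics
    # against the raw test shapes (mirrors the linked test.py line).
    return pc * s + m
```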

Please correct me if there is any misunderstanding. Thank you!

AnjieCheng · Aug 24 '21

Hello, great work! I also have a question about the metrics. The paper doesn't seem to mention a test set, so after training on the training set, I compute the metrics between the generated results and the training set. I generated 1000 samples and evaluated them against 6000 shapes from the training set. The results are:

- COV: 9.08822661552
- MMD: 8.834443055093288
- JSD: 0.023300660444105503

As you can see, MMD is relatively close to the reported value, but COV is far off. I would like to know where my calculation might be going wrong.
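
For reference, this is my understanding of how MMD-CD and COV-CD are defined (a NumPy sketch of the definitions, not the repo's code):

```python
import numpy as np

def chamfer(a, b):
    # Symmetric Chamfer distance between point sets a: (N, 3), b: (M, 3).
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def mmd_cov(gen, ref):
    # gen, ref: lists of point clouds.
    D = np.array([[chamfer(g, r) for r in ref] for g in gen])
    mmd = D.min(axis=0).mean()                   # avg distance from each ref to its nearest gen
    cov = len(set(D.argmin(axis=1))) / len(ref)  # fraction of refs matched by some gen
    return mmd, cov
```

(I realize that with 1000 generated vs. 6000 reference shapes, COV is capped at 1000/6000 ≈ 16.7% by construction, so the set sizes may matter here; if I remember correctly, the PointFlow protocol uses equally sized sample and reference sets.)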

CRISZJ · Aug 24 '21

> (quoting @AnjieCheng's comment above about de-normalizing before evaluation)

Hi @AnjieCheng, have you been able to get the numbers reported in the paper?

TiankaiHang · May 30 '22