Why is EMD much larger than CD?
Hi @jackd, thanks for your help months ago. I finally came up with something, but I still have a question. For two point clouds, I get an EMD value roughly 1e6 times larger than the CD. Maybe the normalization influences the ratio, but the EMD is still much larger than the CD. How can that happen? As I understand it, CD finds the nearest point in S2 for each point in S1 and sums the distances over all point pairs, while EMD finds a point-to-point mapping between the two clouds and likewise sums those distances. Am I right?
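For reference, this is what I mean by the two metrics, written as a brute-force numpy/scipy sketch. The function names are just illustrative, it follows my description above (no squaring), and the EMD here uses an exact Hungarian matching on equal-sized clouds rather than the approximate CUDA op:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def chamfer(s1, s2):
    """Nearest-neighbour distances from S1 to S2 and back, summed."""
    d = cdist(s1, s2)                      # (n1, n2) pairwise distances
    return d.min(axis=1).sum() + d.min(axis=0).sum()

def emd_matched(s1, s2):
    """Distances under an optimal one-to-one matching (equal-sized clouds), summed."""
    d = cdist(s1, s2)
    rows, cols = linear_sum_assignment(d)  # exact Hungarian matching
    return d[rows, cols].sum()
```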
Conceptually you're correct, but importantly Chamfer distance uses the squared distances, while EMD uses the actual distances. A quick glance at my code reveals a possible factor-of-2 discrepancy (the numbers reported in the paper are the result of what I said I did in the paper, but if I generated the results without the factor of 2 and then applied it to the result while cleaning up the paper, I may not have updated the code).
If you wrote a modified Chamfer that summed actual distances (rather than squares), I'm fairly sure that value would be less than or equal to the EMD, but I wouldn't expect the difference to be enormous except in very rare situations.
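To give a feel for how much the squaring alone matters: for normalised clouds the nearest-neighbour distances are well below 1, so squaring shrinks them substantially. A throwaway comparison on random clouds (illustrative only, not the evaluation code):

```python
import numpy as np
from scipy.spatial.distance import cdist

np.random.seed(0)
s1 = np.random.uniform(-0.5, 0.5, size=(1024, 3))
s2 = np.random.uniform(-0.5, 0.5, size=(1024, 3))

d = cdist(s1, s2)
nn12, nn21 = d.min(axis=1), d.min(axis=0)             # nearest-neighbour distances each way
cd_squared = (nn12 ** 2).mean() + (nn21 ** 2).mean()  # Chamfer with squared distances
cd_actual = nn12.mean() + nn21.mean()                 # modified Chamfer with actual distances
print(cd_squared, cd_actual)  # the squared version is much smaller, since the distances are << 1
```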
The results reported in the paper differ by roughly a factor of 5-10 (EMD larger than Chamfer).
I'm afraid I won't have much time to look into this right now: I'm heading overseas in a week and away for a month. It sounds like either a problem with weight loading or with the evaluation system, which I've come to hate (I do things very differently now; live and learn :S). If you can provide an example where the results are as different as you say using the [numpy implementation](https://github.com/jackd/template_ffd/blob/master/metrics/np_impl.py), I'll look into it. If the error is somewhere between the model output and the evaluation system, it might be best just to write a basic loop that does the computation using the numpy implementation, roughly as sketched below.
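Something like this is what I have in mind. It's a rough sketch only: `np_chamfer`, `np_emd` and `get_cloud_pairs` are placeholder names, not the actual functions in metrics/np_impl.py.

```python
import numpy as np
# placeholder imports: substitute whatever metrics/np_impl.py actually exports
from metrics.np_impl import np_chamfer, np_emd  # hypothetical names

chamfer_vals = []
emd_vals = []
for inferred, ground_truth in get_cloud_pairs():  # hypothetical loader over (inferred, ground truth) clouds
    chamfer_vals.append(np_chamfer(inferred, ground_truth))
    emd_vals.append(np_emd(inferred, ground_truth))

print('mean chamfer:', np.mean(chamfer_vals))
print('mean emd:', np.mean(emd_vals))
```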
I know that CD is a squared distance while EMD is an actual distance. I compute EMD following the implementation here, with the following code:
match = approx_match(xyz1, xyz2)  # approximate optimal matching between the two clouds
emd = tf.reduce_mean(match_cost(xyz1, xyz2, match))  # mean over the batch of the per-cloud matching cost
Many other papers, such as Pixel2Mesh, 3D-LMNET and DensePCR, all follow this implementation.
The evaluation code from Pixel2Mesh also contains lines like the following in eval.py:
cd = (sum_cd[item] / number) * 1000  # cd is the mean of all distances; cd is L2
emd = (sum_emd[item] / number) * 0.01  # emd is the sum of all distances; emd is L1
The EMD comes out around 1e5 times larger than the CD.
As for your implementation, I cannot find a pyemd.py in your code here, only this import:
from pyemd import emd
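If pyemd refers to the pip package of the same name, I guess it would be applied to two clouds by treating each point as an equal unit of mass and using the pairwise distances as the ground metric, roughly like this (just my guess at a sketch, not your actual np_impl.py code):

```python
import numpy as np
from pyemd import emd
from scipy.spatial.distance import cdist

def cloud_emd(s1, s2):
    """EMD between two clouds, each point carrying equal mass (my guess, not the repo's code)."""
    n1, n2 = len(s1), len(s2)
    points = np.concatenate([s1, s2], axis=0)
    # cloud 1's mass sits on the first n1 "bins", cloud 2's on the remaining n2
    h1 = np.concatenate([np.ones(n1) / n1, np.zeros(n2)]).astype(np.float64)
    h2 = np.concatenate([np.zeros(n1), np.ones(n2) / n2]).astype(np.float64)
    distance_matrix = cdist(points, points).astype(np.float64)
    return emd(h1, h2, distance_matrix)
```

Is that roughly what np_impl.py does?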