CSRNet-pytorch
About getting point coordinates from the resulting density map
Thank you for releasing the code. The net uses the density map as the ground truth. I wonder, is it possible to generate the point coordinates from the density map?
Hi, just out of curiosity, why don't you use the original ground-truth annotation files (.mat files) if you need the point coordinates? I am afraid that de-blurring the density map could be a lot more difficult than just reading the annotations.
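For what it's worth, here is a minimal sketch of reading the point annotations directly from a .mat file, assuming the ShanghaiTech-style layout used with this repo; the file name and key nesting are illustrative and may differ for other datasets:

```python
import scipy.io as io

# Load a ShanghaiTech-style annotation file (assumed layout; adjust
# the keys if your .mat files are structured differently).
mat = io.loadmat("GT_IMG_1.mat")

# The Nx2 array of (x, y) head coordinates is typically nested
# under 'image_info' for ShanghaiTech.
points = mat["image_info"][0, 0][0, 0][0]
print(points.shape)  # (num_heads, 2)
```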
I think what you are asking has something to do with image de-blurring, Gaussian deconvolution, or image sharpening.
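If you did want to try the deconvolution route, a rough sketch might look like the following; it assumes you know the Gaussian kernel that generated the density map (CSRNet also uses geometry-adaptive kernels, where this would not hold), and the function name and parameters are made up for illustration:

```python
import numpy as np
from skimage.restoration import richardson_lucy

def sharpen_density(density, sigma=4.0, ksize=15, iterations=30):
    """Hypothetical sketch: deconvolve a density map with the Gaussian
    kernel assumed to have blurred the point annotations."""
    # Build a normalized Gaussian PSF matching the assumed blur kernel.
    ax = np.arange(ksize) - ksize // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    psf /= psf.sum()
    # Richardson-Lucy deconvolution concentrates the blobs back toward
    # point-like peaks; it will not recover exact coordinates.
    return richardson_lucy(density, psf, iterations)
```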
@BedirYilmaz Yeah, I'm focusing on the annotated head locations of the crowd; the most naive idea is to get the point coordinates from the predicted density map.
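A hedged sketch of that naive idea, using simple local-maxima detection (the helper name, neighbourhood size, and threshold below are made up and would need tuning per dataset):

```python
import numpy as np
from scipy.ndimage import maximum_filter

def density_map_to_points(density, size=5, threshold=0.05):
    """Hypothetical helper: estimate head coordinates as local maxima
    of a (ground-truth or predicted) density map."""
    # A pixel counts as a peak if it equals the maximum of its
    # neighbourhood and exceeds an absolute threshold.
    peaks = (maximum_filter(density, size=size) == density) & (density > threshold)
    ys, xs = np.nonzero(peaks)
    return list(zip(xs.tolist(), ys.tolist()))  # (x, y) coordinates
```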
I doubt that the head locations you could recover this way would be accurate enough, even when working on a ground-truth density map.
It would be even harder with an estimated density map, since it is a lot noisier.
I don't think I can help you further on this one, but the following might give you an idea: http://crcv.ucf.edu/projects/crowd/