
[Feature] Other distances besides Euclidean

Open akshayka opened this issue 4 years ago • 4 comments

The quality of an embedding in PyMDE is judged by the collection of Euclidean distances between pairs of embedding vectors.

Euclidean distance is natural for visualization, since it is the distance that humans use in the real world. It is also closely related to the standardization constraint (which puts an upper bound on the sum of squared Euclidean distances between embedding vectors).

There is nothing in the underlying optimization algorithm or code that requires the distances to be Euclidean, and the code could easily be extended to support other distances.

If this is something that you actively want, please react with a :+1: on this post.

akshayka avatar Apr 22 '21 18:04 akshayka

Hi @akshayka

Thanks for asking for feedback. On my datasets, Minkowski distance with p < 1 seems to be a better choice for dealing with the issue of distance concentration. I noticed that PyNNDescent supports several metrics, including custom ones.

ivan-marroquin avatar Jun 17 '21 20:06 ivan-marroquin
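For reference, the Minkowski distance Ivan mentions generalizes Euclidean (p = 2) and Manhattan (p = 1) distance. For p < 1 it is no longer a true metric (the triangle inequality fails), but the fractional exponent is sometimes used to counter distance concentration in high dimensions. A minimal NumPy sketch, illustrative only and not part of PyMDE:

```python
import numpy as np

def minkowski(x, y, p):
    """Minkowski distance: (sum_i |x_i - y_i|**p) ** (1/p).

    For p >= 1 this is a metric; for 0 < p < 1 it is a
    "fractional" distance that violates the triangle inequality.
    """
    return float(np.sum(np.abs(x - y) ** p) ** (1.0 / p))

a = np.array([0.0, 0.0])
b = np.array([1.0, 1.0])
print(minkowski(a, b, p=2))    # Euclidean: sqrt(2)
print(minkowski(a, b, p=1))    # Manhattan: 2.0
print(minkowski(a, b, p=0.5))  # fractional: (1 + 1)**2 = 4.0
```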

@ivan-marroquin ,

To clarify, do you want to use a different metric to measure the k-nearest neighbors of the original data? Or do you want to use a different metric to measure distances in the embedding?

I'm guessing the former, because you mentioned PyNNDescent. But just thought I'd double check.

akshayka avatar Jul 03 '21 16:07 akshayka

Hi @akshayka

Correct, I believe it would be beneficial to use PyNNDescent to find the k-nearest neighbors, so that PyMDE can then compute a lower-dimensional embedding that preserves those neighbors.

Ivan

ivan-marroquin avatar Jul 04 '21 17:07 ivan-marroquin
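The workflow Ivan describes (find neighbors under a non-Euclidean metric, then let PyMDE preserve them) can be sketched with a brute-force NumPy k-NN; in practice PyNNDescent would replace this step for large datasets. The `knn_edges` helper and the p = 0.5 choice are illustrative assumptions, not PyMDE API:

```python
import numpy as np

def knn_edges(data, k, p=0.5):
    """Return an (n*k, 2) array of (i, j) pairs, where j ranges over
    the k nearest neighbors of item i under Minkowski-p distance.

    Brute force, O(n^2 d); a library like PyNNDescent computes this
    approximately and scalably, including for custom metrics.
    """
    diffs = np.abs(data[:, None, :] - data[None, :, :])  # (n, n, d)
    dists = np.sum(diffs ** p, axis=-1) ** (1.0 / p)     # (n, n)
    np.fill_diagonal(dists, np.inf)                      # exclude self-matches
    nbrs = np.argsort(dists, axis=1)[:, :k]              # (n, k)
    rows = np.repeat(np.arange(data.shape[0]), k)
    return np.stack([rows, nbrs.ravel()], axis=1)

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 8))
edges = knn_edges(data, k=5)
print(edges.shape)  # (500, 2)
```

The resulting edge list could then be handed to PyMDE's `MDE` class (as a tensor of edges), so the embedding preserves neighbors found under the fractional metric while the embedding-space distances themselves remain Euclidean.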

Can you please provide the Tanimoto distance (often used to compare molecular fingerprints) and Gower's distance (for mixed data records like patient data) as additional metrics in PyMDE?

Thanks, Torsten

schinto avatar Aug 30 '21 08:08 schinto
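For readers following along, both requested metrics are simple to state: Tanimoto distance on binary fingerprints is 1 minus the ratio of shared on-bits to on-bits in either vector, and Gower's distance averages per-feature dissimilarities, scaling numeric features by their range and scoring categorical features 0/1. A NumPy sketch; these helpers are illustrative and not part of PyMDE:

```python
import numpy as np

def tanimoto_distance(a, b):
    """1 - |a AND b| / |a OR b| for binary fingerprint vectors."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0  # both fingerprints empty: define distance as 0
    return 1.0 - np.logical_and(a, b).sum() / union

def gower_distance(x, y, numeric_ranges, is_numeric):
    """Mean per-feature dissimilarity for mixed records.

    Numeric features: |x - y| / range; categorical: 0 if equal, else 1.
    numeric_ranges holds each feature's range over the whole dataset
    (entries for categorical features are ignored).
    """
    parts = []
    for xi, yi, r, num in zip(x, y, numeric_ranges, is_numeric):
        if num:
            parts.append(abs(xi - yi) / r if r > 0 else 0.0)
        else:
            parts.append(0.0 if xi == yi else 1.0)
    return float(np.mean(parts))

# Fingerprints sharing 2 of 3 on-bits: distance 1/3.
print(tanimoto_distance([1, 1, 0, 1], [1, 0, 0, 1]))

# Patient records (age, smoker); age range over the dataset is 40.
# Mean of |20 - 40| / 40 = 0.5 and 0 -> 0.25.
print(gower_distance((20, "yes"), (40, "yes"),
                     numeric_ranges=(40, None), is_numeric=(True, False)))
```

Either function could in principle serve as a custom metric for the neighbor-graph step (e.g. via PyNNDescent's custom-metric support) even before PyMDE exposes them natively.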