Zhang Le

17 comments of Zhang Le

@Leavingseason may have some ideas about this.

I think it depends on many factors like:
* data availability - interaction data, feature-typed data, graph data.
* problem setting - recall, ranking, etc.
* computing engine - big...

The repository is built collaboratively by developers from all across the world, so you can definitely contribute, as long as it complies with the [guide](https://github.com/microsoft/recommenders/wiki/Coding-Guidelines). 👯

The Spark MLlib documentation indeed says that MAP is calculated based on the actual relevant items. However, I also found that the formula used in precision@k on the same...
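For context, here is a minimal pure-Python sketch of the two denominators as I read them from the MLlib docs (the function names are illustrative, not the MLlib API):

```python
def precision_at_k(predicted, actual, k):
    """Precision@k in the MLlib style: hits in the top k divided by k,
    regardless of how many items are actually relevant."""
    actual_set = set(actual)
    hits = sum(1 for item in predicted[:k] if item in actual_set)
    return hits / k

def average_precision(predicted, actual):
    """MAP-style average precision: running precision at each hit,
    normalized by the number of *actual* relevant items."""
    actual_set = set(actual)
    hits, score = 0, 0.0
    for i, item in enumerate(predicted):
        if item in actual_set:
            hits += 1
            score += hits / (i + 1)
    return score / len(actual) if actual else 0.0
```

With `predicted = [1, 2, 3]` and `actual = [1]`, `precision_at_k(..., k=3)` gives 1/3 while `average_precision` gives 1.0, which shows how differently the two metrics normalize.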

Perhaps another way is to use `*args` and then check for the existence of the rating column in `args` when `relevancy_method` is not "top_k".
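A rough sketch of the idea (the signature and the `col_rating` keyword are assumptions for illustration, not the evaluator's actual API):

```python
def evaluate(relevancy_method="top_k", *args, **kwargs):
    """Accept extra arguments loosely and only require the rating
    column when the relevancy method actually needs it.
    'col_rating' is a hypothetical keyword name."""
    if relevancy_method != "top_k" and "col_rating" not in kwargs:
        raise ValueError(
            f"A rating column is required when relevancy_method "
            f"is {relevancy_method!r}"
        )
    ...
```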

@almudenasanz Thanks for the revision. Yeah, it is necessary to have an integration test, but we can put it into the next PR if that works better for you.

@riosv I guess the example on the wiki page is different from our case in terms of how "relevance" is defined. That is, on the wiki page, the document has...

@deep-chakraborty in your example, the NDCG is scored as 1.0 because every prediction finds a relevant item in the ground truth, no matter how the order changes. If...
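A small self-contained sketch (binary relevance, in the style of the MLlib metric, though not the MLlib code itself) makes this concrete:

```python
import math

def ndcg_at_k(predicted, actual, k):
    """Binary-relevance NDCG: each predicted item found in the ground
    truth earns a gain of 1, discounted by log2 of its rank."""
    actual_set = set(actual)
    dcg = sum(
        1.0 / math.log2(i + 2)
        for i, item in enumerate(predicted[:k])
        if item in actual_set
    )
    ideal_hits = min(len(actual_set), k)
    idcg = sum(1.0 / math.log2(i + 2) for i in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0

# Any permutation of the ground truth scores 1.0 under binary relevance:
print(ndcg_at_k([1, 2, 3], [3, 2, 1], k=3))  # 1.0
print(ndcg_at_k([3, 1, 2], [3, 2, 1], k=3))  # 1.0
```

Because every predicted item is a hit, the DCG equals the ideal DCG for any ordering, so the ratio stays at 1.0.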

@deep-chakraborty the implementation was from Spark MLlib, as stated in the docstring. For your question, I think that is a known limitation of NDCG. See the following from the [wiki page](https://en.wikipedia.org/wiki/Discounted_cumulative_gain)...
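For reference, the standard definitions from that article, with $rel_i$ the graded relevance of the item at rank $i$, are:

$$\mathrm{DCG@}k = \sum_{i=1}^{k} \frac{rel_i}{\log_2(i+1)}, \qquad \mathrm{NDCG@}k = \frac{\mathrm{DCG@}k}{\mathrm{IDCG@}k}$$

With binary relevance ($rel_i \in \{0, 1\}$), whenever every predicted item is relevant, $\mathrm{DCG@}k$ equals $\mathrm{IDCG@}k$ for any ordering, so NDCG is 1.0 regardless of rank.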