
Machine Learning Evaluation Metrics

10 MLmetrics issues, sorted by recently updated

When trying to compute R-squared from predicted and actual values, the results from R2_Score() did not match other methods (they were very large and negative). Here is some R code to...
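
For context, here is a minimal sketch of how one might cross-check `R2_Score()` against a manual 1 − SSE/SST computation. The dummy data is illustrative, not from the original report, and the comment about argument order is an assumption about a common source of such discrepancies rather than a confirmed diagnosis of this issue.

```r
library(MLmetrics)

set.seed(1)
actual    <- rnorm(100, mean = 10)
predicted <- actual + rnorm(100, sd = 0.5)

# Manual R-squared: 1 - (residual sum of squares / total sum of squares)
sse <- sum((actual - predicted)^2)
sst <- sum((actual - mean(actual))^2)
manual_r2 <- 1 - sse / sst

# MLmetrics takes the predictions first and the ground truth second;
# swapping the two arguments is one way to end up with surprising values.
pkg_r2 <- R2_Score(y_pred = predicted, y_true = actual)

c(manual = manual_r2, MLmetrics = pkg_r2)
```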

When I use AUC with more than 100k observations, the AUC calculation has an integer overflow and returns NA; see also [this](https://stackoverflow.com/questions/28613635/r-metrics-auc-error-message) Stack Overflow question. For reproducibility: `set.seed(15); N ...`
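
The original reproduction code is truncated above; the sketch below is a guess at how the overflow might be triggered, with `N` and the simulated data chosen for illustration only.

```r
library(MLmetrics)

set.seed(15)
N <- 200000   # large enough that n_positive * n_negative exceeds .Machine$integer.max
y_true <- rbinom(N, size = 1, prob = 0.5)
y_pred <- runif(N)

# On affected versions, the rank-based AUC computation overflows R's
# 32-bit integers for inputs of this size and returns NA with a warning.
AUC(y_pred = y_pred, y_true = y_true)
```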

Hi, I'm wondering how you calculated the F1 score. Wikipedia says: ![image](https://user-images.githubusercontent.com/35657106/92929396-23e19e00-f449-11ea-8b80-d0fadddb7002.png) Here is how I would calculate F1 (using some dummy data), yet I get a different value using the Wikipedia...
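
The dummy data from the issue is not shown, so here is a generic sketch comparing a by-hand F1 (the Wikipedia definition: harmonic mean of precision and recall) with `F1_Score()`. The suggestion that the choice of positive class explains the discrepancy is an assumption, not a confirmed answer.

```r
library(MLmetrics)

# Illustrative binary data (not the data from the original issue)
y_true <- c(1, 1, 1, 0, 0, 1, 0, 1, 0, 0)
y_pred <- c(1, 0, 1, 0, 1, 1, 0, 1, 0, 0)

# F1 as defined on Wikipedia, treating "1" as the positive class
tp <- sum(y_pred == 1 & y_true == 1)
fp <- sum(y_pred == 1 & y_true == 0)
fn <- sum(y_pred == 0 & y_true == 1)
precision <- tp / (tp + fp)
recall    <- tp / (tp + fn)
manual_f1 <- 2 * precision * recall / (precision + recall)

# Without an explicit `positive`, MLmetrics may score the other class,
# which is a common reason the two numbers disagree.
c(manual       = manual_f1,
  default_pos  = F1_Score(y_true, y_pred),
  explicit_pos = F1_Score(y_true, y_pred, positive = "1"))
```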

Hello. Firstly, thanks for developing this package. I have found some bugs in the `Precision()` function when **all** of the true or predicted values equal the positive value. There is also...
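
The exact failure mode is truncated above, so the sketch below only reconstructs the degenerate inputs being described: one case where every true label is the positive class, and one where every prediction is. On affected versions these calls may return NA or otherwise unexpected values; that behaviour is the issue's claim, not something verified here.

```r
library(MLmetrics)

# Every true label is the positive class
y_true_all_pos <- c(1, 1, 1, 1)
y_pred_mixed   <- c(1, 0, 1, 1)

# Every predicted label is the positive class
y_true_mixed   <- c(1, 0, 1, 1)
y_pred_all_pos <- c(1, 1, 1, 1)

Precision(y_true_all_pos, y_pred_mixed,   positive = "1")
Precision(y_true_mixed,   y_pred_all_pos, positive = "1")
```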

Hi, I encounter the error:

```
Error in FUN(X[[i]], ...): only defined on a data frame with all numeric variables
Traceback:
1. MLmetrics::F1_Score(y_true, y_pred_resp)
2. Precision(y_true, y_pred, positive)
3. Summary.data.frame(structure(list(Freq...
```
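
One common trigger for errors of this shape is labels and predictions arriving with mismatched types or factor levels, so the confusion table ends up with missing rows or columns. The wrapper below is a hedged workaround sketch; the name `f1_aligned` is made up here, and the guess about the cause is an assumption rather than a confirmed diagnosis of this report.

```r
library(MLmetrics)

# Coerce both vectors to factors over the same level set before scoring.
f1_aligned <- function(y_true, y_pred, positive = NULL) {
  lvls   <- union(unique(as.character(y_true)), unique(as.character(y_pred)))
  y_true <- factor(as.character(y_true), levels = lvls)
  y_pred <- factor(as.character(y_pred), levels = lvls)
  F1_Score(y_true = y_true, y_pred = y_pred, positive = positive)
}

f1_aligned(c("yes", "no", "yes"), c("yes", "yes", "no"), positive = "yes")
```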

Hello. Why doesn't the MAPE function multiply the result by 100 to express it as a percentage?
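
Assuming the behaviour being asked about is that `MAPE()` returns the error as a fraction rather than a percentage, a minimal sketch of the workaround is simply to scale the result yourself:

```r
library(MLmetrics)

y_true <- c(100, 200, 300)
y_pred <- c(110, 180, 330)

# MAPE() returns the mean absolute percentage error as a fraction (0.1 here)...
mape_fraction <- MAPE(y_pred = y_pred, y_true = y_true)

# ...so multiply by 100 if you want it expressed in percent (10%).
mape_percent <- 100 * mape_fraction

c(fraction = mape_fraction, percent = mape_percent)
```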

Just added support for multi-class classification metrics such as:

* Precision (micro and macro averages)
* Recall (micro and macro averages)
* F1 Score (micro and macro averages)

As described...
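
For readers unfamiliar with the terminology, here is a generic sketch of micro- versus macro-averaged precision, recall and F1 for a multi-class problem, written in plain base R. It illustrates the definitions only and is not the code from this pull request.

```r
# Multi-class dummy data (three classes; illustrative only)
y_true  <- c("a", "b", "c", "a", "b", "c", "a", "b")
y_pred  <- c("a", "b", "b", "a", "c", "c", "b", "b")
classes <- sort(unique(y_true))

# Per-class true positives, false positives and false negatives
counts <- t(sapply(classes, function(k) {
  c(tp = sum(y_pred == k & y_true == k),
    fp = sum(y_pred == k & y_true != k),
    fn = sum(y_pred != k & y_true == k))
}))

# Macro average: compute the metric per class, then take the unweighted mean
prec_k   <- counts[, "tp"] / (counts[, "tp"] + counts[, "fp"])
rec_k    <- counts[, "tp"] / (counts[, "tp"] + counts[, "fn"])
macro_f1 <- mean(2 * prec_k * rec_k / (prec_k + rec_k))

# Micro average: pool the counts over all classes, then compute the metric once
micro_prec <- sum(counts[, "tp"]) / sum(counts[, "tp"] + counts[, "fp"])
micro_rec  <- sum(counts[, "tp"]) / sum(counts[, "tp"] + counts[, "fn"])
micro_f1   <- 2 * micro_prec * micro_rec / (micro_prec + micro_rec)

c(macro_f1 = macro_f1, micro_f1 = micro_f1)
```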

First of all, thanks for the great package. I was wondering if it would be possible to have micro/macro statistics (precision, recall, F1 score) for multi-class classification in the future...

Thank you for your outstanding package. I'm perplexed why the default for the "positive" argument for binary metrics (e.g., sensitivity) appears to be "0" (or the lower value). This seems...
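
As an illustration of the behaviour being questioned, here is a minimal sketch comparing the default with an explicitly set positive class, assuming the usual MLmetrics calling convention `Sensitivity(y_true, y_pred, positive)`. Which class the default actually selects is the issue's observation, not something asserted here.

```r
library(MLmetrics)

y_true <- c(1, 0, 1, 1, 0, 1)
y_pred <- c(1, 0, 0, 1, 1, 1)

# With no `positive` given, the lower label ("0" here) is apparently used,
# so this measures how well the zeros are recovered.
Sensitivity(y_true, y_pred)

# Passing the intended positive class makes the result unambiguous.
Sensitivity(y_true, y_pred, positive = "1")
```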