
[BUG]: Custom metrics all report 0.0000 for classification in Pycaret 3.3.1

Open JeffCanadaUser opened this issue 1 year ago • 6 comments

pycaret version checks

  • [X] I have checked that this issue has not already been reported here.

  • [X] I have confirmed this bug exists on the latest version of pycaret.

  • [x] I have confirmed this bug exists on the master branch of pycaret (pip install -U git+https://github.com/pycaret/pycaret.git@master).

Issue Description

The AUC 0.0000 issue was just fixed in PyCaret 3.3.1, but custom metrics still seem to be broken. This bug is present in 3.3.0 and 3.3.1; custom metrics work properly in 3.2.0. Tested with Python 3.9.18.

Using the custom-metric example from the Binary Classification tutorial ( https://github.com/pycaret/pycaret/blob/master/tutorials/Tutorial%20-%20Binary%20Classification.ipynb ), the metric result for any binary classification is always 0.0000.

This is the case for both create_model() and compare_models():

[screenshot: create_model() output showing Custom Metric = 0.0000]

[screenshot: compare_models() output showing Custom Metric = 0.0000]

Reproducible Example

import pycaret
pycaret.__version__
# loading sample dataset from pycaret dataset module
from pycaret.datasets import get_data
data = get_data('diabetes')
# import pycaret classification and init setup
from pycaret.classification import *
s = setup(data, target = 'Class variable', session_id = 123)

# create a custom function
import numpy as np

def custom_metric(y, y_pred):
    tp = np.where((y_pred==1) & (y==1), (100), 0)
    fp = np.where((y_pred==1) & (y==0), -5, 0)
    return np.sum([tp,fp])

# add metric to PyCaret
add_metric('custom_metric', 'Custom Metric', custom_metric)

rf = create_model('rf')

compare_models()
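
For reference, a quick sanity check (a minimal sketch, assuming the functional API from the setup above) is to list the registered metrics with get_metrics(); the new entry should appear under the ID 'custom_metric' even though it scores 0.0000:

# list the metric table used by create_model() and compare_models();
# a 'custom_metric' row here confirms the registration itself succeeded
print(get_metrics())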

Expected Behavior

Expected behaviour, which works fine in PyCaret 3.2.0, is:

[screenshot: PyCaret 3.2.0 output with non-zero Custom Metric values]

Actual Results

Actual results are 0.0000 for every "Custom Metric" entry in the table.

Installed Versions

System: python: 3.9.18 | packaged by conda-forge | (main, Dec 23 2023, 16:33:10) [GCC 12.3.0] executable: /opt/conda/envs/data-sci/bin/python machine: Linux-5.15.0-101-generic-x86_64-with-glibc2.35

PyCaret required dependencies: pip: 24.0 setuptools: 69.1.1 pycaret: 3.3.0 IPython: 8.12.3 ipywidgets: 8.1.2 tqdm: 4.66.2 numpy: 1.25.2 pandas: 2.1.4 jinja2: 3.1.3 scipy: 1.10.1 joblib: 1.3.2 sklearn: 1.4.1.post1 pyod: 1.1.3 imblearn: 0.12.0 category_encoders: 2.6.3 lightgbm: 4.3.0 numba: 0.58.1 requests: 2.31.0 matplotlib: 3.7.5 scikitplot: 0.3.7 yellowbrick: 1.5 plotly: 5.19.0 plotly-resampler: Not installed kaleido: 0.2.1 schemdraw: 0.15 statsmodels: 0.14.1 sktime: 0.26.1 tbats: 1.1.3 pmdarima: 2.0.4 psutil: 5.9.8 markupsafe: 2.1.5 pickle5: Not installed cloudpickle: 3.0.0 deprecation: 2.1.0 xxhash: 3.4.1 wurlitzer: 3.0.3

PyCaret optional dependencies: shap: 0.44.1 interpret: Not installed umap: Not installed ydata_profiling: 4.6.5 explainerdashboard: Not installed autoviz: Not installed fairlearn: Not installed deepchecks: Not installed xgboost: 2.0.3 catboost: 1.2.3 kmodes: Not installed mlxtend: Not installed statsforecast: 1.7.3 tune_sklearn: Not installed ray: Not installed hyperopt: Not installed optuna: Not installed skopt: Not installed mlflow: Not installed gradio: Not installed fastapi: Not installed uvicorn: Not installed m2cgen: Not installed evidently: Not installed fugue: 0.8.7 streamlit: Not installed prophet: 1.1.5

JeffCanadaUser · Apr 16 '24 14:04

ping @moezali1 @Yard1

celestinoxp · Jun 17 '24 20:06

@moezali1 @Yard1

kuanhan · Jul 20 '24 19:07

This should work for binary classification. I haven't figured out multiclass classification yet.

def custom_metric(y, y_pred, **kwargs):
    # **kwargs lets the function accept any extra keyword arguments the scorer passes
    tp = np.where((y_pred==1) & (y==1), (100), 0)
    fp = np.where((y_pred==1) & (y==0), -5, 0)
    return np.sum([tp,fp])
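
For the multiclass case, one possible sketch (the custom_metric_multiclass name is hypothetical, and this has not been verified against PyCaret's multiclass scorer) keeps the same payoff by treating class 1 as the positive class:

import numpy as np

def custom_metric_multiclass(y, y_pred, **kwargs):
    # keep the binary payoff, treating class 1 as the "positive" class:
    # +100 for every correct prediction of class 1, -5 for every wrong one
    y = np.asarray(y)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y == 1)) * 100
    fp = np.sum((y_pred == 1) & (y != 1)) * -5
    return tp + fp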

CMobley7 · Aug 01 '24 20:08

@CMobley7 this did not work for me when I tried it in the tutorial, even after updating my pycaret from GitHub with pip install git+https://github.com/pycaret/pycaret.git@master --upgrade . Can you give some guidance on how you made this work? Thanks!

cantones1 · Aug 15 '24 20:08

@CMobley7 this did not work for me when I tried it in the tutorial, even after updating my pycaret from GitHub with pip install git+https://github.com/pycaret/pycaret.git@master --upgrade . Can you give some guidance on how you made this work? Thanks!

Sadly, it didn't work for me either.

kuanhan · Aug 15 '24 20:08

Oops, @kuanhan and @CMobley7, I think the issue is that there was no "import numpy as np" anywhere in my code.

So this actually did work for me:

import numpy as np

def custom_metric(y, y_pred, **kwargs):
    tp = np.where((y_pred==1) & (y==1), (100), 0)
    fp = np.where((y_pred==1) & (y==0), -5, 0)
    return np.sum([tp,fp])

add_metric('custom_metric', 'Custom Metric', custom_metric)

compare_models()

cantones1 · Aug 16 '24 06:08