
Added dropna to avoid crash on NaN values

AlexanderZender opened this pull request 2 years ago • 17 comments

Added .copy() to every predict and predict_proba call so that only definitive copies are passed. Fixed a crash that occurs when categorical features contain NaN values. Added an option to allow NaN category values in the frontend.

issue: https://github.com/oegedijk/explainerdashboard/issues/273

AlexanderZender avatar Jul 14 '23 11:07 AlexanderZender

During the initial opening sequence of the dashboard, a prediction is made using all-NaN values:


This causes an exception in the console because the model's encoder cannot deal with the passed values. That is accurate: e.g. Name is a string column, yet -999 is passed. This initial exception does not seem to impact the dashboard. I do not know whether always making 'NaN' predictions during the initial opening of the dashboard is intended or not, but I wanted to note it here.
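For context, the crash can be reconstructed in isolation. This is a hypothetical sketch (the encoder type and column values are assumptions, not taken from the dashboard's code): an encoder fitted on string values, like the Name column, receives the numeric -999 sentinel that the dashboard substitutes for NaN, which is not a known category.

```python
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

# Fit an encoder on string categories, as a pipeline step for Name would be.
enc = OrdinalEncoder()
enc.fit(pd.DataFrame({"Name": ["Braund, Mr. Owen", "Allen, Miss. Elisabeth"]}))

# Passing the -999 sentinel instead of a known string makes transform raise.
try:
    enc.transform(pd.DataFrame({"Name": [-999]}))
except Exception as exc:
    print(f"encoder rejected the sentinel: {type(exc).__name__}")
```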

AlexanderZender avatar Jul 14 '23 13:07 AlexanderZender

I added a 'NaN' category value to the categorical_dict whenever a categorical feature contains NaN values. Without this, the model's encoder breaks, because -999 is always passed instead of NaN, which is a known/allowed value.
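A minimal sketch of that idea, assuming categorical_dict is a mapping from column name to the list of allowed values offered in the frontend (the column name and structure here are illustrative, not the dashboard's actual internals):

```python
import pandas as pd

# A categorical column that contains a missing value.
X = pd.DataFrame({"Embarked": ["S", "C", None, "Q"]})

# Build the allowed-values mapping from the observed categories.
categorical_dict = {
    col: sorted(X[col].dropna().unique().tolist())
    for col in ["Embarked"]
}

# If the column has NaNs, register "NaN" as an explicit extra category
# so the frontend can offer it instead of an out-of-vocabulary sentinel.
if X["Embarked"].isna().any():
    categorical_dict["Embarked"].append("NaN")

print(categorical_dict)  # {'Embarked': ['C', 'Q', 'S', 'NaN']}
```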

AlexanderZender avatar Jul 14 '23 13:07 AlexanderZender

I'm not sure why we are adding all those .copy() calls to the prediction calls? Prediction shouldn't have any side effects on X, so why is this needed?

oegedijk avatar Aug 01 '23 11:08 oegedijk

Could you add some tests that show this works?

oegedijk avatar Aug 01 '23 11:08 oegedijk

The copy is technically not needed, but it avoids errors, for example when you pass a pipeline that manipulates the incoming dataframe during its processing. Those changes are indirectly applied to the dataframe held by explainerdashboard, so the next time explainerdashboard calls predict_proba, it uses the manipulated dataframe. This causes a mismatch within the model's pipeline, as the dataframe no longer matches what the pipeline expects.

You could leave that for the user to manage, but it may not always be possible to add a copy step within the pipeline or one's own code.

Edit: since your comment asked about the predict function, I was unsure whether this problem also occurs there. I only observed predict_proba causing issues, so maybe the copies are unnecessary for predict? I can remove the copies for predict.

AlexanderZender avatar Aug 01 '23 11:08 AlexanderZender

@oegedijk I added the requested test and removed copy from all predict function calls. I also added a test for categorical labels; it will fail, as explainerdashboard does not currently support them.

I'm not completely sure how many changes are needed to allow categorical labels, but the test is already there, along with a dataset to test with. The dataset is an adjusted subset of the car dataset from OpenML. Maybe this is something to look into, as there are numerous datasets with categorical labels, and as far as I can see explainerdashboard already converts numerical labels to strings (?)

AlexanderZender avatar Aug 01 '23 13:08 AlexanderZender

> I added the requested test and removed copy from all predict function calls.

I guess all these predict functions only predict a single row, so maybe the cost of adding these copies is not so bad? An alternative would be doing them only for pipelines, but that would also introduce additional overhead.
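A rough micro-benchmark supports this: copying a single-row DataFrame costs on the order of microseconds, so a defensive .copy() before each per-row predict call is cheap. (Columns below are made up for illustration.)

```python
import timeit
import pandas as pd

# A single-row input, as passed to the per-observation predict calls.
row = pd.DataFrame({"Age": [22.0], "Fare": [7.25], "Name": ["X"]})

# Average the copy cost over many iterations.
per_copy = timeit.timeit(row.copy, number=10_000) / 10_000
print(f"per-copy cost: {per_copy * 1e6:.1f} microseconds")
```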

oegedijk avatar Aug 02 '23 19:08 oegedijk

> I added a test for categorical labels; it will fail, as explainerdashboard does not currently support them.

I think it should be possible to reuse the Titanic test set? Just replace the 0/1 labels with 'survived'/'not survived'. For testing the NaNs, we could just randomly sprinkle some NaNs in there.

What would be the fastest/cheapest model that works well with missing values, to minimize test time? (HistGradientBoostingClassifier comes to mind, but maybe there are faster options.)
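The suggested test setup can be sketched like this (column names, label strings, and the NaN fraction are illustrative assumptions, not the actual test code):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Replace the 0/1 Titanic labels with string labels.
y = pd.Series([0, 1, 1, 0, 1])
y_str = y.map({0: "not survived", 1: "survived"})

# Randomly sprinkle NaNs into a feature column (~20% of rows).
X = pd.DataFrame({"Age": [22.0, 38.0, 26.0, 35.0, 28.0]})
nan_mask = rng.random(len(X)) < 0.2
X.loc[nan_mask, "Age"] = np.nan
```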

oegedijk avatar Aug 02 '23 19:08 oegedijk

> I added the requested test and removed copy from all predict function calls.
>
> I guess all these predict functions only predict a single row, so maybe the cost of adding these copies is not so bad? An alternative would be doing them only for pipelines, but that would also introduce additional overhead.

I think the overhead depends on the complexity of the dataset being used. When does explainerdashboard call the predict function?

AlexanderZender avatar Aug 03 '23 06:08 AlexanderZender

> I added a test for categorical labels; it will fail, as explainerdashboard does not currently support them.
>
> I think it should be possible to reuse the Titanic test set? Just replace the 0/1 labels with 'survived'/'not survived'. For testing the NaNs, we could just randomly sprinkle some NaNs in there.
>
> What would be the fastest/cheapest model that works well with missing values, to minimize test time? (HistGradientBoostingClassifier comes to mind, but maybe there are faster options.)

That would be possible if it does not cause issues with other tests. As I'm not familiar with the entire test suite, I avoided changing it, but I can adapt my tests.

As for the training time, I just used a RandomForestClassifier, which trains in less than a second on my system. In my "pipeline", NaN values are never passed to the model, as the one-hot encoder ignores them. The test checks that NaNs in the original dataset do not break the dashboard; what the pipeline does with the NaNs is irrelevant.
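The "one-hot encoder ignores NaN" behaviour can be illustrated with pandas (a sketch, not the PR's actual pipeline): with pd.get_dummies and its default dummy_na=False, a NaN entry simply gets all-zero indicator columns, so the model itself never sees a NaN.

```python
import numpy as np
import pandas as pd

# A categorical feature with one missing value.
s = pd.Series(["male", "female", np.nan])

dummies = pd.get_dummies(s)
print(dummies.columns.tolist())    # ['female', 'male'] -- no NaN column
print(int(dummies.iloc[2].sum()))  # 0 -- the NaN row is all zeros
```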

AlexanderZender avatar Aug 03 '23 06:08 AlexanderZender

I updated the test to use the available Titanic data. I perform the required manipulations in the test, e.g. adding NaNs to Name or label-encoding the target.

AlexanderZender avatar Aug 03 '23 07:08 AlexanderZender

> As for the training time, I just used a RandomForestClassifier, which trains in less than a second on my system. In my "pipeline", NaN values are never passed to the model, as the one-hot encoder ignores them. The test checks that NaNs in the original dataset do not break the dashboard; what the pipeline does with the NaNs is irrelevant.

Ah, apologies, I didn't fully understand then. I thought you meant algorithms that can deal with missing values themselves (such as HistGradientBoostingClassifier), but you mean pipelines that fill in NaNs. Both might be something we should be able to support.

oegedijk avatar Aug 03 '23 12:08 oegedijk

Will have a closer look at the code hopefully tomorrow or this weekend. But thanks already for this contribution, let's try to get it ready and released quickly!

oegedijk avatar Aug 03 '23 12:08 oegedijk

> Will have a closer look at the code hopefully tomorrow or this weekend. But thanks already for this contribution, let's try to get it ready and released quickly!

Sounds good, let me know if you find any issues with it. Will you have time to take a look at the categorical labels? I think it would be great to merge these together.

AlexanderZender avatar Aug 03 '23 14:08 AlexanderZender

You are correct, I removed it.

AlexanderZender avatar Aug 10 '23 11:08 AlexanderZender

@oegedijk Totally forgot that there was a conflict, oops.

AlexanderZender avatar Nov 03 '23 12:11 AlexanderZender

@oegedijk Any idea when this will be looked at?

AlexanderZender avatar Feb 02 '24 12:02 AlexanderZender