
[Bug]: Lime does not work for segmentation

Open AntoninPoche opened this issue 1 year ago • 0 comments

Module

Attributions

Current Behavior

Lime does not work with the segmentation operator on real models. However, KernelShap works, so the failure may be due to the constraints of the linear model used in Lime.

Expected Behavior

Either make Lime work with the segmentation operator, or document that it is not compatible.

Version

1.2.0

Environment

- OS:
- Python version:
- Tensorflow version:
- Packages used version:

Relevant log output

---------------------------------------------------------------------------
ZeroDivisionError                         Traceback (most recent call last)
<ipython-input-16-abed90b21189> in <cell line: 13>()
     20 
     21     # compute explanations
---> 22     explanation = explainer(inputs, targets)
     23 
     24     # show explanations for a method

/usr/local/lib/python3.10/dist-packages/xplique/attributions/base.py in __call__(self, inputs, labels)
    111                  labels: tf.Tensor) -> tf.Tensor:
    112         """Explain alias"""
--> 113         return self.explain(inputs, labels)
    114 
    115 

/usr/local/lib/python3.10/dist-packages/xplique/attributions/base.py in sanitize(self, inputs, targets, *args)
     30         inputs, targets = tensor_sanitize(inputs, targets)
     31         # then enter the explanation function
---> 32         return explanation_method(self, inputs, targets, *args)
     33 
     34     return sanitize

/usr/local/lib/python3.10/dist-packages/xplique/attributions/lime.py in explain(self, inputs, targets)
    211         batch_size = self.batch_size or self.nb_samples
    212 
--> 213         return Lime._compute(self.model,
    214                             batch_size,
    215                             inputs,

/usr/local/lib/python3.10/dist-packages/xplique/attributions/lime.py in _compute(model, batch_size, inputs, targets, inference_function, interpretable_model, similarity_kernel, pertub_func, ref_value, map_to_interpret_space, nb_samples)
    330             explain_model = interpretable_model
    331 
--> 332             explain_model.fit(
    333                 interpret_samples.numpy(),
    334                 perturbed_targets.numpy(),

/usr/local/lib/python3.10/dist-packages/sklearn/linear_model/_ridge.py in fit(self, X, y, sample_weight)
   1132             y_numeric=True,
   1133         )
-> 1134         return super().fit(X, y, sample_weight=sample_weight)
   1135 
   1136 

/usr/local/lib/python3.10/dist-packages/sklearn/linear_model/_ridge.py in fit(self, X, y, sample_weight)
    864 
    865         # when X is sparse we only remove offset from y
--> 866         X, y, X_offset, y_offset, X_scale = _preprocess_data(
    867             X,
    868             y,

/usr/local/lib/python3.10/dist-packages/sklearn/linear_model/_base.py in _preprocess_data(X, y, fit_intercept, normalize, copy, sample_weight, check_input)
    250                 )
    251             else:
--> 252                 X_offset = np.average(X, axis=0, weights=sample_weight)
    253 
    254             X_offset = X_offset.astype(X.dtype, copy=False)

/usr/local/lib/python3.10/dist-packages/numpy/core/overrides.py in average(*args, **kwargs)

/usr/local/lib/python3.10/dist-packages/numpy/lib/function_base.py in average(a, axis, weights, returned, keepdims)
    545         scl = wgt.sum(axis=axis, dtype=result_dtype, **keepdims_kw)
    546         if np.any(scl == 0.0):
--> 547             raise ZeroDivisionError(
    548                 "Weights sum to zero, can't be normalized")
    549 

ZeroDivisionError: Weights sum to zero, can't be normalized
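The traceback suggests a plausible mechanism (this is a minimal sketch with made-up data, not the actual Xplique internals): Lime fits a weighted linear model (sklearn `Ridge`) on the interpretable samples, weighting each perturbed sample by a similarity kernel. If the kernel collapses to all-zero weights for a given input, `np.average` inside `Ridge.fit` raises exactly the `ZeroDivisionError` shown above.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical stand-ins for Lime's internals: binary interpretable features
# (superpixel on/off masks) and the model outputs on the perturbed samples.
rng = np.random.default_rng(0)
interpret_samples = rng.integers(0, 2, size=(10, 4)).astype(float)
perturbed_targets = rng.random(10)

# Similarity kernel degenerated to zero for every perturbed sample.
zero_weights = np.zeros(10)

model = Ridge(alpha=1.0)
try:
    # Ridge.fit centers X via np.average(X, axis=0, weights=sample_weight);
    # with weights summing to zero, numpy raises ZeroDivisionError.
    model.fit(interpret_samples, perturbed_targets, sample_weight=zero_weights)
except ZeroDivisionError as err:
    print(err)
```

If this is indeed the cause, the difference with KernelShap would be that its weighting scheme never produces an all-zero weight vector, whereas Lime's similarity kernel can when applied to segmentation targets.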

To Reproduce

Run the semantic segmentation tutorial and use the Lime method in it.

AntoninPoche · Oct 06 '23 13:10