
Allow `IntegratedGradients.target` to be inferred for classification models with class score

sakoush opened this issue 3 years ago · 6 comments

As part of ongoing work to integrate the alibi runtime in mlserver, we realised that for IntegratedGradients, if the inference model outputs class scores (e.g. an MNIST handwritten digit recognition model), a target index is required to be passed to IntegratedGradients.explain

i.e.

```python
target = model(X_test_sample).numpy().argmax(axis=1)
```

I see why it is required but at the same time I am not sure how we can easily handle this kind of custom code in production / deployment.

seldon-core v1 does fix this in the IG explainer model wrapper, but the fix is restrictive, i.e. we are not sure it will generalise to all use cases.

Having discussed this with @jklaise, we could consider some changes to alibi to make this case easier to handle.

sakoush · Oct 04 '21 08:10

Here the idea is that IntegratedGradients computes attributions with respect to a scalar output, so naturally, if the output is not scalar we need more information.

The interesting bit is that, in the classification use case with probability outputs, the output is 2-D (n_instances x n_classes) and the target depends on the model prediction, i.e. the argmax across the n_classes axis. Of course, we want to support this type of "model-dependent target" internally, without having to perform a model call external to the explainer.

One option is to extend target to take a set of pre-defined actions, e.g. if target='argmax', it is interpreted internally as "find the scalar index of a general N-D prediction output by taking the argmax over every axis except the first one".
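
As a rough illustration (not existing alibi code; the helper name is made up), that 'argmax' interpretation could be implemented along these lines:

```python
import numpy as np

def _argmax_target(preds: np.ndarray) -> np.ndarray:
    """Hypothetical helper: one scalar target index per instance, obtained by
    taking the argmax over all axes except the first (batch) axis."""
    flat = preds.reshape(preds.shape[0], -1)  # (n_instances, prod of remaining axes)
    return flat.argmax(axis=1)

# For a 2-D classification output of shape (n_instances, n_classes) this is
# equivalent to preds.argmax(axis=1).
```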

The captum library has a good entry on this for reference: https://captum.ai/docs/faq#how-do-i-set-the-target-parameter-to-an-attribution-method. I'm not sure if our implementation currently supports arbitrary indexing to get a scalar from a general N-D prediction output like they do.

jklaise · Oct 20 '21 16:10

Having thought more about it, it seems to me the cleanest and most Pythonic way to solve this is via callbacks. Instead of extending the definition of target to take a restricted set of strings such as argmax that map to internal functions, we could allow the user to pass the function in question directly (and perhaps disambiguate the semantics by introducing another kwarg target_fn, with the expectation that only one of target or target_fn can be not None). In this example target_fn=np.argmax (modulo some partials with axis), but the only requirement on target_fn is that it maps (batched) predictions to scalars.
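
A minimal sketch of how a target_fn kwarg could behave, assuming a simplified explain signature (illustrative only, not the actual alibi API):

```python
from functools import partial
import numpy as np

def explain(X, model, target=None, target_fn=None):
    # Exactly one of `target` / `target_fn` may be specified.
    if (target is None) == (target_fn is None):
        raise ValueError("Specify exactly one of `target` or `target_fn`.")
    if target_fn is not None:
        preds = np.asarray(model(X))   # single model call inside the explainer
        target = target_fn(preds)      # maps batched predictions to scalar indices
    # ... compute attributions with respect to `target` ...

# Usage matching the argmax example above:
# explain(X_test_sample, model, target_fn=partial(np.argmax, axis=1))
```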

(There is a lot of precedent for callbacks as they lead to powerful, Pythonic code; in particular, captum uses this for their Neuron* explainers in exactly the same way: a user-defined callable that selects the neuron of interest within the layer.)

Of course, this brings us to the old issue of how to specify callbacks from outside of Python, e.g. in a config file. It would be silly for someone to pickle a built-in np.argmax function just so that, together with a config file, the explainer can be fully spec'ed. This leads me to think that we should embrace catalogue internally: create internal alibi registries for functions that can be used as callbacks in alibi code, and then let users refer to them via their registered strings, either in config files (which resolve to the alibi function at runtime) or by passing the registered strings to the explainer constructor. (Note that accepting these registered strings seems like the same solution as proposed in the previous comment, but the crucial difference is that arbitrary callbacks would still be supported if the user wishes.)
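
For illustration, the registry idea could look roughly like this with catalogue (the registry namespace and the resolving helper are assumptions, not existing alibi code):

```python
import catalogue
import numpy as np

# Hypothetical internal alibi registry of target functions.
target_fns = catalogue.create("alibi", "target_fns")

@target_fns.register("argmax")
def argmax_target(preds: np.ndarray) -> np.ndarray:
    # Scalar target per instance: argmax over all non-batch axes.
    return preds.reshape(preds.shape[0], -1).argmax(axis=1)

def resolve_target_fn(target_fn):
    # Accept either a registered string (e.g. coming from a config file) or
    # an arbitrary user-supplied callable.
    return target_fns.get(target_fn) if isinstance(target_fn, str) else target_fn
```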

I think the overarching theme is that we should not shy away from callbacks in public Python APIs but embrace them (alibi-detect already does this extensively), at the same time finding a solution for how to specify them via string keys from outside of Python.

Would be keen to hear your thoughts @ascillitoe @sakoush @arnaudvl.

jklaise · Nov 09 '21 10:11

I agree, target_fn seems like a really nice option. The similarity with preprocess_fn in alibi-detect means we can of course reuse lots of the functionality between the two libraries, but more importantly I'd hope the similarity across the libraries would improve the user experience here.

ascillitoe · Nov 09 '21 10:11

> I think the overarching theme is that we should not shy away from callbacks in public Python APIs but embrace them (alibi-detect already does this extensively), at the same time finding a solution for how to specify them via string keys from outside of Python.

Yes, this is great. This pattern can be used in other areas where we need it, such as in AnchorText, where the user could specify a string for language_model that is registered within alibi.

sakoush · Nov 10 '21 08:11

A brief note re: using a catalogue-type function registry for target_fn: we will need to think about whether we want the explainers themselves to accept registry strings and resolve them to a callable (in which case the explainer signatures will change), or whether we adopt the current Alibi Detect strategy, where registered functions such as preprocess_fn can only be specified in a config file and separate utility functions first resolve the registry string before passing the callable to the detector/explainer (in which case the detector/explainer signature stays the same).
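
To make the two options concrete (purely illustrative; neither form exists in alibi today, and resolve_target_fn is the hypothetical helper sketched earlier):

```python
ig = IntegratedGradients(model)

# Option A: the explainer accepts registry strings directly and resolves them
# internally, so its signature widens to Union[str, Callable].
explanation = ig.explain(X, target_fn="argmax")

# Option B (current Alibi Detect style): only config files accept strings; a
# loading utility resolves the string first and the explainer only ever sees
# a callable, leaving its signature unchanged.
explanation = ig.explain(X, target_fn=resolve_target_fn("argmax"))
```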

@jklaise is #523 going to close this issue? If so shall we continue this discussion elsewhere?

ascillitoe · Dec 01 '21 09:12

I like the idea of separating the public API so that strings are only accepted via config and the implementation works on callables only, as that would spare us having to "upgrade" input argument types to things like Union[Callable, Literal['argmax', 'something_else']]. This is a bit fuzzy though, as we already use string keys to refer to functions elsewhere, e.g. segmentation_fn in AnchorImage.

We can keep this issue open but likely carry on the discussion relevant to string keys in a new one once we get to it.

jklaise · Dec 01 '21 10:12