
Add an internal_batch_size argument in the attribution function of DeepLiftShap

znacer opened this issue on Aug 30, 2023 · 0 comments

🚀 Feature

Add an internal_batch_size argument in the attribution function of DeepLiftShap.

Motivation

Using DeepLiftShap with a relatively large set of baselines requires a lot of GPU memory. For a baseline sample of 1,000 examples, I would need more than 20 GB of memory.

Pitch

An internal_batch_size argument in the attribution function of DeepLiftShap, as already exists in IntegratedGradients, would be helpful for limiting memory usage.
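
For reference, this is roughly how internal_batch_size is used with IntegratedGradients today; the tiny model and random inputs are placeholders just to illustrate the call, and the same argument on DeepLiftShap.attribute is what this issue asks for.

```python
import torch
from captum.attr import IntegratedGradients

# Placeholder model and data, only to illustrate the existing call signature.
model = torch.nn.Sequential(
    torch.nn.Linear(10, 4), torch.nn.ReLU(), torch.nn.Linear(4, 2)
)
inputs = torch.randn(8, 10)

ig = IntegratedGradients(model)
# internal_batch_size caps how many expanded examples are forwarded at once,
# bounding GPU memory at the cost of more forward/backward passes.
attr = ig.attribute(
    inputs,
    baselines=torch.zeros_like(inputs),
    target=0,
    internal_batch_size=16,
)
```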

Alternatives

The alternative solution I found is to compute the DeepLift attributions against each baseline one by one and average them manually.
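
A minimal sketch of that workaround, assuming DeepLiftShap's result is the average of DeepLift attributions taken against each baseline example; model, inputs, and baselines are the user's own objects, and the function name is made up for illustration.

```python
import torch
from captum.attr import DeepLift

def deep_lift_shap_manual(model, inputs, baselines, target=None):
    """Average DeepLift attributions over the baseline examples, one at a
    time, so only a single baseline is expanded in memory at any point."""
    dl = DeepLift(model)
    total = torch.zeros_like(inputs)
    for baseline in baselines:
        # A single baseline (first dim of size 1) is broadcast against all inputs.
        total += dl.attribute(inputs, baselines=baseline.unsqueeze(0), target=target)
    return total / baselines.shape[0]
```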
