Add an internal_batch_size argument in the attribution function of DeepLiftShap
🚀 Feature
Add an internal_batch_size argument in the attribution function of DeepLiftShap.
Motivation
Using DeepLiftShap with a relatively large baseline set requires a lot of GPU memory: for a baseline sample of 1,000 examples, I would need more than 20 GB of memory.
Pitch
An internal_batch_size argument in the attribution function of DeepLiftShap, as already exists in IntegratedGradients, would be helpful for limiting memory usage.
Alternatives
The alternative solution I found is to compute DeepLift attributions myself, one baseline at a time, and average the results manually.