PyExplainer

Questions about the results obtained by an XAI method

Open 9527-ly opened this issue 2 years ago • 0 comments

I found a strange phenomenon. For the same model architecture, the same training and test samples, and otherwise identical operations, the values obtained by using an XAI method (like Saliency) to evaluate the interpretability of the model should, in theory, be the same. However, when I retrain a new model, the interpretability values I obtain are completely different from those of the previous model. The interpretability values are completely unstable and the results cannot be reproduced, unless I save the model right after training and then reload its parameters, in which case the results are the same. Does anyone know why this happens?
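
A minimal sketch of what may be going on, assuming PyTorch with Captum's Saliency (the library, toy model, and data here are assumptions for illustration, not taken from the original setup): saliency attributions are computed from the trained weights, so two training runs only produce identical attributions if every source of randomness (weight initialization, data shuffling, dropout, cuDNN kernel selection) is seeded identically. Reloading saved weights sidesteps the randomness entirely, which would match the behaviour described above.

```python
# Sketch: why retrained models give different saliency values unless
# randomness is fixed or weights are reloaded. Assumes PyTorch + Captum.
import random
import numpy as np
import torch
import torch.nn as nn
from captum.attr import Saliency


def set_all_seeds(seed: int = 0) -> None:
    """Fix every RNG that influences training so retraining is repeatable."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


def train_toy_model(x: torch.Tensor, y: torch.Tensor) -> nn.Module:
    """Train a tiny classifier; weight init and optimisation consume the RNG."""
    model = nn.Sequential(nn.Linear(x.shape[1], 8), nn.ReLU(), nn.Linear(8, 2))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(100):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model


# Hypothetical toy data, purely for illustration.
set_all_seeds(0)
x = torch.randn(64, 4)
y = (x.sum(dim=1) > 0).long()

# Reset the RNG state before each training run: without this, the second
# run starts from a different RNG state, ends at different weights, and
# the saliency values diverge even though the data and code are identical.
set_all_seeds(0)
model_a = train_toy_model(x, y)
set_all_seeds(0)
model_b = train_toy_model(x, y)  # same seeds -> same weights as model_a

sample = x[:1].clone().requires_grad_()
attr_a = Saliency(model_a).attribute(sample, target=int(y[0]))
attr_b = Saliency(model_b).attribute(sample, target=int(y[0]))
print(torch.allclose(attr_a, attr_b))  # True only when seeding is complete
```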

9527-ly avatar Oct 27 '22 09:10 9527-ly