
Influence of positive-class datapoints is consistently higher than influence of negative-class datapoints

Open chenzhiliang94 opened this issue 1 year ago • 0 comments

Hello! First of all thank you for creating this library. It was very helpful for me.

I have a simple dataset with a balanced number of positive and negative labels. After running the influence function calculation with a fitted model (only the classifier head of the neural network is unfrozen, but it's a few layers rather than a single linear layer), I noticed that the influence of every positive-class datapoint is higher than that of every negative-class datapoint. Is there a theoretical reason for this happening? I inspected the datapoints manually and cannot find any sensible explanation.

For the record, my dataset is quite clean (all labels are correct) and my model classifies both classes with high accuracy. Also, I noticed that when I use a less fitted model (trained for only a few epochs), this issue mostly goes away.
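For context, the score in question is the classic influence-function quantity $\mathcal{I}(z, z_{\text{test}}) = -\nabla_\theta L(z_{\text{test}})^\top H^{-1} \nabla_\theta L(z)$, so a class-level asymmetry means the per-example training gradients of one class align systematically better with $H^{-1} \nabla_\theta L(z_{\text{test}})$. Below is a minimal plain-PyTorch sketch of that computation on a toy logistic regression so the per-class means can be compared directly; this is not torch-influence's actual API, and all names (`loss_fn`, the damping value, the use of the full set as the "test" loss) are illustrative assumptions:

```python
import torch

torch.manual_seed(0)

# Toy balanced binary dataset (assumption: stand-in for the real data).
n, d = 40, 3
X = torch.randn(n, d)
w_true = torch.randn(d)
y = (X @ w_true > 0).float()

w = torch.zeros(d, requires_grad=True)

def loss_fn(params, X, y, damp=0.0):
    # Logistic loss with optional L2 damping to keep the Hessian invertible.
    logits = X @ params
    return torch.nn.functional.binary_cross_entropy_with_logits(logits, y) \
        + damp * params.dot(params)

# Fit the model to (near) convergence, mimicking a well-fitted classifier.
opt = torch.optim.LBFGS([w], max_iter=50)
def closure():
    opt.zero_grad()
    loss = loss_fn(w, X, y, damp=1e-3)
    loss.backward()
    return loss
opt.step(closure)

# Hessian of the damped training loss at the fitted parameters.
H = torch.autograd.functional.hessian(
    lambda p: loss_fn(p, X, y, damp=1e-3), w.detach())

# s_test = H^{-1} grad L(test); here the full set stands in for the test loss.
g_test = torch.autograd.grad(loss_fn(w, X, y), w)[0]
s_test = torch.linalg.solve(H, g_test)

# Per-example influence: -s_test . grad L(z_i).
infl = []
for i in range(n):
    g_i = torch.autograd.grad(loss_fn(w, X[i:i+1], y[i:i+1]), w)[0]
    infl.append(-s_test.dot(g_i).item())
infl = torch.tensor(infl)

# Compare mean influence by class; a large gap here would reproduce the issue.
print("mean influence, class 1:", infl[y == 1].mean().item())
print("mean influence, class 0:", infl[y == 0].mean().item())
```

One thing a sketch like this makes easy to check is whether the asymmetry tracks per-class gradient norms at convergence; on a confidently fitted model the surviving loss gradients can be dominated by one class, which would also be consistent with the gap shrinking for a less fitted model.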

chenzhiliang94 avatar Oct 22 '24 09:10 chenzhiliang94