
A question about the 'encode_one_hot_batch' function in gradcam.py

Open · Sangboom opened this issue 4 years ago · 1 comment

I'm trying to use this model with a different dataset. However, I'm having trouble generating an anomaly attention map, so I'd like to ask for advice.

I have a question about a function in gradcam.py. As shown below, 'encode_one_hot_batch' just returns mu and never uses the one_hot_batch it creates. Is this the intended behavior, or is the function not fully implemented yet?

```python
# set the target class as one, others as zero. use this vector for back prop
# added by Lezi
def encode_one_hot_batch(self, z, mu, logvar, mu_avg, logvar_avg):
    one_hot_batch = torch.FloatTensor(z.size()).zero_()
    return mu
```
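For comparison, here is a purely speculative sketch of what a completed version of this function might look like, if the intent was the Grad-CAM pattern of masking out all but one target latent dimension before backprop. The `target_dim` parameter and the whole body are my guess, not code from this repo:

```python
import torch

# Hypothetical completion (my guess, not the repo's code): treat one latent
# dimension as the "target class", zero out the rest with a one-hot mask, and
# return a scalar score per sample to back-propagate from.
def encode_one_hot_batch(z, mu, logvar, mu_avg, logvar_avg, target_dim=0):
    one_hot_batch = torch.zeros_like(mu)    # same shape as the latent mean
    one_hot_batch[:, target_dim] = 1.0      # one at the target dimension, zero elsewhere
    return (mu * one_hot_batch).sum(dim=1)  # per-sample scalar score for backprop
```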

Also, if this function is implemented as intended, could you point me to the part of the code that implements Equation (4) of the paper, the one that generates the anomaly attention map?
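For context, my understanding is that Equation (4) follows the standard Grad-CAM formulation: weights alpha_k are the global-average-pooled gradients of a scalar score with respect to each feature map A^k, and the attention map is M = ReLU(sum_k alpha_k * A^k). A minimal generic sketch (names are mine, not from this repo):

```python
import torch
import torch.nn.functional as F

# Generic Grad-CAM-style attention map (standard formulation, not this repo's code):
# alpha_k = spatial mean of d(score)/d(A^k); M = ReLU(sum_k alpha_k * A^k).
def attention_map(feature_maps, score):
    # feature_maps: (N, K, H, W) with requires_grad=True; score: scalar tensor
    grads = torch.autograd.grad(score, feature_maps, retain_graph=True)[0]
    alpha = grads.mean(dim=(2, 3), keepdim=True)     # (N, K, 1, 1) channel weights
    cam = F.relu((alpha * feature_maps).sum(dim=1))  # (N, H, W) attention map
    return cam
```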

Thanks,

Sangboom avatar Aug 16 '21 09:08 Sangboom

Same question

geighz avatar Apr 05 '23 19:04 geighz