expVAE
A question about the 'encode_one_hot_batch' function in gradcam.py
I'm the one who wanted to use this model with different datasets. However, I'm having trouble generating an anomaly attention map, so I'd like to ask for advice.
I have a question about a function in the gradcam.py file. As shown below, 'encode_one_hot_batch' creates a one_hot_batch tensor but then just returns mu without ever using it. Is this configured as intended, or is it simply not implemented yet?
```python
# set the target class as one, others as zero; use this vector for backprop
# added by Lezi
def encode_one_hot_batch(self, z, mu, logvar, mu_avg, logvar_avg):
    one_hot_batch = torch.FloatTensor(z.size()).zero_()
    return mu
```
Also, if this function is implemented as intended, I'd like to ask which part of the code implements Equation (4) of the paper, the one that generates the anomaly attention map.
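For context, here is a minimal sketch of what I understand the intended mechanism to be: backpropagate a score derived from the latent mean mu through the encoder's convolutional activations, then combine gradients and activations Grad-CAM-style. Everything below (the `TinyEncoder` module, layer sizes, and the use of `mu.sum()` as the score) is my own hypothetical illustration, not code from this repo:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Toy VAE encoder, just to make the gradient flow concrete."""
    def __init__(self, z_dim=8):
        super().__init__()
        self.conv = nn.Conv2d(1, 4, 3, padding=1)     # feature maps A_k to hook
        self.fc_mu = nn.Linear(4 * 8 * 8, z_dim)
        self.fc_logvar = nn.Linear(4 * 8 * 8, z_dim)

    def forward(self, x):
        acts = F.relu(self.conv(x))                   # activations we attend over
        flat = acts.flatten(1)
        return self.fc_mu(flat), self.fc_logvar(flat), acts

def attention_map(encoder, x):
    mu, logvar, acts = encoder(x)
    acts.retain_grad()                                # keep grad on a non-leaf tensor
    # Score to backprop: here the sum over mu, so every latent dim contributes;
    # a true "one-hot" variant would select a single latent dimension instead.
    mu.sum().backward()
    # Grad-CAM combination: channel weights = spatially pooled gradients,
    # attention = ReLU of the weighted sum of feature maps.
    weights = acts.grad.mean(dim=(2, 3), keepdim=True)   # [B, C, 1, 1]
    cam = F.relu((weights * acts).sum(dim=1))            # [B, H, W]
    return cam

enc = TinyEncoder()
cam = attention_map(enc, torch.randn(2, 1, 8, 8))
print(cam.shape)  # torch.Size([2, 8, 8])
```

If the repo's function really were meant to build a one-hot selector over z, I'd expect `one_hot_batch` to be multiplied into mu (or into the backprop score) rather than discarded, which is why the current `return mu` looks incomplete to me.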
Thanks,
Same question