semi-supervised-pytorch
mean over categorical distribution
I don't quite understand the line `sample = torch.mean(sample, dim=1)` (link) in the `forward` of `class GumbelSoftmax(Stochastic)`. It takes the mean over `dim=1`, which is `self.n_distributions`, leaving only `self.out_features`, and this is the input dimension `z_dim` of the decoder `self.decoder = Perceptron([z_dim, *reversed(h_dim), x_dim], output_activation=F.sigmoid)`.

I think the mean should not be taken. Instead, the decoder's input dimension should match the full output dimension of the sampling layer `self.logits = nn.Linear(in_features, n_distributions*out_features)`, i.e. `n_distributions*out_features` rather than just `out_features` (`== z_dim`).
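To make the shape question concrete, here is a minimal sketch (not the library's actual code) that traces both options. The sizes `batch`, `in_features`, `n_distributions`, and `out_features` are hypothetical, and `F.gumbel_softmax` stands in for the repo's sampling logic:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

batch, in_features = 4, 8             # hypothetical sizes
n_distributions, out_features = 3, 5  # out_features plays the role of z_dim

# mirrors self.logits = nn.Linear(in_features, n_distributions*out_features)
logits_layer = torch.nn.Linear(in_features, n_distributions * out_features)

x = torch.randn(batch, in_features)
logits = logits_layer(x).view(batch, n_distributions, out_features)

# one Gumbel-softmax sample per categorical distribution
sample = F.gumbel_softmax(logits, tau=1.0, dim=-1)

# Option A (what the code does): mean over the n_distributions axis,
# so the decoder only ever sees out_features (== z_dim) values
mean_sample = sample.mean(dim=1)
print(tuple(mean_sample.shape))   # (4, 5)

# Option B (what this issue proposes): keep every distribution and flatten,
# so the decoder input dim would be n_distributions * out_features
flat_sample = sample.view(batch, -1)
print(tuple(flat_sample.shape))   # (4, 15)
```

Under Option A the per-distribution samples are averaged away before decoding, which is what prompted this question; Option B would require changing the decoder's first layer width accordingly.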