Ruizhou Ding
Hi, thanks for the question. Quick answer: the code is mainly for ImageNet. For historical reasons, we used a variant of the distribution-loss formulation. The underlying intuition...
Hi Hyungjun, thanks for pointing this out. I just tried changing the first layer to:

    self.features0 = nn.Sequential(
        nn.Conv2d(self.channels[0], self.channels[1], kernel_size=11, stride=4, padding=2),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nn.LeakyReLU(inplace=True),
        nn.BatchNorm2d(self.channels[1]),
    )
    self.features1 = ...
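For anyone who wants to try this layer ordering in isolation, here is a minimal standalone sketch of the block above. The concrete channel sizes (3 in, 64 out) and the 224x224 input size are assumptions based on the usual AlexNet-style ImageNet setup; the original code takes them from self.channels.

```python
# Hedged sketch: standalone version of the reordered first layer
# (Conv -> MaxPool -> LeakyReLU -> BatchNorm) discussed above.
import torch
import torch.nn as nn

features0 = nn.Sequential(
    # 3 -> 64 channels is an assumption (AlexNet-style); the original
    # uses self.channels[0] / self.channels[1].
    nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.LeakyReLU(inplace=True),
    nn.BatchNorm2d(64),
)

x = torch.randn(1, 3, 224, 224)  # dummy ImageNet-sized input
out = features0(x)
print(tuple(out.shape))  # (1, 64, 27, 27)
```

With a 224x224 input, the conv (stride 4, padding 2) gives 55x55 feature maps and the 3x3/stride-2 max pool reduces them to 27x27, so the shapes match the standard AlexNet first stage.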
Exactly the same for me.
Thanks, I worked around it by not using "baseline" in the first layer and using "theano" instead (which I don't believe is really a solution).