Lin-Chieh Huang
I used your optimal value formulation from the paper and found the distribution of the optimal loss shown below. I think this statistical method is a potential tool to...
Here is the class distribution of my training dataset. I sample batches from a multinomial distribution whose parameters are based on that class distribution, and use the optimal...
> Oh nice work!
>
> Can you explain the sentence "I used your optimal value formulation from the paper and found the distribution of the optimal loss shown in...
Here is my updated notebook: https://www.kaggle.com/code/tom99763/sampling-distribuion-of-optimal-loss

The way I compute this optimal loss is:
- Compute the prior class probabilities p1, ..., pC from the training data.
- Repeat 50,000 times:
  1. sample a...
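For reference, here is a minimal sketch of the sampling procedure described above. The truncated steps are assumed: in particular, it assumes the per-batch "optimal loss" is the cross-entropy of a predictor that always outputs the class priors, and the prior values and batch size are purely illustrative.

```python
import numpy as np

# Illustrative class priors p1, ..., pC estimated from the training data
# (assumed values; replace with your dataset's actual class frequencies).
priors = np.array([0.5, 0.3, 0.15, 0.05])
batch_size = 64
n_trials = 50_000

rng = np.random.default_rng(0)
optimal_losses = []
for _ in range(n_trials):
    # 1. Sample a batch of labels from a multinomial with the prior probabilities.
    counts = rng.multinomial(batch_size, priors)
    # 2. Compute the batch loss of the "optimal" predictor, assumed here to be
    #    the one that always predicts the priors (cross-entropy against them).
    loss = -(counts * np.log(priors)).sum() / batch_size
    optimal_losses.append(loss)

optimal_losses = np.array(optimal_losses)
# The batch losses concentrate around the entropy of the priors.
print(optimal_losses.mean(), optimal_losses.std())
```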
I'm going to publish my paper, and I use your idea in it, but for an image-to-image translation task. The amazing thing is that when my model converges, the loss...
> Your work is amazing. When I interpolate the texture code, it changes the structure of the source picture slightly. In the mountain dataset, for example, ...
> That is correct, in that using a smaller patch size will help. You can also make the structure code larger by reducing the number of downsampling steps in the encoder. ...
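To illustrate the point about downsampling steps, here is a minimal sketch with an assumed toy encoder (hypothetical layer layout and channel counts, not the actual Swapping Autoencoder encoder): each stride-2 convolution halves the spatial resolution, so fewer downsampling steps leave a larger structure code.

```python
import torch
import torch.nn as nn

def make_structure_encoder(num_downsampling: int, ch: int = 32) -> nn.Sequential:
    # Toy encoder: each stride-2 conv halves the spatial size of the structure code.
    layers = [nn.Conv2d(3, ch, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
    for _ in range(num_downsampling):
        layers += [nn.Conv2d(ch, ch, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

x = torch.randn(1, 3, 256, 256)
print(make_structure_encoder(4)(x).shape)  # torch.Size([1, 32, 16, 16])
print(make_structure_encoder(3)(x).shape)  # torch.Size([1, 32, 32, 32]) -- larger structure code
```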
> > That is correct, in that using a smaller patch size will help. You can also make the structure code larger by reducing the number of downsampling steps in the...
> Hi @taesungp, great piece of work. I trained it on my dataset of 50k images for 50M iterations as you suggested; at test time the results are quite...
> Hi @tom99763, I suppose it's because the patch discriminator becomes stronger during the course of training, encouraging the generator to make more changes. You can try the following two...