scj0709
Hi! I am impressed with your paper. I am training GLEAN, and I want to use different learning rates for the encoder and the decoder. How do I do this? Thanks~
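In case it helps anyone with the same question: a minimal sketch of one common way to give the encoder and decoder different learning rates in plain PyTorch is to place them in separate optimizer parameter groups. The submodule names `encoder`/`decoder` below are assumptions for illustration only, not GLEAN's actual attribute names or its training config.

```python
import torch
import torch.nn as nn

# Hypothetical toy generator standing in for GLEAN; the real model's
# submodule names and architecture will differ.
class ToyGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Conv2d(3, 64, 3, padding=1)
        self.decoder = nn.Conv2d(64, 3, 3, padding=1)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ToyGenerator()

# One optimizer, two parameter groups, each with its own learning rate.
optimizer = torch.optim.Adam([
    {"params": model.encoder.parameters(), "lr": 1e-4},  # encoder LR
    {"params": model.decoder.parameters(), "lr": 1e-5},  # decoder LR
])
```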
I am very impressed by your paper, so I want to train the restoration phase, but I have run into a problem. Could you help me? Thanks!!
Hi! I am finding your algorithm very useful, but I don't want to detect profile faces. How should I handle this? Thanks!!
### Describe the bug Hello! I'm really impressed with your code; it's a well-structured and excellent GitHub repository. I'd therefore like to train the LibriSpeech Transformer model following...
Hello! I was very impressed by your paper. I am interested in trying out the training myself. Do you happen to have weights for a pretrained VQGAN model on faces...
Hello. I was deeply impressed by your paper. Therefore, I would like to apply your paper's method to other networks. Can this method be applied to other models such as...
Hello! I was really impressed by your paper, so I tried to train it using your training code. When I ran inference and set the w value to 0.7, the...
Hello! Thank you, I am finding your code very useful! I have a question, though. Does your code also calculate the FLOPs of a Transformer model that I have coded myself? For...
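For anyone with the same question, here is a minimal sketch of counting FLOPs for a self-written Transformer with `fvcore` (a generic counter, not necessarily the tool this repository provides); the toy model and input shape below are assumptions for illustration.

```python
import torch
import torch.nn as nn
from fvcore.nn import FlopCountAnalysis

# Hypothetical stand-in for "a Transformer model coded yourself".
model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True),
    num_layers=2,
)
model.eval()

# Dummy input: batch 1, sequence length 128, feature size 256.
dummy = torch.randn(1, 128, 256)

# fvcore traces the forward pass and sums FLOPs per operator; ops it does
# not recognize (e.g. fused attention kernels) are skipped with a warning.
flops = FlopCountAnalysis(model, dummy)
print(f"Total FLOPs: {flops.total():,}")
```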