Peng Chen
Sorry @jinfagang, I'm currently busy preparing my conference paper; the deadline is approaching. I can spare time for this issue around mid-June. If you cannot wait that long, please...
Please send a message to my email: [email protected]
@zhaoyucong I advise using a machine with at least 32 GB of DDR memory to generate the data. My best PSNR for 2x is 37.5053, which didn't reach the result of the...
Hi @flystarhe, did you find a solution for multi-threaded loading of HDF5 files?
@JYP2011 @twtygqyy In your experience, which works better: normalization or no normalization? The requirement that the input be normalized into 0~1 could probably be relaxed by adding a batchnorm layer.
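A minimal PyTorch sketch of what I mean (the model and names here are hypothetical, not from this repo): placing a BatchNorm2d layer right at the input lets the network learn its own input scale and shift, so explicit scaling into 0~1 may become unnecessary.
```
import torch
import torch.nn as nn

# Hypothetical model: an input-side BatchNorm2d normalizes raw-intensity
# inputs (e.g. 0~255), standing in for manual 0~1 normalization.
class SRNetWithInputBN(nn.Module):
    def __init__(self, channels=1):
        super().__init__()
        self.input_bn = nn.BatchNorm2d(channels)  # learns input scale/shift
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, x):  # x: raw-intensity tensor, not pre-normalized
        return self.body(self.input_bn(x))

# Usage: feed unnormalized inputs directly.
model = SRNetWithInputBN()
y = model(torch.rand(4, 1, 32, 32) * 255.0)
```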
Hi, updating to PyTorch 1.0 could fix your problem.
Based on the discussion above, here is a rough sketch of the code (unverified; I will try running it soon):
```
import torch
import torch.nn.functional as F

criterion = torch.nn.CTCLoss()
# fm shape: [N, B, C] (input length, batch, classes)
output = F.log_softmax(fm, dim=2)
ctc_loss = criterion(output, target, pred_size, target_length)
p = torch.exp(-ctc_loss)               # probability of the target sequence
focal_loss = -(1 - p) * torch.log(p)   # down-weight easy samples (gamma = 1)
```
[update] No benefit in my training with the above code.
In my understanding, the lowering technique is similar to the im2col function in Caffe, and batching means using a large batch size during training, distributed across CPUs...
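For context, here is a minimal NumPy sketch of im2col-style lowering (my own illustration, not Caffe's implementation): k x k patches are unfolded into columns so the convolution reduces to a matrix multiply.
```
import numpy as np

# Unfold k x k patches of a 2-D input into columns, so convolution
# becomes a GEMM: (1, k*k) @ (k*k, oh*ow) -> flattened output map.
def im2col(x, k):
    h, w = x.shape
    oh, ow = h - k + 1, w - k + 1
    cols = np.empty((k * k, oh * ow))
    for i in range(oh):
        for j in range(ow):
            cols[:, i * ow + j] = x[i:i + k, j:j + k].ravel()
    return cols

x = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3))
out = kernel.ravel() @ im2col(x, 3)  # convolution via matrix multiply
print(out.reshape(3, 3))
```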
In my experiment, the original accuracy of VGG16 on the ImageNet evaluation dataset is top1/top5: 0.6835/0.88442. After 50% log quantization, the top1/top5 accuracy dropped to 0.63682/0.85252. The accuracy stayed the same even...
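A minimal sketch of the log (power-of-two) quantization step, under my own assumptions (the helper and exponent range are hypothetical, not the exact script used above): each weight's magnitude is rounded to the nearest power of two while its sign is kept.
```
import torch

# Illustrative log quantization: round |w| to the nearest power of two
# within [2**min_exp, 2**max_exp], keeping the sign of each weight.
def log_quantize(w, min_exp=-8, max_exp=0):
    sign = torch.sign(w)
    mag = torch.clamp(w.abs(), min=2.0 ** min_exp)  # avoid log2(0)
    exp = torch.clamp(torch.round(torch.log2(mag)), min_exp, max_exp)
    return sign * (2.0 ** exp)

w = torch.randn(5) * 0.1
print(w)
print(log_quantize(w))
```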