RahulBhalley
Thanks, joined.
Done.
I'm working on updating the code to version 0.4.1, along with results, in a few days. That'll fix this problem. ✌️
Thanks for the correction. Would really appreciate any PRs 😄
@mbbrodie @Arsey Yeah, I'd love to add sample outputs from PGGAN to the README. Actually, I'm planning to update the code to include the recent _Optimal Transport_ theory-based training...
@jtcramer What kind of hyper-parameter tuning? And, most importantly, why? The paper should give a complete description of the hyper-parameters used in its experiments. 🤔
You can use PyTorch Lightning instead. It automatically parallelizes model training across GPUs, and it also supports TPUs with just a single argument change.
Hi @xuebinqin! The v2 demo results look incredibly awesome!! Could you please tell us when it will be released?? Thanks.
Thanks for the quick reply! Maybe you could release the model trained on a **subset of the whole V2 dataset** for now. I understand that's not a correct way of presenting...
I converted LaMa (which uses FFT heavily) and it doesn't load on the ANE. Is FFT compatible with the ANE or not? @TobyRoseman