ariel415el
Thanks @williantrevizan, your fix worked for me.
Hi, I'm also trying to train this repo. What image resolution are you using? In the paper (Appendix B) they say they trained 256x256 CelebA-HQ for 500k steps of 64...
Thanks @Smith42. The thing is that for me and @qshzh the training loss plateaus, so I'm not sure how more steps would help. Did your loss continue decreasing throughout training?...
Also, they are using the GT labels of MNIST, which, to my knowledge, is not what they are supposed to do.
No, when I changed https://github.com/adobe-research/sam_inversion/blob/4852a2a033ac5af981f91b9eb2baa3df6e2229fa/src/sam_inv_optimization.py#L141 to T_full = build_t(W=512, H=384) it didn't work, but T_full = build_t(W=256, H=192) worked. Can you explain this? How can I run inference on bigger...
@CaParmar you named the file **segmenter_utils.pt** instead of **.py**
I managed to do so with TensorRT in C++ by converting this model to ONNX.
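Roughly, the export side looks like the sketch below (the `SegModel` stand-in and the 256x256 input are placeholders, not the actual model); the resulting ONNX file is then built into a TensorRT engine and consumed from the C++ side:

```python
# Rough sketch of the PyTorch -> ONNX export step; `SegModel` and the
# input resolution are placeholders for the real model and image size.
import torch
import torch.nn as nn

class SegModel(nn.Module):          # stand-in for the actual segmentation network
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 1, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.sigmoid(self.conv(x))

model = SegModel().eval()
dummy_input = torch.randn(1, 3, 256, 256)   # adjust to the model's expected input
torch.onnx.export(
    model, dummy_input, "model.onnx",
    input_names=["input"], output_names=["output"],
    opset_version=13,
)
# The exported model.onnx can then be parsed by TensorRT
# (e.g. `trtexec --onnx=model.onnx`) to build the engine loaded from C++.
```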
Hi, I think grid artifacts are a sign of a local minimum in the optimization process (patches with grid signs are out of distribution). Changing the kernel size is a...
Very interesting, I hadn't noticed that. It might also be useful to use more SWD projections (see the sketch below). Please do share your conclusions.
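By "more SWD projections" I mean something along the lines of this sketch (my own minimal version, not the repo's code), where `num_proj` is the knob to raise:

```python
# Minimal sliced Wasserstein distance sketch over flattened patch vectors.
import torch

def swd(x, y, num_proj=128):
    """x, y: (N, D) tensors of flattened patches (same N)."""
    d = x.shape[1]
    proj = torch.randn(d, num_proj, device=x.device)
    proj = proj / proj.norm(dim=0, keepdim=True)   # unit-length random directions
    x_sorted = (x @ proj).sort(dim=0).values       # 1-D projections, sorted per direction
    y_sorted = (y @ proj).sort(dim=0).values
    return ((x_sorted - y_sorted) ** 2).mean()     # mean squared 1-D transport cost
```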
Very good example. I tried it myself and I too get this grid effect. I'm not quite sure whether the grid size is determined by the patch size, which was...