CAP-VSTNet
[CVPR 2023] CAP-VSTNet: Content Affinity Preserved Versatile Style Transfer
Looking forward to your reply, thank you.
This is the console output:
```
Iteration: 00161080/00170000 content_loss:0.0000 lap_loss:0.3854 rec_loss:0.0622 style_loss:1.4862 loss_tmp:0.5256 loss_tmp_GT:0.0664
Iteration: 00161090/00170000 content_loss:0.0000 lap_loss:0.1441 rec_loss:0.1067 style_loss:0.7328 loss_tmp:0.2622 loss_tmp_GT:0.0847
Iteration: 00161100/00170000 content_loss:0.0000 lap_loss:0.0956 rec_loss:0.0610 style_loss:0.3879 loss_tmp:0.4483...
```
Hello, I am very interested in your work! I see that one of your quantitative metrics is Gram Loss. Can you share the calculation code? I referred to https://github.com/ProGamerGov/neural-style-pt/tree/master...
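For reference, the Gram-matrix style loss is usually computed from VGG feature maps as the mean squared difference of their channel-correlation (Gram) matrices. A minimal sketch, using NumPy arrays in place of feature tensors; this is a standard formulation and not necessarily the authors' exact evaluation code:

```python
import numpy as np

def gram_matrix(feat):
    # feat: (C, H, W) feature map from one network layer.
    # Flatten spatial dims, then compute channel-by-channel inner products,
    # normalized by the number of elements.
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def gram_loss(stylized_feats, style_feats):
    # Mean squared difference of Gram matrices, summed over layers.
    return sum(np.mean((gram_matrix(a) - gram_matrix(b)) ** 2)
               for a, b in zip(stylized_feats, style_feats))
```

The same structure carries over directly to PyTorch tensors by replacing `reshape`/`@` with `view`/`torch.bmm` on batched features.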
In "Matting Laplacian [22] may result in blurry images when used with another network, such as one with an encoder-decoder architecture. But it does not have this issue in...
Thank you for your excellent work. Could you provide instructions on how to fine-tune your pretrained model on my own dataset? I'd appreciate it.
Hi! One piece of feedback on the setup instructions: `pip install -e . --user` seemed to add the path to every one of my envs' `sys.path`, which is non-ideal for me,...
Could you share the code for calculating the temporal loss?
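Temporal losses in video style transfer are commonly computed by warping the previous stylized frame to the current one with optical flow and penalizing differences where the flow is reliable. A minimal sketch of that common formulation (the flow warping and occlusion mask are assumed to be computed elsewhere; this is not necessarily the paper's exact loss):

```python
import numpy as np

def temporal_loss(curr, warped_prev, occ_mask):
    # curr, warped_prev: (C, H, W) stylized frames; warped_prev is the
    # previous stylized frame warped to the current frame via optical flow.
    # occ_mask: (H, W) in {0, 1}, zero where the flow is occluded/unreliable.
    diff = (curr - warped_prev) ** 2
    # Average the masked squared error over valid pixels and channels.
    denom = max(float(occ_mask.sum()), 1.0)
    return float((diff * occ_mask).sum() / (denom * curr.shape[0]))
```

The `loss_tmp` / `loss_tmp_GT` values in the training log above suggest this penalty is applied both to stylized frames and to ground-truth frames, but the exact pairing would need confirmation from the authors.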
> Then, download the pre-trained weight segformer.b5.640x640.ade.160k.pth ([google drive](https://drive.google.com/drive/folders/1GAku0G0iR9DsBxCbfENWMJ27c5lYUeQA?usp=sharing)) and save at models/segmentation/SegFormer/segformer.b5.640x640.ade.160k.pth. It seems the file is not shared for everyone. Is that intentional?
I really like this paper's code: it uses very few libraries, and the tunable parameters are all gathered at the top, which is very nice. I...