yu45020
+1 I tried swapping all convolutions in MobileNet V2 with gated ones and added another decoder for image in-painting, but the result is not good. Here...
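For reference, here is a minimal sketch of a gated convolution as I understand it from the paper (the module name and hyper-parameters are my own; the activation choice is an assumption):

```python
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """Gated convolution: output = activation(features) * sigmoid(gate).
    The gate branch learns a soft mask per channel and spatial location."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        # Two parallel convolutions: one for features, one for the gate.
        self.feature = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.gate = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.activation = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        return self.activation(self.feature(x)) * torch.sigmoid(self.gate(x))
```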
Great! My code is meant as a drop-in replacement in my current project, so it has BN and activation options. Sorry for the confusion. But I did misunderstand your paper...
@xhh232018 I am also re-implementing the paper for my project. For the first question, I guess you are right. ~If you flatten the last layer into a linear layer with one...
@xhh232018 Not yet. I followed the structure in [deep fill](https://github.com/JiahuiYu/generative_inpainting/blob/06cd62cfca8c10c349b451fa33d9cbb786bfaa20/inpaint_model.py#L29) and swapped all convolutions with gated convolutions. The GAN part is similar to the paper's. The loss stays high....
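For reference, a minimal sketch of the hinge loss used in the SN-PatchGAN part as I understand it (function names are my own):

```python
import torch
import torch.nn.functional as F

def d_hinge_loss(d_real, d_fake):
    # Discriminator hinge loss: push real scores above +1, fake scores below -1.
    return torch.mean(F.relu(1.0 - d_real)) + torch.mean(F.relu(1.0 + d_fake))

def g_hinge_loss(d_fake):
    # Generator hinge loss: raise the discriminator's score on generated samples.
    return -torch.mean(d_fake)
```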
Thanks for the info. I notice deep fill v2 removes batch norm to avoid color inconsistency, as the author mentioned above. But my application focuses on black-and-white images,...
@nagadomi I came from [this issue](https://github.com/nagadomi/waifu2x/issues/236#issuecomment-442179525). Thanks for sharing the new model. Have you tried atrous convolutions for image up-scaling? There is a [paper](https://arxiv.org/ftp/arxiv/papers/1709/1709.00179.pdf) that uses atrous convolutions to segment small...
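The appeal of atrous (dilated) convolutions here is that they enlarge the receptive field without downsampling or extra parameters. A small sketch (layer widths and dilation rates are my own choices):

```python
import torch
import torch.nn as nn

# Stacking dilated 3x3 convs grows the receptive field quickly
# (rates 1, 2, 4 -> an effective 15x15 field) while keeping full
# resolution, which is why they help with small structures.
block = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1, dilation=1),
    nn.Conv2d(64, 64, 3, padding=2, dilation=2),
    nn.Conv2d(64, 64, 3, padding=4, dilation=4),
)
x = torch.randn(1, 3, 128, 128)
print(block(x).shape)  # torch.Size([1, 64, 128, 128]) -- spatial size preserved
```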
@nagadomi Thanks for the advice! I will also try a U-Net-like model before training. Your project seems to accomplish what I want. It is very interesting and seems to...
Thank you very much for the paper! I tend to favor non-GAN models since I had limited GPU power before (>
I'm no expert, no need to be so polite. Please try again with `demo_segmentation.py` from the latest commit. If you don't have an NVIDIA card, change the lines below:

https://github.com/yu45020/Text_Segmentation_Image_Inpainting/blob/ae16690b91b5a649eaa7435ef91d04653663a4ec/Examples/demo_segmentation.py#L57

```python
# model = model.cuda()
```

https://github.com/yu45020/Text_Segmentation_Image_Inpainting/blob/ae16690b91b5a649eaa7435ef91d04653663a4ec/Examples/demo_segmentation.py#L66

```python
for i in evalset:
    process(i, 'cpu')
```

I have recently refactored almost the entire project, so it looks completely different now ...... The text segmentation part never trains to my satisfaction on the images I have collected; I need to think of another approach.
My cv2 is 4.0; the API probably changed. The training images all come from scanlation groups' text-free pages plus the original pages, so the model only works on manga. However, the text removal is not entirely consistent across sources, which hurts the results. The font-size issue can be handled by simply shrinking or enlarging the input image (see the sketch below). The training code depends on the specific model, so forgive me for not releasing it. If you are interested, I recommend starting from [here](https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html)
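A minimal sketch of the resize idea (the file name and scale factor are hypothetical; `cv2.resize` with `fx`/`fy` works in cv2 4.0):

```python
import cv2

# If the text in the input is much larger than the training font size,
# shrink the image before segmentation, then scale the mask back up.
img = cv2.imread('page.png')
small = cv2.resize(img, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)
# ... run the model on `small`, then resize the predicted mask back:
# mask = cv2.resize(mask, (img.shape[1], img.shape[0]),
#                   interpolation=cv2.INTER_NEAREST)
```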