About the AnimeGANv2 online demo website
Hi, I have something to tell you. Since open-source AnimeGANv2 has no accompanying paper, I recently built a technical description page for it with GitHub Pages, but found that GitHub Pages cannot be accessed reliably from China. I have therefore decided not to build an online demo website for AnimeGANv2 myself. If it is convenient for you, I hope you can implement one the way you did for v1.
I'd definitely like to help. I would host both on GitHub (at animegan.js.org) and on Gitee. If you'd like, we could also discuss the details over email; I may have some implementation questions. One more thing: you're at Tencent, right? I just saw that you updated your profile.
I'm not at Tencent; I failed to get in when I graduated. If you have any questions to discuss, please feel free to contact me via email. Sincere thanks.
I'm trying to use AnimeGANv2 with tf.js and found that it runs slower than the original version. Do you see the same on your end? It fails on my 450x450 test image, while the original version handles it.
In addition, I see that you have a lite version. Do you have checkpoints for it that I can convert to tf.js format? I think that would help people, since it would be faster.
tf.js currently uses about 2 GB of RAM on my side, which is more than many users can spare. I found that numBytesInGPUAllocated is much higher than numBytesInGPU, which suggests that most of that memory is allocated but not actually in use. Do you see the same on your side with tf.js?
Regarding the lite version, I don't have its checkpoints. I have tried it and the results are poor, because the generator still needs more parameters to produce good output. Also, the v2 generator is very fast on my side, and I can use a larger batch size during training, so you may need to check and debug a bit. Post-processing such as brightness adjustment can simply be removed.
Is it fast on tf.js or just on TF? I think it's slow on tf.js because it uses a lot of memory and then has to swap to disk. If you're interested, you could try https://github.com/TonyLianLong/AnimeGAN.js/tree/animegan_v2. On my side it allocates a lot of memory but doesn't seem to actually use it:
{
numBytes: 10788480,
numBytesInGPU: 11326224,
numBytesInGPUAllocated: 2104447456,
numBytesInGPUFree: 2090692048,
numDataBuffers: 89,
numTensors: 89
}
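For reference, a minimal sketch (my own, not from the thread) of where numbers like these come from: tf.memory() on the WebGL backend reports the extra numBytesInGPUAllocated / numBytesInGPUFree fields in addition to the standard counters.

import * as tf from '@tensorflow/tfjs';

// Log the memory counters shown above. The *Allocated/*Free fields are
// only present when the active backend is 'webgl'.
function logGpuMemory(label) {
  const m = tf.memory();
  console.log(label, {
    numBytes: m.numBytes,
    numBytesInGPU: m.numBytesInGPU,
    numBytesInGPUAllocated: m.numBytesInGPUAllocated,
    numBytesInGPUFree: m.numBytesInGPUFree,
    numDataBuffers: m.numDataBuffers,
    numTensors: m.numTensors,
  });
}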
tf.js does not actually train; it only runs inference. Do you apply post-processing at inference time? My generated images actually come out a little washed out (whitish).

I used the tool I provide in the v2 repo to convert the model to tfjs format, although I haven't tried running it yet. If you need it, you can use that tool for the conversion, or try the conversion method used for the previous original version. I don't know why your side consumes so much memory; it seems strange. For pure inference with the tfjs model, no additional post-processing is needed.
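To illustrate that last point, here is a hedged sketch of running the converted generator in the browser with no post-processing at all, just mapping the output back to pixels; the ./model/model.json path and the [-1, 1] input/output range are my assumptions, not confirmed details of the converted checkpoint.

import * as tf from '@tensorflow/tfjs';

// Stylize an <img> element with a converted graph model and draw the raw
// generator output to a canvas, with no brightness adjustment or other
// post-processing.
async function stylize(imgElement, canvas) {
  const model = await tf.loadGraphModel('./model/model.json'); // assumed path
  const output = tf.tidy(() => {
    const input = tf.browser.fromPixels(imgElement)
      .toFloat().div(127.5).sub(1)   // assumed [0,255] -> [-1,1] preprocessing
      .expandDims(0);
    return model.predict(input)
      .squeeze()
      .add(1).div(2)                 // assumed [-1,1] -> [0,1] for display
      .clipByValue(0, 1);
  });
  await tf.browser.toPixels(output, canvas);
  output.dispose();
}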

Yes, I think the process I use is similar. The WEBGL_DELETE_TEXTURE_THRESHOLD flag in tf.js may be the cause. However, I can only get medium-size images working. I've put my converted checkpoints in https://github.com/TonyLianLong/AnimeGAN.js/tree/animegan_v2; if you have time, you could try it or swap in the checkpoint you have.
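If it helps with debugging, one thing worth trying (an assumption on my part about the cause, not a confirmed fix) is telling the WebGL backend to free deleted textures immediately instead of pooling them, which is what that flag controls:

import * as tf from '@tensorflow/tfjs';

// With a threshold of 0, textures backing disposed tensors are deleted
// right away rather than kept in the backend's reuse pool; exact defaults
// and semantics depend on the tfjs version.
tf.env().set('WEBGL_DELETE_TEXTURE_THRESHOLD', 0);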
OK, I will reply with my test results over the weekend.
I think something may have gone wrong during the conversion process. I tried your tfjs model in the browser, and converting the sample image is very slow. When I swapped in my own tfjs model, an error occurred after I clicked on the page. I convert directly with tensorflowjs 2.7.0, which already supports the MirrorPad op, so no custom implementation is needed. I suspect it is a version problem; besides, I did not see GPU memory being heavily used.
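For what it's worth, a quick way to confirm that the installed tfjs build really does provide MirrorPad natively (a sanity check I'm suggesting, not something from the thread) is to call the op directly:

import * as tf from '@tensorflow/tfjs';

// If tf.mirrorPad runs, the converted graph's MirrorPad nodes should not
// need a custom op implementation.
const x = tf.tensor2d([[1, 2], [3, 4]]);
const padded = tf.mirrorPad(x, [[1, 1], [1, 1]], 'reflect');
padded.print(); // 4x4 reflect-padded result
x.dispose();
padded.dispose();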