Thank you.
@alantian, with the help of your code I was able to implement the GAN for Open Frameworks / Emscripten. Here is my code (the GAN part is basically yours, just adapted a little): https://github.com/Jonathhhan/ofEmscriptenExamples/tree/main/emscriptenGanAnimation And here is my example: https://gan.handmadeproductions.de/ It takes some time to generate, but then it runs surprisingly well. Thank you very much. Do you want to be mentioned in the code or on GitHub?
Edit: Actually (at least with an endless animation) there seems to be a memory leak issue. Not sure if that is because of my changes to the code...
I think I found the solution to my issue: disposing the tensor after tf.browser.toPixels()...
```js
await tf.browser.toPixels(y, c);
tf.dispose(y);
```
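For context, my draw loop now looks roughly like this (just a sketch; `model` and `z` are placeholders for my actual generator and latent input, and `c` is the canvas):

```js
// Sketch of one leak-free animation step. tf.tidy() releases every
// intermediate tensor created inside its callback, and the output tensor
// is disposed manually once it has been drawn to the canvas.
async function renderFrame(model, z, c) {
  const y = tf.tidy(() => model.predict(z).squeeze()); // [H, W, 3] frame
  await tf.browser.toPixels(y, c); // draw the frame
  tf.dispose(y);                   // free the output tensor itself
}
```

A handy way to check for leaks is to log `tf.memory().numTensors` every few frames; with the dispose in place it should stay constant instead of growing.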
Hey, I am glad my work helps (although I haven't touched this repo for a few years 😜), and I'm looking forward to your future work.
The only gotcha is that the model weights are hosted on Google Cloud Storage in my bucket. It's okay for now, but if you encounter heavy traffic, please consider hosting the weights somewhere else.
> Thank you very much. Do you want to be mentioned in the code or on GitHub?
Yes, I am happy to have this code cited.
> The only gotcha is that the model weights are hosted on Google Cloud Storage in my bucket.
Thanks. I understand, and I had not thought about that (so the first time I load the page with the ResNet model, I need to download 252 MB from the server?). Is it possible to download the weights once in order to host them elsewhere (it would also be possible to load them locally with Emscripten)? Maybe I should switch to the smaller model because of that (though the quality of the big ones is quite good)... And I guess the download size does not depend on the length of the animation, only on the size of the model?
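In case it matters: what I have in mind is just pointing the loader at a different base URL, something like this (hypothetical URL and path, and assuming the checkpoint is a tf.js layers model):

```js
// Hypothetical self-hosted copy of the weights. tf.js fetches model.json
// and then the weight shards it references, relative to the same base URL,
// so copying the whole model directory to the new host should be enough.
const model = await tf.loadLayersModel(
  'https://gan.handmadeproductions.de/models/resnet/model.json'
);
```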
Would it be fine with you if I hosted your weights on my Hugging Face account (I just opened one for another experiment)? https://huggingface.co/Jona0123456789 And another question: would it be possible to quantize those weights (I was able to reduce another model to 25% of its size with --quantize_uint8)?
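For reference, this is the kind of converter call that worked for me on the other model (paths are placeholders, and I am again assuming the checkpoint is a tf.js layers model):

```sh
# Re-save an existing tf.js layers model with uint8-quantized weights;
# on the other model this cut the download size to roughly a quarter.
tensorflowjs_converter \
  --input_format=tfjs_layers_model \
  --output_format=tfjs_layers_model \
  --quantize_uint8 \
  path/to/original/model.json \
  path/to/quantized
```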