Takuya Yashima
Sorry for my late reply. You can get only the generated video by passing `--only-generated`.
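In a Colab cell or a local shell, that would look roughly like this (just a sketch: only `--only-generated` is the actual flag, while the script name and the other arguments are assumptions for illustration):

```python
import subprocess

# Hypothetical invocation of the generation script; only --only-generated
# is taken from the reply above. The script name and the other arguments
# are assumed for illustration.
subprocess.run(
    [
        "python", "generate.py",           # assumed script name
        "--input-dir", "frames/low_res",   # assumed argument
        "--output-dir", "results",         # assumed argument
        "--only-generated",                # output only the generated video
    ],
    check=True,
)
```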
Online videos (from [Vimeo](https://vimeo.com/)) were used. You can check the details, such as which videos were chosen, in [this data downloading script](https://github.com/sony/nnabla-examples/blob/master/video-superresolution/tecogan/authors_scripts/dataPrepare.py#L49).
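For reference, the core of such a download step boils down to something like this (an illustrative sketch only, not the actual script; the video ID is a placeholder and I'm assuming `youtube-dl` is installed):

```python
import subprocess

# Sketch of fetching one Vimeo clip for the training data. The ID below is
# a placeholder; see dataPrepare.py (linked above) for the real list of
# videos and the actual download logic.
video_id = "00000000"  # placeholder, not a real entry from the script
subprocess.run(
    ["youtube-dl", "-o", f"raw_videos/{video_id}.mp4",
     f"https://vimeo.com/{video_id}"],
    check=True,
)
```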
Thanks for using our Colab demo. Unfortunately the link is not working properly, so we can't tell which cell you mean, but if you mean the cell for simple image...
Sorry for the late reply! You might have already found a solution, but I hope this helps (it's a very primitive way, though). It should work on Colab (or they provide some convenient...
Hi, thanks for trying our demo. Regarding the 2nd question:

> Also, how can it be used to generate with ted384 model and yaml? Thanks

I suppose that you mean...
Here's one way to do that (a rough sketch follows below):

0. Run the cells up to the one where you download the pretrained weights.
1. Instead of uploading one single image, make directories for input and output images...
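A rough sketch of that directory setup in a Colab cell (the directory names are just examples, and `run_inference` is a placeholder for whatever inference call the demo actually uses):

```python
import os
from glob import glob

# Make one directory for the input images and one for the results
# (the directory names are examples, not taken from the demo itself).
os.makedirs("input_images", exist_ok=True)
os.makedirs("output_images", exist_ok=True)

# After uploading several images into input_images/, loop over them and
# run the demo's inference step on each one.
for in_path in sorted(glob("input_images/*.png")):
    out_path = os.path.join("output_images", os.path.basename(in_path))
    print(f"would process {in_path} -> {out_path}")
    # run_inference(in_path, out_path)  # placeholder for the demo's call
```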
Also, I found that the Colab demo currently does not work as is: `--checkpoint` is obsolete and must be changed to `--trained_model_path`, probably due to [this commit](https://github.com/sony/nnabla-examples/commit/124d22db74696ad2922a5b0d1fe37136a3d19f81).
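Concretely, the affected cell needs something along these lines (a sketch: only the flag rename comes from the commit above, while the script name and weight file are assumptions):

```python
import subprocess

# Pass --trained_model_path instead of the obsolete --checkpoint flag.
# The script name and weight file below are assumed for illustration.
subprocess.run(
    ["python", "generate.py",                     # assumed script name
     # "--checkpoint", "pretrained_weights.h5",   # old flag, no longer works
     "--trained_model_path", "pretrained_weights.h5"],
    check=True,
)
```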