Super-SloMo
Training
Training related questions: for training, is there a way to make it faster using multiple GPUs? Right now 200 epochs seem to take a long time, so parallelizing across GPUs may help. Also, have any tests been done to check whether 100 epochs are sufficient? Finally, are any training checkpoints available for other types of content, such as animation or screen-sharing content, in addition to the Adobe dataset?
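For reference, a minimal sketch of what multi-GPU data parallelism could look like in PyTorch, using `nn.DataParallel` to split each batch across visible GPUs. This is an illustration only: `TinyNet` is a placeholder module, not the actual Super-SloMo networks or training loop.

```python
# Hypothetical sketch: wrapping a model for multi-GPU training with
# torch.nn.DataParallel. TinyNet is a stand-in, not Super-SloMo's model.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Placeholder model standing in for the interpolation networks."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = TinyNet()
if torch.cuda.device_count() > 1:
    # Replicates the model on each GPU, splits the batch across them,
    # and gathers the outputs back on the default device.
    model = nn.DataParallel(model)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

x = torch.randn(4, 3, 64, 64, device=device)
out = model(x)
print(tuple(out.shape))  # (4, 3, 64, 64)
```

For larger runs, `torch.nn.parallel.DistributedDataParallel` generally scales better than `DataParallel`, since it avoids the single-process batching bottleneck.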