
how to change the demo.py for my own scenario

Open ElmiraNa opened this issue 2 years ago • 7 comments

Hi everyone. In my application the checkpoint file and dataset yaml are always the same, and I want to use the same driving video for all images; only the source image changes between runs. I think some results could be reused across executions for different images — for example, there should be no need to recreate `generator` and `kp_detector` in `load_checkpoints`, or `kp_driving` in `make_animation`, on every run.

I'm not familiar with PyTorch or other ML frameworks. How can I change the code to reuse these variables across executions and reduce the running time?

Thank you in advance.

ElmiraNa avatar Aug 08 '21 10:08 ElmiraNa

You can keep all the models plus `kp_driving` in memory.

For each source image you could also cache the encoding of the source.
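A minimal sketch of that caching idea in plain Python (the cache file name `kp_driving.pkl` and the compute callback are hypothetical stand-ins; in the real demo.py the keypoints come from `kp_detector` inside `make_animation`):

```python
import os
import pickle

def load_or_compute(path, compute_fn):
    """Return the object cached at `path`, computing and caching it on a miss."""
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    result = compute_fn()
    with open(path, "wb") as f:
        pickle.dump(result, f)
    return result

# Example: cache a (stand-in) kp_driving dictionary across runs.
kp_driving = load_or_compute("kp_driving.pkl", lambda: {"value": [0.1, 0.2]})
```

On the first run the keypoints are computed and written to disk; every later run just unpickles them, which matches the ~6 second saving reported below.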

AliaksandrSiarohin avatar Aug 08 '21 11:08 AliaksandrSiarohin

Thanks for your reply @AliaksandrSiarohin. I extracted `kp_driving` (a dictionary), saved it as a pickle file, and loading it reduced execution time by about 6 seconds, but there was no change in GPU usage. If you don't mind, please answer these questions:

  1. You said to save all the models, but I didn't get the point. We always load 'config/vox-256.yaml' and 'vox-cpk.pth.tar' as the checkpoint; what exactly should I do to save the model?

  2. In demo.py, line 77 contains: `predictions.append(np.transpose(out['prediction'].data.cpu().numpy(), [0, 2, 3, 1])[0])`, which takes about 16 seconds to run. What exactly does it do, and is there a way to reduce its running time?

ElmiraNa avatar Aug 09 '21 11:08 ElmiraNa

@AliaksandrSiarohin I really need your help with the questions above, please.

ElmiraNa avatar Aug 10 '21 07:08 ElmiraNa

  1. I mean load the models only once for all videos; if you already do that, then never mind.
  2. It moves each frame into a list so the frames can be saved to disk later.
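To make concrete what that line does: `out['prediction']` is a batch of one frame in NCHW layout (batch, channels, height, width), and `np.transpose(..., [0, 2, 3, 1])[0]` reorders it to NHWC and drops the batch axis, yielding a single height-by-width-by-channels image that `imageio` can write. A NumPy-only sketch with illustrative shapes:

```python
import numpy as np

# Stand-in for out['prediction'].data.cpu().numpy():
# a batch of one RGB frame of size 4x5 in NCHW layout.
batch_nchw = np.zeros((1, 3, 4, 5), dtype=np.float32)

# Reorder axes to NHWC, then take the single frame out of the batch.
frame_hwc = np.transpose(batch_nchw, [0, 2, 3, 1])[0]

print(frame_hwc.shape)  # (4, 5, 3): height, width, channels
```

The transpose itself is cheap (it only changes strides); the slow part of the original line is the `.cpu()` transfer discussed below.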

AliaksandrSiarohin avatar Aug 10 '21 07:08 AliaksandrSiarohin

Thank you so much for your quick answer, and excuse me if I'm bothering you with my questions. Does the `predictions.append(...)` line transfer data from the GPU to the CPU? It takes too much time; is there any way to do it faster?

ElmiraNa avatar Aug 10 '21 07:08 ElmiraNa

`.cpu()` transfers the data from GPU to CPU. You can try creating a PyTorch tensor `t`, accumulating all the frames there, and then calling `imageio.mimsave('q.mp4', t.cpu().numpy())`; a single transfer may be faster than one per frame.
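A sketch of that batching idea (assuming PyTorch is installed; `make_frame` is a hypothetical stand-in for the generator's per-frame output, and whether this is actually faster depends on the GPU and frame count):

```python
import torch

def collect_frames(num_frames, make_frame, device="cpu"):
    """Accumulate per-frame tensors on `device`, then move them to the
    CPU in a single device-to-host transfer instead of one per frame."""
    frames = [make_frame(i).to(device) for i in range(num_frames)]
    stacked = torch.stack(frames)  # shape: (num_frames, C, H, W)
    return stacked.cpu().numpy()   # single .cpu() call for the whole video

# Usage with dummy frames (replace make_frame with the real generator output,
# and device="cuda" when a GPU is available):
video = collect_frames(3, lambda i: torch.zeros(3, 4, 5))
print(video.shape)  # (3, 3, 4, 5)
```

The resulting array would still need the NHWC transpose from demo.py before being passed to `imageio.mimsave`.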

AliaksandrSiarohin avatar Aug 10 '21 07:08 AliaksandrSiarohin

Thanks again, I will let you know about the result.

ElmiraNa avatar Aug 10 '21 08:08 ElmiraNa