Zinuo Li

Results 17 comments of Zinuo Li

@h9419 Hi, I have the same question about improving the performance of `inference_video.py`. Even though I use a Tesla V100 for inference, it still runs very slowly. I use...

@h9419 On a side note, the `batch size` is set to 8, but the inference still takes me 15 minutes.

@h9419 Thank you for replying to me so quickly! I'll try it as soon as possible!

Hi @h9419. As I mentioned before, I used a Tesla V100 to run `video_inference` and it is slow too. I'm not sure if I can solve this by simply using...

@h9419 At least we know that hardware shouldn't be the bottleneck for inference on a Tesla V100. There must be something to improve.

Hi, the result in Table 5 was tested at 512 x 512 resolution. Here `ORI: True` means you are testing at 2K resolution, where every method will not perform...

Hi, no worries, let me retrain here and get back to you.

Hi, I retrained on my laptop (a 3080 Ti instead of an A6000) and got similar results to the paper. Performance may differ across devices, but at least it's not that far....

For SD7K the batch size is 4; the other settings are the same. You can also refer to our wandb log. We released a comprehensive log from training on SD7K.

Hi @baicenxiao, we will get back to you after 5 May, since another member of my team is on vacation right now. Let me discuss with him to see how we can...