pytorch-coviar
Optical flow
Hi, thanks for your suggestions. I can already reproduce your code, but I have a question about the fusion with optical flow.
My guess is that after training and testing the optical-flow BN-Inception network, you take the softmax scores from the optical-flow stream, give them a weight (like "wm, wi"), and then combine them with the compressed-domain scores. Is that correct?
Thank you again!
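If it helps, that kind of weighted late fusion can be sketched as below. The score arrays and weights here are placeholders for illustration, not values from the paper; in practice the per-stream softmax scores would come from testing each network separately:

```python
import numpy as np

# Placeholder softmax scores: 3 test videos, 5 classes, one array per stream.
# In the real pipeline these come from the I-frame, mv, residual, and
# optical-flow networks respectively.
rng = np.random.default_rng(0)
scores_iframe = rng.random((3, 5))
scores_mv = rng.random((3, 5))
scores_residual = rng.random((3, 5))
scores_flow = rng.random((3, 5))

# Placeholder fusion weights (e.g. w_i, w_m, w_r, w_f); the actual values
# would be tuned on a validation set.
weights = [1.0, 1.0, 1.0, 1.5]
streams = [scores_iframe, scores_mv, scores_residual, scores_flow]

# Weighted sum of per-stream scores, then argmax for the final prediction.
fused = sum(w * s for w, s in zip(weights, streams))
predictions = fused.argmax(axis=1)
```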
@JGyoung33 I want to execute the code inside a Docker container, but there are some problems. Can I ask you for advice?
Hi, can you describe your problem in more detail? I am not familiar with Docker containers.
I run the code using VS Code's debugging mode, and it's very useful.
@JGyoung33 The ffmpeg problem from the past few days is now solved. But now the code throws an exception at the following position in model.py:

```python
def get_augmentation(self):
    if self._representation in ['mv', 'residual']:
        scales = [1, .875, .75]
    else:
        scales = [1, .875, .75, .66]

    print('Augmentation scales:', scales)
    return torchvision.transforms.Compose(
        [GroupMultiScaleCrop(self._input_size, scales),
         GroupRandomHorizontalFlip(is_mv=(self._representation == 'mv'))])
```

exception:
Can you help me analyze the reasons?
Hello, did you solve this problem? I ran into the same problem during my first run of the code. Can you tell me the solution? Thank you very much!
Yes, I used more GPUs to train.
@JGyoung33 Thank you very much for your reply. Could I ask how to set the specific parameters? I also get the "can not open input stream" error when I run the code. I downloaded the ffmpeg installation package separately, used the `configure` command to generate the include and lib folders under my ffmpeg directory, then ran ./reencode.sh to generate mpeg4_videos, and ran `python train.py` after installing coviar. The code keeps failing, and I don't know what went wrong. Thank you very much.
Hi @Songyyan ,
Can you please share the exact commands you used? I'll see if I can spot anything. Did you follow all the steps in https://github.com/chaoyuaw/pytorch-coviar/blob/master/GETTING_STARTED.md#install exactly, for example setting $LD_LIBRARY_PATH etc.? Also, I'm wondering whether you got any error messages during those steps.
Thanks!
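For reference, the environment-variable step usually looks something like the snippet below. The path here is only a placeholder for wherever ffmpeg was built; the exact directory depends on your own install prefix, so follow GETTING_STARTED.md for the authoritative steps:

```shell
# Point the dynamic linker at the ffmpeg shared libraries built earlier.
# "$HOME/ffmpeg/lib" is a placeholder; substitute your actual ffmpeg prefix.
export LD_LIBRARY_PATH="$HOME/ffmpeg/lib:$LD_LIBRARY_PATH"

# Sanity check: confirm the variable is set before building/running coviar.
echo "$LD_LIBRARY_PATH"
```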
@chaoyuaw First of all, I encountered the "can not open the input stream" problem. I downloaded ffmpeg again and completed the installation according to the steps in the guide, which fixed reading the input stream. After that, I ran into "*** stack smashing detected ***: python terminated". I would like to ask what the output size of this network is. Is this problem caused by insufficient memory? How should I set the server's parameters, and how many servers did you use to run the code? Thank you very much!
Hello, could you please tell me whether I need to pre-train the model on ImageNet, as mentioned in the original paper, to get results similar to the paper's? Currently I can reproduce the code but get poor accuracy on the I-frame, mv, and residual streams, and I don't know how to deal with this. Thank you!