pytorch-vcii
Throws errors on Windows
When I run the command train.sh 2 in Git Bash, it produces the following output:

Namespace(batch_size=16, bits=8, checkpoint_iters=10000, clip=0.5, decoder_fuse_level=1, distance1=1, distance2=2, encoder_fuse_level=1, eval='data/eval', eval_batch_size=1, eval_iters=4500, eval_mv='data/eval_mv', fuse_encoder=True, gamma=0.5, gpus='0', iterations=10, load_iter=None, load_model_name=None, lr=0.00025, max_train_iters=100, model_dir='model', num_crops=2, out_dir='output', patch=64, save_codes=False, save_model_name='demo', save_out_img=True, schedule='50000,60000,70000,80000,90000', shrink=2, stack=True, train='data/train', train_mv='data/train_mv', v_compress=True, warp=True)
Creating loader for data/train...
448 images loaded.
distance=1/2
Loader for 448 images (28 batches) created.
Encoder fuse level: 1
Decoder fuse level: 1
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  ...
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
    run_name="__mp_main__")
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "G:\video compression\pytorch-vcii-master\train.py", line 111, in <module>
  ...
RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.

Traceback (most recent call last):
  ...
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe
I tried running train.py directly with all the necessary parameters (train, eval, distance1, distance2), but still no lead.
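For reference, the idiom the RuntimeError above is asking for looks roughly like this (a minimal, self-contained sketch; the dataset and loader arguments are invented for illustration, not taken from train.py):

import torch
from torch.utils.data import DataLoader, TensorDataset

def main():
    # Hypothetical stand-in for the real training data.
    dataset = TensorDataset(torch.randn(64, 3, 32, 32),
                            torch.randint(0, 10, (64,)))
    # num_workers > 0 starts worker processes; on Windows these are
    # spawned, and every spawned child re-imports this module.
    loader = DataLoader(dataset, batch_size=16, num_workers=2)
    for images, labels in loader:
        print(images.shape, labels.shape)

if __name__ == '__main__':
    # Only the launching process runs this block; re-imported workers
    # skip it, which prevents the recursive spawning the RuntimeError
    # complains about.
    main()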
I met the same issue; here is the link FYI: https://github.com/pytorch/pytorch/issues/5858

@roychen1998 Hello, thanks a lot for your reply. I have carefully read that issue, but I still face the same problem. I am trying to run the UCF101_3DCNN file from https://github.com/HHTseng/video-classification/tree/master/Conv3D, and I added
def run():
    torch.multiprocessing.freeze_support()
    print('loop')

if __name__ == '__main__':
    run()
at the beginning of the UCF101_3DCNN file, but I still have the same problem:
C:\Anaconda3\envs\pytorch1\python.exe D:/LSTM/study/video-classification-master/CRNN/UCF101_CRNN.py
loop
C:\Anaconda3\envs\pytorch1\lib\site-packages\sklearn\preprocessing\_encoders.py:415: FutureWarning: The handling of integer data will change in version 0.22. Currently, the categories are determined based on the range [0, max(values)], while in the future they will be determined based on the unique values.
If you want the future behaviour and silence this warning, you can specify "categories='auto'".
In case you used a LabelEncoder before this OneHotEncoder to convert the categories to integers, then you can now use the OneHotEncoder directly.
warnings.warn(msg, FutureWarning)
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  ...
RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.
Process finished with exit code 1
It seems that the same problem cannot be solved by this method. Could you please kindly help solve this?
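A likely reason the run() wrapper above does not help (an observation based on how the spawn start method works, not something confirmed in the thread): the guard only protects code that is actually inside it. If the DataLoader construction and the training loop still sit at module level, every spawned worker re-executes them on import, regardless of freeze_support(). A minimal sketch of the two layouts, with hypothetical data:

# Wrong: the loader and loop run at import time, so each spawned
# worker re-executes them and tries to start workers of its own.
#
#   loader = DataLoader(dataset, num_workers=2)
#   for batch in loader:
#       ...
#
# Right: everything that starts worker processes sits under the guard.
import torch
from torch.utils.data import DataLoader, TensorDataset

if __name__ == '__main__':
    dataset = TensorDataset(torch.randn(32, 3, 16, 16))  # hypothetical data
    loader = DataLoader(dataset, batch_size=8, num_workers=2)
    for (batch,) in loader:
        print(batch.shape)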
Hi, I had the same problem on Windows as well. I think it is caused by Windows starting child processes with spawn instead of fork. To fix it, add

if __name__ == '__main__':

at line 110 of train.py and move lines 111-225 under this indentation. train.py should then look like this:
if __name__ == '__main__':
    while True:
        for batch, (crops, ctx_frames, _) in enumerate(train_loader):
            scheduler.step()
            train_iter += 1
            ....
            ....
            if train_iter > args.max_train_iters:
                print('Training done.')
                break
Hope this fixes the issue for you as well.
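An equivalent, slightly tidier restructuring is to move the loop into a function and call it from the guard. The sketch below is hypothetical (the real train.py builds its loader, model, and scheduler in lines 111-225), but the control flow is the same:

import torch
from torch.utils.data import DataLoader, TensorDataset

def train(train_loader, max_train_iters):
    # Mirrors the loop above; train_iter counts processed batches.
    train_iter = 0
    while True:
        for batch, data in enumerate(train_loader):
            train_iter += 1
            # ... forward pass, loss, backward, optimizer step ...
            if train_iter > max_train_iters:
                print('Training done.')
                return  # return, not break: break would only exit the
                        # inner for loop, and the while would repeat

if __name__ == '__main__':
    # Hypothetical loader standing in for the one train.py builds.
    dataset = TensorDataset(torch.randn(64, 3, 64, 64))
    train_loader = DataLoader(dataset, batch_size=16, num_workers=2)
    train(train_loader, max_train_iters=100)

A function has the added benefit that nothing but definitions runs when a spawned worker re-imports the module, which is exactly what the spawn start method expects.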