
train_stage2 batchsize error

Open · FrankMelon1 opened this issue 4 years ago · 0 comments

Hello! When I run train_2.sh with batchSize set to 4, I get the following error:

```
Traceback (most recent call last):
  File "train_stage2.py", line 154, in <module>
    train()
  File "train_stage2.py", line 61, in train
    fg_tps, fg_dense, lo_tps, lo_dense, flow_tps, flow_dense, flow_totalp, real_input_1, real_input_2, real_SFG, real_SFG_fullp, flow_total_last = ClothWarper(input_TParsing, input_TFG, input_SParsing, input_SFG, input_SFG_full, flow_total_prev_last)
  File "/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/workspace/cpfs-data/C2F-FWN-main/models/models.py", line 69, in forward
    outputs = self.model(*inputs, **kwargs, dummy_bs=self.pad_bs)
  File "/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 141, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/workspace/cpfs-data/C2F-FWN-main/models/model_stage2.py", line 111, in forward
    warped_fg_tps, warped_fg_dense, warped_lo_tps, warped_lo_dense, flow_tps, flow_dense, flow_total = self.generate_frame_train(net, real_input_1, real_input_2, real_input_TFG, flow_total_prev, start_gpu, is_first_frame)
  File "/workspace/cpfs-data/C2F-FWN-main/models/model_stage2.py", line 142, in generate_frame_train
    = net.forward(real_input_1_reshaped, real_input_2_reshaped, real_input_3_reshaped, real_input_tfg_reshaped)
  File "/workspace/cpfs-data/C2F-FWN-main/models/networks.py", line 474, in forward
    feature_T_1 = self.model_down_target_1([feature_T_0, input_tlo_0, input_tlo_1])
  File "/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/anaconda3/lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/workspace/cpfs-data/C2F-FWN-main/models/dconv/modules/modulated_deform_conv.py", line 173, in forward
    sample_LO = torch.nn.functional.grid_sample(input_LO, sample_location, mode='bilinear', padding_mode='border')
  File "/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 2597, in grid_sample
    return torch.grid_sampler(input, grid, mode_enum, padding_mode_enum)
RuntimeError: grid_sampler(): expected grid and input to have same batch size, but got input with sizes [64, 3, 128, 96] and grid with sizes [256, 64, 48, 2]
```
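
For context, `torch.nn.functional.grid_sample` requires its `input` and `grid` arguments to share the same leading batch dimension. The failure can be reproduced in isolation with the shapes from the traceback; this is a minimal illustrative sketch, not the project's code:

```python
import torch
import torch.nn.functional as F

# Shapes copied from the RuntimeError above.
input_LO = torch.randn(64, 3, 128, 96)        # (N=64, C, H, W)
sample_location = torch.rand(256, 64, 48, 2)  # (N=256, H_out, W_out, 2)

# Raises the same "expected grid and input to have same batch size"
# error, because the batch dimensions (64 vs. 256) disagree.
out = F.grid_sample(input_LO, sample_location,
                    mode='bilinear', padding_mode='border')
```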

The contents of train_2.sh are shown below:

```sh
python train_stage2.py --name clothwarp_256p \
  --dataroot /workspace/cpfs-data/data/ICCV2021/SoloDance/SoloDance_upload/train --dataset_mode cloth --model cloth --nThreads 16 \
  --input_nc_T_2 4 --input_nc_S_2 3 --input_nc_P_2 10 --ngf 64 --n_downsample_warper 4 --label_nc_2 3 --grid_size 3 \
  --resize_or_crop scaleHeight --loadSize 256 --random_drop_prob 0 --color_aug \
  --gpu_ids 0 --n_gpus_gen 1 --batchSize 4 --max_frames_per_gpu 12 --display_freq 40 --print_freq 40 --save_latest_freq 1000 \
  --niter 5 --niter_decay 5 --n_scales_temporal 3 --n_frames_D 2 \
  --no_first_img --n_frames_total 12 --max_t_step 4 --tf_log --continue
```
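
For what it's worth, the grid's batch dimension (256) is exactly batchSize (4) times the input's (64), which suggests that one of the two tensors reaching grid_sample is reshaped with the batch folded into its leading dimension while the other is not. Below is a hedged sketch of the kind of shape alignment one might experiment with before the grid_sample call in modulated_deform_conv.py; whether tiling with repeat matches the model's intended sample ordering is an assumption, not the authors' fix:

```python
# Hypothetical shape alignment -- an assumption, not the maintainers' fix.
# Placed before the grid_sample call in modulated_deform_conv.py.
if input_LO.size(0) != sample_location.size(0):
    factor = sample_location.size(0) // input_LO.size(0)  # 256 // 64 == 4
    # Tile the batch so the shapes agree; repeat_interleave would be the
    # alternative if the grid is ordered sample-major rather than tile-major.
    input_LO = input_LO.repeat(factor, 1, 1, 1)

sample_LO = torch.nn.functional.grid_sample(
    input_LO, sample_location, mode='bilinear', padding_mode='border')
```

If the training loop was written assuming one sequence per GPU (as in vid2vid-style code, which this repo resembles), running with --batchSize 1, or with batchSize equal to the number of GPUs, might also sidestep the mismatch; that is a guess rather than a confirmed fix.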

Hoping for your reply!

FrankMelon1 · Feb 23 '21