Changing the batch_size in test_tryon.py has no effect on the results
Thank you for the wonderful work. I tried changing the batch_size in test_tryon.py during inference, but it had no effect on my results. I am running on a single GPU (1080 Ti) under Ubuntu 20.04 LTS, so I made a few changes to the code:
1. Deleted the distributed initialization:
torch.distributed.init_process_group('nccl', init_method='env://')
2. Removed the DistributedSampler:
train_sampler = DistributedSampler(train_data)
train_loader = DataLoader(train_data, batch_size=opt.batchSize, shuffle=False,
                          num_workers=4, pin_memory=True, sampler=train_sampler)
changed to
train_loader = DataLoader(train_data, batch_size=opt.batchSize, shuffle=False,
                          num_workers=4, pin_memory=True)
3. Dropped SyncBatchNorm and DistributedDataParallel:
gen_model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(gen_model).to(device)
if opt.isTrain and len(opt.gpu_ids):
    model_gen = torch.nn.parallel.DistributedDataParallel(gen_model, device_ids=[opt.local_rank])
changed to
model_gen = gen_model.to(device)
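Putting the three changes together, my single-GPU setup looks like this (a minimal sketch; train_data, opt, and gen_model are assumed to be defined as in the original script):

import torch
from torch.utils.data import DataLoader

device = torch.device('cuda:0')

# No DistributedSampler on a single GPU; batch_size is passed straight through.
train_loader = DataLoader(train_data, batch_size=opt.batchSize, shuffle=False,
                          num_workers=4, pin_memory=True)

# SyncBatchNorm and DistributedDataParallel need an initialized process group,
# so on one GPU the model is moved to the device directly.
model_gen = gen_model.to(device)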
Here is my .sh file. The results remain the same whether I set the batch size to 1 or 64:
python3 test_tryon_v2.py \
--name test_gpvtongen_vitonhd_unpaired_1109 \
--resize_or_crop None --verbose --tf_log \
--batchSize 64 \
--num_gpus 1 --label_nc 14 --launcher pytorch \
--PBAFN_gen_checkpoint 'checkpoints/gp-vton_gen_vitonhd_wskin_wgan_lrarms_1029/PBAFN_gen_epoch_201.pth' \
--dataroot ./dataset \
--image_pairs_txt test_pairs_unpaired_1018.txt \
--warproot ./sample/test_partflow_vitonhd_unpaired_1109
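To check whether the batch size actually reaches the DataLoader, I print the shape of the first batch (a quick sketch; 'image' is a placeholder for whatever key the dataset actually returns):

# The first dimension should equal opt.batchSize (or the remainder
# for the last, possibly smaller, batch).
for i, data in enumerate(train_loader):
    print(data['image'].shape)  # e.g. torch.Size([64, 3, H, W]) for batchSize 64
    break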
I also tried changing gen_model.train() to gen_model.eval(), and the result still remains the same regardless of how I modify the batch size.
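As far as I understand, this is because BatchNorm behaves differently in the two modes: in eval() it normalizes each sample with the stored running statistics, so a sample's output does not depend on what else is in the batch, while in train() it uses the current batch's statistics, so the output can change with batch composition. A self-contained check (sketch, plain PyTorch, not the repo's code):

import torch
import torch.nn as nn

torch.manual_seed(0)
bn = nn.BatchNorm2d(3)
x = torch.randn(64, 3, 8, 8)

bn.eval()   # running statistics: per-sample output is batch-independent
print(torch.allclose(bn(x)[:1], bn(x[:1])))   # True

bn.train()  # current-batch statistics: output depends on the whole batch
print(torch.allclose(bn(x)[:1], bn(x[:1])))   # False in general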
"I think the problem might be that the batch size wasn't properly changed. Can someone help me resolve this issue? Thanks~