Hello,
I am a deep learning newbie currently testing your model on a dataset as part of an MS project proposal. When I run it with the provided code, I get this:
/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:333: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.
"Argument interpolation should be of type InterpolationMode instead of int. "
model [IRWGANModel] was created
Information
task: dibcoirwgan
phase: test
gan_type: lsgan
netD: gl
trainA_size: 93
trainB_size: 84
testA_size: 16
testB_size: 16
Weight
lambda_A: 10
lambda_B: 10
lambda_identity: 1
Model Specific
beta_mode: AB
threshold: 0.1
batch_size: 20
lambda_nos_A: 1
lambda_nos_B: 1
-------------- Networks loaded ----------------
[Network gen_a2b] Total number of parameters : 11.379 M
[Network gen_b2a] Total number of parameters : 11.379 M
[Network dis_a] Total number of parameters : 13.969 M
[Network dis_b] Total number of parameters : 13.969 M
[Network beta_net_a] Total number of parameters : 2.757 M
[Network beta_net_b] Total number of parameters : 2.757 M
[*] testing start!
Traceback (most recent call last):
File "main.py", line 117, in &lt;module&gt;
test(model, opt, test_loader_a, test_loader_b)
File "main.py", line 38, in test
dict_a, dict_b = misc.test_fid(test_loader_a, model.gen_a2b, test_loader_b, model.gen_b2a, model.run_dir, opt)
File "/content/drive/MyDrive/Colab Notebooks/IrwGAN/models/misc.py", line 125, in test_fid
metric_dict_AB = torch_fidelity.calculate_metrics(input1=real_b_path, input2=fake_b_path, **eval_args)
File "/usr/local/lib/python3.7/dist-packages/torch_fidelity/metrics.py", line 239, in calculate_metrics
featuresdict_1 = extract_featuresdict_from_input_id_cached(1, feat_extractor, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch_fidelity/utils.py", line 372, in extract_featuresdict_from_input_id_cached
featuresdict = fn_recompute()
File "/usr/local/lib/python3.7/dist-packages/torch_fidelity/utils.py", line 360, in fn_recompute
return extract_featuresdict_from_input_id(input_id, feat_extractor, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch_fidelity/utils.py", line 342, in extract_featuresdict_from_input_id
input = prepare_input_from_id(input_id, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch_fidelity/utils.py", line 275, in prepare_input_from_id
return prepare_input_from_descriptor(input_desc, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch_fidelity/utils.py", line 230, in prepare_input_from_descriptor
vassert(len(input) > 0, f'No samples found in {input} with samples_find_deep={samples_find_deep}')
File "/usr/local/lib/python3.7/dist-packages/torch_fidelity/helpers.py", line 9, in vassert
raise ValueError(message)
ValueError: No samples found in [] with samples_find_deep=False
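From what I can tell, torch_fidelity raises this when the input path it is given resolves to zero sample files. Here is the quick sanity check I used (the path below is a placeholder for whatever real_b_path / fake_b_path resolve to in misc.py); if the count comes back 0, calculate_metrics would fail exactly like above:

```python
import glob
import os

def count_samples(path, exts=("png", "jpg", "jpeg", "bmp")):
    """Count image files directly inside `path`.

    The traceback shows samples_find_deep=False, so torch_fidelity
    only looks at the top level of the directory, not subfolders.
    """
    files = []
    for ext in exts:
        files.extend(glob.glob(os.path.join(path, "*." + ext)))
    return len(files)

# e.g. count_samples("./results/dibcoirwgan/test/fake_b")  # placeholder path
```

In my case I'm not sure which directory ends up empty, which is why I'm asking here.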
Has anyone ever come across this?