These errors occur on the master-branch state of DenseNet, both before and after setting batch norm to `training=False`.
```
======================================================== ERRORS ========================================================
_____________________________ ERROR collecting ivy_models_tests/densenet/test_densenet.py ______________________________
ivy_models_tests/densenet/test_densenet.py:28: in <module>
    model = VARIANTS[model_var]
ivy_models/densenet/densenet.py:180: in densenet161
    w_clean = load_torch_weights(
ivy_models/helpers/weights_helpers.py:144: in load_torch_weights
    weights = torch.hub.load_state_dict_from_url(url, map_location=map_location)
/home/vscode/.local/lib/python3.8/site-packages/torch/hub.py:750: in load_state_dict_from_url
    return torch.load(cached_file, map_location=map_location)
/home/vscode/.local/lib/python3.8/site-packages/torch/serialization.py:815: in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
/home/vscode/.local/lib/python3.8/site-packages/torch/serialization.py:1043: in _legacy_load
    result = unpickler.load()
/home/vscode/.local/lib/python3.8/site-packages/torch/_tensor.py:410: in __setstate__
    self.set_(*state)
E   RuntimeError: Attempted to set the storage of a tensor on device "cuda:0" to a storage on different device "cpu". This is no longer allowed; the devices must match.
=============================================== short test summary info ================================================
ERROR ivy_models_tests/densenet/test_densenet.py - RuntimeError: Attempted to set the storage of a tensor on device "cuda:0" to a storage on different device "cpu". ...
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
=================================================== 1 error in 10.26s ==================================================
```
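For context, this `RuntimeError` arises when a checkpoint pickled with CUDA storages is restored in a setting where the storages are not remapped to the target device. A minimal sketch (not the ivy_models code; the dict key and tensor here are illustrative) of how passing `map_location` to `torch.load` remaps storages to CPU:

```python
import io
import torch

# Serialize a small state dict to an in-memory buffer (stands in for a
# downloaded checkpoint; no network access needed for this sketch).
buf = io.BytesIO()
torch.save({"w": torch.ones(2, 2)}, buf)
buf.seek(0)

# map_location forces every storage in the checkpoint onto the CPU,
# avoiding the "devices must match" error when CUDA is unavailable or
# when the checkpoint was saved on a different device.
state = torch.load(buf, map_location=torch.device("cpu"))
print(state["w"].device)
```

In practice this suggests checking that the `map_location` argument actually reaches the `torch.load` call inside `load_torch_weights`, since `torch.hub.load_state_dict_from_url` forwards it for exactly this purpose.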
One more error appears on another run of the tests.
