STEPS
RuntimeError: [enforce fail at ..\c10\core\CPUAllocator.cpp:75] data. DefaultCPUAllocator: not enough memory: you tried to allocate 9437184 bytes. Buy new RAM!
Hello, when I run test_ns.sh, it reports an error while running the test_nuscenes_disp.py file. The error message is as follows:
……
File "D:\software\anaconda3\envs\STEPS\lib\site-packages\mmcv\utils\registry.py", line 234, in build
return self.build_func(*args, **kwargs, registry=self)
File "D:\Projects\Depth Estimation\STEPS\models\registry.py", line 6, in _build_func
return registry.get(name)(option)
File "D:\Projects\Depth Estimation\STEPS\models\rnw.py", line 88, in __init__
self.opt.day_check_point
File "D:\Projects\Depth Estimation\STEPS\models\rnw.py", line 26, in build_disp_net
model: pytorch_lightning.LightningModule = MODELS.build(name=option.model.name, option=option)
File "D:\software\anaconda3\envs\STEPS\lib\site-packages\mmcv\utils\registry.py", line 234, in build
return self.build_func(*args, **kwargs, registry=self)
File "D:\Projects\Depth Estimation\STEPS\models\registry.py", line 6, in _build_func
return registry.get(name)(option)
File "D:\Projects\Depth Estimation\STEPS\models\rnw.py", line 88, in __init__
self.opt.day_check_point
File "D:\Projects\Depth Estimation\STEPS\models\rnw.py", line 26, in build_disp_net
model: pytorch_lightning.LightningModule = MODELS.build(name=option.model.name, option=option)
File "D:\software\anaconda3\envs\STEPS\lib\site-packages\mmcv\utils\registry.py", line 234, in build
return self.build_func(*args, **kwargs, registry=self)
File "D:\Projects\Depth Estimation\STEPS\models\registry.py", line 6, in _build_func
return registry.get(name)(option)
File "D:\Projects\Depth Estimation\STEPS\models\rnw.py", line 63, in __init__
self.G = DispNet(self.opt)
File "D:\Projects\Depth Estimation\STEPS\models\disp_net.py", line 18, in __init__
self.DepthEncoder = DispEncoder(self.opt.depth_num_layers, pre_trained=True)
File "D:\Projects\Depth Estimation\STEPS\models\disp_encoder.py", line 63, in __init__
backbone = build_backbone(num_layers, pre_trained)
File "D:\Projects\Depth Estimation\STEPS\models\disp_encoder.py", line 51, in build_backbone
loaded = load_pretrained_weights('resnet{}'.format(num_layers), map_location='cpu')
File "D:\Projects\Depth Estimation\STEPS\components\resnet_backbone.py", line 17, in load_pretrained_weights
state_dict = torch.load(_MODEL_URLS[name], map_location=map_location)
File "D:\software\anaconda3\envs\STEPS\lib\site-packages\torch\serialization.py", line 593, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "D:\software\anaconda3\envs\STEPS\lib\site-packages\torch\serialization.py", line 747, in _legacy_load
return legacy_load(f)
File "D:\software\anaconda3\envs\STEPS\lib\site-packages\torch\serialization.py", line 678, in legacy_load
obj = storage_type._new_with_file(f)
RuntimeError: [enforce fail at ..\c10\core\CPUAllocator.cpp:75] data. DefaultCPUAllocator: not enough memory: you tried to allocate 9437184 bytes. Buy new RAM!
How can I fix this?
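One detail worth noting: 9437184 bytes is 9 * 1024 * 1024, i.e. only 9 MiB, so this may not be a true out-of-RAM condition; on Windows, DefaultCPUAllocator failures of this kind can also come from an exhausted page file / commit limit rather than physical memory. A minimal stdlib-only sanity check (no torch involved) that a 9 MiB allocation succeeds in isolation:

```python
# The allocation that failed inside torch.load is 9437184 bytes.
size = 9437184
print(size == 9 * 1024 * 1024)  # True: exactly 9 MiB

# On a healthy process this succeeds instantly; if even this fails,
# the process itself is out of address space or commit charge.
buf = bytearray(size)
print(len(buf))  # 9437184
```

If the check passes in a fresh interpreter but the test script still fails, the memory pressure is coming from everything already loaded before `load_pretrained_weights` runs, not from the checkpoint itself.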
For reference, I attach my test_ns.sh file here:
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=7 python test_nuscenes_disp.py night steps_ns /best/ns_denoise_best.ckpt --test 1
cd evaluation
python eval_nuscenes.py night
cd ..