
cp.pth

vcvishal opened this issue 6 years ago · 3 comments

Where can I find cp.pth?

File "train_model.py", line 143, in train() File "train_model.py", line 74, in train net.load_state_dict(torch.load('cp.pth')) File "D:\miniconda\lib\site-packages\torch\serialization.py", line 382, in load f = open(f, 'rb') FileNotFoundError: [Errno 2] No such file or directory: 'cp.pth'

vcvishal · Jul 02 '19 10:07

cp.pth contains the weights saved from previous training cycles. If you are starting afresh, either:

  1. download the weights of the pre-trained network (for transfer learning), OR
  2. comment out that line and initialise the weights using your preferred initialisation technique (a sketch of both options follows below).
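
A minimal sketch of both options, using a stand-in model (the real network class and its constructor live in build_model.py, so the names below are placeholders):

    import torch
    import torch.nn as nn

    # Stand-in model: replace with the network that build_model.py actually constructs.
    net = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1),
        nn.BatchNorm2d(16),
        nn.ReLU(inplace=True),
        nn.Conv2d(16, 1, kernel_size=1),
    )

    # Option 1: transfer learning -- load downloaded pre-trained weights.
    # 'cp.pth' must exist on disk first, which is what the FileNotFoundError above is about.
    # net.load_state_dict(torch.load('cp.pth'))

    # Option 2: skip the checkpoint entirely and initialise the weights yourself,
    # e.g. Kaiming initialisation for the conv layers, constants for batch norm.
    def init_weights(m):
        if isinstance(m, nn.Conv2d):
            nn.init.kaiming_normal_(m.weight, nonlinearity='relu')
            if m.bias is not None:
                nn.init.zeros_(m.bias)
        elif isinstance(m, nn.BatchNorm2d):
            nn.init.ones_(m.weight)
            nn.init.zeros_(m.bias)

    net.apply(init_weights)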

SConsul · Jul 02 '19 11:07

Thank you for your reply. Now I always run out of memory:

Traceback (most recent call last):
  File "train_model_segnet.py", line 267, in <module>
    train()
  File "train_model_segnet.py", line 210, in train
    outputs = net(images)
  File "C:\Users\vcvis\Miniconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\vcvis\Desktop\Lung-Segmentation-master\VGG_UNet\code\build_model.py", line 117, in forward
    x42d = F.relu(self.bn42d(self.conv42d(x43d)))
  File "C:\Users\vcvis\Miniconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\vcvis\Miniconda3\lib\site-packages\torch\nn\modules\conv.py", line 338, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: CUDA out of memory. Tried to allocate 24.00 MiB (GPU 0; 4.00 GiB total capacity; 2.85 GiB already allocated; 16.80 MiB free; 70.66 MiB cached)

Please help, thank you.

vcvishal · Jul 02 '19 17:07
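
A common first step for a "CUDA out of memory" error like the one above is to lower the batch size or fall back to the CPU. A minimal sketch, assuming a placeholder dataset and DataLoader (the real script builds its own from the lung X-ray data, so every name below is illustrative):

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # Placeholder data standing in for the prepared training images and masks.
    images = torch.randn(8, 1, 256, 256)
    masks = torch.randint(0, 2, (8, 1, 256, 256)).float()
    dataset = TensorDataset(images, masks)

    # On a 4 GiB GPU a VGG-UNet fills memory quickly; batch_size=1 is the usual first fix.
    loader = DataLoader(dataset, batch_size=1, shuffle=True)

    # Falling back to the CPU avoids the GPU limit entirely, at the cost of speed.
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    # Releasing cached allocator blocks between iterations can also help on small GPUs.
    if torch.cuda.is_available():
        torch.cuda.empty_cache()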

Please guide me; even with 2 images I get this:

D:\useful in other way\Lung-Segmentation-master\VGG_UNet\code>python train_model_Unet.py
preparing training data ...
done ...
  0%| | 0/75 [00:00<?, ?it/s]
0it [00:00, ?it/s]
Traceback (most recent call last):
  File "train_model_Unet.py", line 267, in <module>
    train()
  File "train_model_Unet.py", line 210, in train
    outputs = net(images)
  File "C:\Users\vcvis\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "D:\useful in other way\Lung-Segmentation-master\VGG_UNet\code\build_model.py", line 170, in forward
    x = self.up3(x, x2)
  File "C:\Users\vcvis\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "D:\useful in other way\Lung-Segmentation-master\VGG_UNet\code\unet_parts.py", line 71, in forward
    x = self.conv(x)
  File "C:\Users\vcvis\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "D:\useful in other way\Lung-Segmentation-master\VGG_UNet\code\unet_parts.py", line 24, in forward
    x = self.conv(x)
  File "C:\Users\vcvis\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\vcvis\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\container.py", line 92, in forward
    input = module(input)
  File "C:\Users\vcvis\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\vcvis\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\conv.py", line 338, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: [enforce fail at ..\c10\core\CPUAllocator.cpp:62] data. DefaultCPUAllocator: not enough memory: you tried to allocate %dGB. Buy new RAM!2

Thank you.

vcvishal · Jul 04 '19 07:07