Problem description / Please describe your issue
I'm training the SETR model on a custom dataset, but it keeps throwing the same error even though the GPU has enough memory. I've tried reducing the batch size, but nothing seems to work. The complete error message is below.
Traceback (most recent call last):
File "train.py", line 232, in
main(args)
File "train.py", line 208, in main
train(
File "/home/usr/SETR/PaddleSeg/paddleseg/core/train.py", line 206, in train
logits_list = ddp_model(images) if nranks > 1 else model(images)
File "/home/usr/.local/lib/python3.8/site-packages/paddle/fluid/dygraph/layers.py", line 930, in call
return self._dygraph_call_func(*inputs, **kwargs)
File "/home/usr/.local/lib/python3.8/site-packages/paddle/fluid/dygraph/layers.py", line 915, in _dygraph_call_func
outputs = self.forward(*inputs, **kwargs)
File "/home/usr/SETR/PaddleSeg/paddleseg/models/setr.py", line 93, in forward
feats, _shape = self.backbone(x)
File "/home/usr/.local/lib/python3.8/site-packages/paddle/fluid/dygraph/layers.py", line 930, in call
return self._dygraph_call_func(*inputs, **kwargs)
File "/home/usr/.local/lib/python3.8/site-packages/paddle/fluid/dygraph/layers.py", line 915, in _dygraph_call_func
outputs = self.forward(*inputs, **kwargs)
File "/home/usr/SETR/PaddleSeg/paddleseg/models/backbones/vision_transformer.py", line 276, in forward
x = blk(x)
File "/home/usr/.local/lib/python3.8/site-packages/paddle/fluid/dygraph/layers.py", line 930, in call
return self._dygraph_call_func(*inputs, **kwargs)
File "/home/usr/.local/lib/python3.8/site-packages/paddle/fluid/dygraph/layers.py", line 915, in _dygraph_call_func
outputs = self.forward(*inputs, **kwargs)
File "/home/usr/SETR/PaddleSeg/paddleseg/models/backbones/vision_transformer.py", line 119, in forward
x = x + self.drop_path(self.attn(self.norm1(x)))
File "/home/usr/.local/lib/python3.8/site-packages/paddle/fluid/dygraph/layers.py", line 930, in call
return self._dygraph_call_func(*inputs, **kwargs)
File "/home/usr/.local/lib/python3.8/site-packages/paddle/fluid/dygraph/layers.py", line 915, in _dygraph_call_func
outputs = self.forward(*inputs, **kwargs)
File "/home/usr/SETR/PaddleSeg/paddleseg/models/backbones/vision_transformer.py", line 77, in forward
attn = (q.matmul(k.transpose((0, 1, 3, 2)))) * self.scale
File "/home/usr/.local/lib/python3.8/site-packages/paddle/fluid/dygraph/math_op_patch.py", line 217, in impl
return scalar_method(self, other_var)
File "/home/usr/.local/lib/python3.8/site-packages/paddle/fluid/dygraph/math_op_patch.py", line 197, in scalar_mul
return scalar_elementwise_op(var, value, 0.0)
File "/home/usr/.local/lib/python3.8/site-packages/paddle/fluid/dygraph/math_op_patch.py", line 124, in scalar_elementwise_op
return _C_ops.scale(var, 'scale', scale, 'bias', bias)
SystemError: (Fatal) Operator scale raises an paddle::memory::allocation::BadAlloc exception.
The exception content is
:ResourceExhaustedError:
Out of memory error on GPU 0. Cannot allocate 1.001954GB memory on GPU 0, 11.685364GB memory has been allocated and available memory is only 230.125000MB.
Please check whether there is any other process using GPU 0.
- If yes, please stop them, or start PaddlePaddle on another GPU.
- If no, please decrease the batch size of your model.
If the above ways do not solve the out of memory problem, you can try to use CUDA managed memory. The command is
export FLAGS_use_cuda_managed_memory=false.
(at /paddle/paddle/fluid/memory/allocation/cuda_allocator.cc:87)
. (at /paddle/paddle/fluid/imperative/tracer.cc:307)
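The remedies the message itself suggests can be checked before touching the model: make sure no other process is holding GPU 0, and optionally enable CUDA managed memory so oversized allocations can spill into host RAM. Below is a minimal sketch (not an official recipe) of setting the flag from Python, assuming the flag has to be in the environment before paddle is imported and that you are on a reasonably recent Paddle 2.x; note that the value that actually enables the feature is true, even though the log text above prints false.

```python
import os

# Let oversized GPU allocations fall back to CUDA unified (managed) memory.
# Assumption: the flag must be set before paddle is imported, otherwise the
# GPU allocator has already been initialized without it.
os.environ["FLAGS_use_cuda_managed_memory"] = "true"

import paddle  # imported after the flag on purpose

# Quick sanity check: how many GPUs Paddle sees and how much memory GPU 0 reports.
print(paddle.device.cuda.device_count())
print(paddle.device.cuda.get_device_properties(0))
```

Keep in mind that managed memory only hides the problem by paging to host RAM, so training can become very slow; shrinking the activations is usually the better fix.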
Hi! We've received your issue; please be patient while we respond. We will arrange for technicians to answer your question as soon as possible. Please double-check that you have provided a clear problem description, reproduction code, environment & version info, and the error message. You may also check the API docs, FAQ, GitHub issues, and the AI community for an answer. Have a nice day!
I am facing the same issue. I have 16 GB of GPU memory and batch_size=1.
I'm running into the same problem too! What should I do to solve it?
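One thing worth noting: the traceback shows the allocation failing inside the ViT self-attention (q.matmul(k.transpose(...)) in vision_transformer.py). The attention map built there has shape (batch, heads, N, N), where N is the number of patches, so its size grows with the fourth power of the crop side length; that is why even batch_size=1 can exhaust a 12-16 GB card. Here is a rough back-of-the-envelope estimate, assuming SETR's ViT-Large defaults (patch size 16, 16 heads, fp32); swap in your own config values.

```python
def attention_map_gib(crop, patch=16, heads=16, batch=1, bytes_per_elem=4):
    """Size of one q @ k^T attention map, shape (batch, heads, N, N) with N = (crop // patch) ** 2."""
    n = (crop // patch) ** 2
    return batch * heads * n * n * bytes_per_elem / 1024 ** 3

for crop in (512, 768, 1024):
    print(f"crop {crop}x{crop}: ~{attention_map_gib(crop):.2f} GiB per attention map")
```

With a 1024x1024 crop this single tensor is already about 1 GiB (which matches the ~1.0 GB allocation the log fails on), and one is kept per transformer block for the backward pass, so lowering the crop size in the dataset transforms usually frees far more memory than lowering the batch size.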
Since you haven't replied for more than a year, we have closed this issue/PR.
If the problem is not solved or there is a follow-up question, please reopen it at any time and we will continue to follow up.