OOM error when using DeepFaceLab_NVIDIA_build_12_22_2020 to extract faces on a GTX 1650 Ti
Hi there, with the new 12_22_2020 build I get OOM errors while extracting faces. Details:
- GPU: GTX 1650 Ti
- VRAM: 4 GB
- System RAM: 16 GB
- CUDA: cuda_11.1.0_456.43_win10
Error information:

```
Choose one or several GPU idxs (separated by comma).

[CPU] : CPU
[0] : GeForce GTX 1650 Ti

[0] Which GPU indexes to choose? : 0
[wf] Face type ( f/wf/head ?:help ) : f
[0] Max number of faces from image ( ?:help ) : 0
[512] Image size ( 256-2048 ?:help ) : 256
[90] Jpeg quality ( 1-100 ?:help ) : 50
[n] Write debug images to aligned_debug? ( y/n ) : n
Extracting faces...
Running on GeForce GTX 1650 Ti
  0%|          | 0/548 [00:00<?, ?it/s]
Error while processing data: Traceback (most recent call last):
  File "K:\主要文件\DeepFaceLab_NVIDIA_build_12_22_2020\DeepFaceLab_NVIDIA_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1375, in _do_call
    return fn(*args)
  File "K:\主要文件\DeepFaceLab_NVIDIA_build_12_22_2020\DeepFaceLab_NVIDIA_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1360, in _run_fn
    target_list, run_metadata)
  File "K:\主要文件\DeepFaceLab_NVIDIA_build_12_22_2020\DeepFaceLab_NVIDIA_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1453, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[64,642,362] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
	 [[{{node Pad_1}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

	 [[Add_29/_4049]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

  (1) Resource exhausted: OOM when allocating tensor with shape[64,642,362] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
	 [[{{node Pad_1}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

0 successful operations. 0 derived errors ignored.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "K:\主要文件\DeepFaceLab_NVIDIA_build_12_22_2020\DeepFaceLab_NVIDIA_internal\DeepFaceLab\core\joblib\SubprocessorBase.py", line 71, in _subprocess_run
    result = self.process_data (data)
  File "K:\主要文件\DeepFaceLab_NVIDIA_build_12_22_2020\DeepFaceLab_NVIDIA_internal\DeepFaceLab\mainscripts\Extractor.py", line 107, in process_data
    rects_extractor=self.rects_extractor,
  File "K:\主要文件\DeepFaceLab_NVIDIA_build_12_22_2020\DeepFaceLab_NVIDIA_internal\DeepFaceLab\mainscripts\Extractor.py", line 150, in rects_stage
    rects = data.rects = rects_extractor.extract (rotated_image, is_bgr=True)
  File "K:\主要文件\DeepFaceLab_NVIDIA_build_12_22_2020\DeepFaceLab_NVIDIA_internal\DeepFaceLab\facelib\S3FDExtractor.py", line 193, in extract
    olist = self.model.run ([ input_image[None,...] ] )
  File "K:\主要文件\DeepFaceLab_NVIDIA_build_12_22_2020\DeepFaceLab_NVIDIA_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 167, in run
    return nn.tf_sess.run ( self.run_output, feed_dict=feed_dict)
  File "K:\主要文件\DeepFaceLab_NVIDIA_build_12_22_2020\DeepFaceLab_NVIDIA_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 968, in run
    run_metadata_ptr)
  File "K:\主要文件\DeepFaceLab_NVIDIA_build_12_22_2020\DeepFaceLab_NVIDIA_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1191, in _run
    feed_dict_tensor, options, run_metadata)
  File "K:\主要文件\DeepFaceLab_NVIDIA_build_12_22_2020\DeepFaceLab_NVIDIA_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1369, in _do_run
    run_metadata)
  File "K:\主要文件\DeepFaceLab_NVIDIA_build_12_22_2020\DeepFaceLab_NVIDIA_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1394, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[64,642,362] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
	 [[node Pad_1 (defined at K:\主要文件\DeepFaceLab_NVIDIA_build_12_22_2020\DeepFaceLab_NVIDIA_internal\DeepFaceLab\core\leras\layers\Conv2D.py:97) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

	 [[Add_29/_4049]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

  (1) Resource exhausted: OOM when allocating tensor with shape[64,642,362] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
	 [[node Pad_1 (defined at K:\主要文件\DeepFaceLab_NVIDIA_build_12_22_2020\DeepFaceLab_NVIDIA_internal\DeepFaceLab\core\leras\layers\Conv2D.py:97) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

0 successful operations. 0 derived errors ignored.

Errors may have originated from an input operation.
Input Source operations connected to node Pad_1:
 Relu (defined at K:\主要文件\DeepFaceLab_NVIDIA_build_12_22_2020\DeepFaceLab_NVIDIA_internal\DeepFaceLab\facelib\S3FDExtractor.py:93)

Input Source operations connected to node Pad_1:
 Relu (defined at K:\主要文件\DeepFaceLab_NVIDIA_build_12_22_2020\DeepFaceLab_NVIDIA_internal\DeepFaceLab\facelib\S3FDExtractor.py:93)

Original stack trace for 'Pad_1':
  File "
  0%|          | 0/548 [00:13<?, ?it/s]
Images found: 548
Faces detected: 0
Done.
Press any key to continue . . .
```
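For context, the single tensor that failed to allocate is actually quite small; a quick back-of-the-envelope check (plain Python, no TensorFlow needed) shows the OOM reflects cumulative VRAM pressure from the S3FD graph on a 4 GB card rather than one oversized allocation:

```python
import math

# Shape of the tensor TensorFlow failed to allocate, taken from the log above.
shape = (64, 642, 362)
bytes_needed = math.prod(shape) * 4  # float32 = 4 bytes per element

# ~57 MiB: tiny compared to 4 GB, so the card was already nearly full
# of other activations/weights when this allocation was attempted.
print(f"{bytes_needed / 2**20:.1f} MiB")  # -> 56.7 MiB
```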
Have you found a solution to this problem?
I haven't found any solution to this problem!
I updated CUDA and the video card driver and that helped me; I also set the swap file on my drive to 32 gigabytes.
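Besides updating the driver and enlarging the pagefile, the usual software-side mitigation for TensorFlow OOM on small cards is to let the session claim VRAM incrementally instead of all at once. A minimal sketch of that mechanism, assuming the TF 1.x session API shown in the traceback (this is generic TensorFlow configuration, not DeepFaceLab's own code — DFL builds its session internally, so treat it as illustration rather than a drop-in patch; on a bundled TF 2.x the same options live under `tf.compat.v1`):

```python
# Sketch only: generic TF 1.x-style session configuration.
import tensorflow as tf

config = tf.ConfigProto()
# Allocate GPU memory incrementally instead of grabbing all 4 GB up front.
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
```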
Hi zollex69, I currently use the newest CUDA version 11.1 with video card driver 456.43. Could you please tell me which versions worked for you? Thanks a lot.
CUDA version 10.1 worked for me
I have CUDA 12.0 but it isn't working for me.