RTX 3050 Ti: "2 root error(s) found" (out of memory) when starting training
Starting. Press "Enter" to stop training and save model.
Trying to do the first iteration. If an error occurs, reduce the model parameters.
!!!
Windows 10 users IMPORTANT notice. You should set this setting in order to work correctly.
https://i.imgur.com/B7cmDCB.jpg
!!!
You are training the model from scratch. It is strongly recommended to use a pretrained model to speed up the training and improve the quality.
Error: 2 root error(s) found.
  (0) Resource exhausted: failed to allocate memory
	 [[node mul_89 (defined at D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\DeepFaceLab\core\leras\optimizers\AdaBelief.py:64) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

	 [[concat_8/concat/_103]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

  (1) Resource exhausted: failed to allocate memory
	 [[node mul_89 (defined at D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\DeepFaceLab\core\leras\optimizers\AdaBelief.py:64) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

0 successful operations.
0 derived errors ignored.

Errors may have originated from an input operation.
Input Source operations connected to node mul_89:
 src_dst_opt/vs_inter_AB/upscale1/conv1/weight_0/read (defined at D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\DeepFaceLab\core\leras\optimizers\AdaBelief.py:38)

Input Source operations connected to node mul_89:
 src_dst_opt/vs_inter_AB/upscale1/conv1/weight_0/read (defined at D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\DeepFaceLab\core\leras\optimizers\AdaBelief.py:38)

Original stack trace for 'mul_89':
  File "threading.py", line 884, in _bootstrap
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
    debug=debug)
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\DeepFaceLab\models\ModelBase.py", line 193, in __init__
    self.on_initialize()
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 564, in on_initialize
    src_dst_loss_gv_op = self.src_dst_opt.get_update_op (nn.average_gv_list (gpu_G_loss_gvs))
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\DeepFaceLab\core\leras\optimizers\AdaBelief.py", line 64, in get_update_op
    v_t = self.beta_2*vs + (1.0-self.beta_2) * tf.square(g-m_t)
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 1076, in _run_op
    return tensor_oper(a.value(), *args, **kwargs)
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 1400, in r_binary_op_wrapper
    return func(x, y, name=name)
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 1710, in _mul_dispatch
    return multiply(x, y, name=name)
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 530, in multiply
    return gen_math_ops.mul(x, y, name)
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 6245, in mul
    "Mul", x=x, y=y, name=name)
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _create_op_internal
    op_def=op_def)
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 2045, in __init__
    self._traceback = tf_stack.extract_stack_for_node(self._c_op)
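Side note on the failing node: according to the stack trace above, mul_89 is created by the second-moment update of DeepFaceLab's AdaBelief optimizer (AdaBelief.py line 64, the v_t = self.beta_2*vs + (1.0-self.beta_2) * tf.square(g-m_t) line). In standard AdaBelief notation this is, as a sketch (the first-moment update is assumed from the usual formulation, not copied from DFL's file):

    m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t
    v_t = \beta_2 v_{t-1} + (1 - \beta_2)\,(g_t - m_t)^2

Since m and v are kept for every trainable weight, the optimizer state alone needs roughly two extra copies of the weights in VRAM, which is consistent with the allocation failing at this op on a low-VRAM card. The rest of the console output follows below.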
Traceback (most recent call last):
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1375, in _do_call
    return fn(*args)
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1360, in _run_fn
    target_list, run_metadata)
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1453, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: failed to allocate memory
	 [[{{node mul_89}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

	 [[concat_8/concat/_103]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

  (1) Resource exhausted: failed to allocate memory
	 [[{{node mul_89}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

0 successful operations.
0 derived errors ignored.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\DeepFaceLab\mainscripts\Trainer.py", line 129, in trainerThread
    iter, iter_time = model.train_one_iter()
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\DeepFaceLab\models\ModelBase.py", line 474, in train_one_iter
    losses = self.onTrainOneIter()
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 774, in onTrainOneIter
    src_loss, dst_loss = self.src_dst_train (warped_src, target_src, target_srcm, target_srcm_em, warped_dst, target_dst, target_dstm, target_dstm_em)
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 584, in src_dst_train
    self.target_dstm_em:target_dstm_em,
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 968, in run
    run_metadata_ptr)
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1191, in _run
    feed_dict_tensor, options, run_metadata)
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1369, in _do_run
    run_metadata)
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1394, in _do_call
    raise type(e)(node_def, op, message)  # pylint: disable=no-value-for-parameter
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: failed to allocate memory
	 [[node mul_89 (defined at D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\DeepFaceLab\core\leras\optimizers\AdaBelief.py:64) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

	 [[concat_8/concat/_103]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

  (1) Resource exhausted: failed to allocate memory
	 [[node mul_89 (defined at D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\DeepFaceLab\core\leras\optimizers\AdaBelief.py:64) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

0 successful operations.
0 derived errors ignored.

Errors may have originated from an input operation.
Input Source operations connected to node mul_89:
 src_dst_opt/vs_inter_AB/upscale1/conv1/weight_0/read (defined at D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\DeepFaceLab\core\leras\optimizers\AdaBelief.py:38)

Input Source operations connected to node mul_89:
 src_dst_opt/vs_inter_AB/upscale1/conv1/weight_0/read (defined at D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\DeepFaceLab\core\leras\optimizers\AdaBelief.py:38)

Original stack trace for 'mul_89':
  File "threading.py", line 884, in _bootstrap
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
    debug=debug)
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\DeepFaceLab\models\ModelBase.py", line 193, in __init__
    self.on_initialize()
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 564, in on_initialize
    src_dst_loss_gv_op = self.src_dst_opt.get_update_op (nn.average_gv_list (gpu_G_loss_gvs))
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\DeepFaceLab\core\leras\optimizers\AdaBelief.py", line 64, in get_update_op
    v_t = self.beta_2*vs + (1.0-self.beta_2) * tf.square(g-m_t)
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 1076, in _run_op
    return tensor_oper(a.value(), *args, **kwargs)
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 1400, in r_binary_op_wrapper
    return func(x, y, name=name)
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 1710, in _mul_dispatch
    return multiply(x, y, name=name)
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 530, in multiply
    return gen_math_ops.mul(x, y, name)
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 6245, in mul
    "Mul", x=x, y=y, name=name)
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _create_op_internal
    op_def=op_def)
  File "D:\FACELAB\DeepFaceLab_NVIDIA_RTX3000_series_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 2045, in __init__
    self._traceback = tf_stack.extract_stack_for_node(self._c_op)
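For anyone who wants the extra diagnostics the hint keeps mentioning: below is a minimal, self-contained sketch (not DeepFaceLab's code) of how report_tensor_allocations_upon_oom is passed through tf.compat.v1.RunOptions in graph mode. In DFL you would have to thread the same options argument into the session.run() call the trainer makes; the toy graph here is only a placeholder.

import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # the flag is not available in eager mode, as the hint notes

w = tf.Variable(tf.ones([1024, 1024]))  # placeholder for model weights
step = w * 2.0                          # placeholder for a training op (cf. the failing mul_89)

opts = tf.RunOptions(report_tensor_allocations_upon_oom=True)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # If this run OOMs, the ResourceExhaustedError will list the currently
    # allocated tensors instead of just "failed to allocate memory".
    sess.run(step, options=opts)

That only makes the error message more informative; the actual remedy is what the trainer already suggests at startup: lower the memory-hungry SAEHD settings (batch size, resolution, the dims values) when creating the model, and, if your build offers the option, keep the models and optimizer off the GPU.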