BERT-BiLSTM-CRF-NER

Why does training OOM no matter what batch_size I set?

Open ChChwang opened this issue 4 years ago • 3 comments

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[12928,768] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info
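For reference, here is a minimal TF 1.x sketch of the suggestion in the error message: pass report_tensor_allocations_upon_oom=True in a RunOptions proto so that the next OOM also prints the currently allocated tensors. The ops below are stand-ins, not code from this repository:

```python
import tensorflow as tf  # TF 1.x API

# Ask the runtime to report live tensor allocations if an OOM occurs,
# as suggested by the error message above.
run_options = tf.RunOptions(report_tensor_allocations_upon_oom=True)

# Dummy graph standing in for the actual training op in this repo.
x = tf.random_normal([1024, 768])
y = tf.matmul(x, x, transpose_b=True)

with tf.Session() as sess:
    sess.run(y, options=run_options)
```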

ChChwang avatar Jun 03 '20 07:06 ChChwang

How can this problem be solved? And why was the issue closed?

xdbc avatar Jun 04 '20 02:06 xdbc

> How can this problem be solved? And why was the issue closed?

Changing batch_size in the .py file didn't work for me; passing -batch_size 16 when running bert-base-ner-train did.
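For illustration, the full training command might look roughly like this; only -batch_size 16 comes from the comment above, while the other flags and paths are assumptions about a typical bert-base-ner-train invocation and should be adapted to your own setup:

```
bert-base-ner-train \
    -data_dir ./data \
    -output_dir ./output \
    -init_checkpoint ./chinese_L-12_H-768_A-12/bert_model.ckpt \
    -bert_config_file ./chinese_L-12_H-768_A-12/bert_config.json \
    -vocab_file ./chinese_L-12_H-768_A-12/vocab.txt \
    -batch_size 16
```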

ChChwang avatar Jun 04 '20 02:06 ChChwang

> How can this problem be solved? And why was the issue closed?

> Changing batch_size in the .py file didn't work for me; passing -batch_size 16 when running bert-base-ner-train did.

How exactly do I change it for bert-base-ner-train? I'm using a conda virtual environment with TF 1.12. When I enter the command I get an error; I followed the blog author's steps exactly, but I still can't run bert-base-ner-train.

wqx9826 avatar Jul 30 '21 02:07 wqx9826