Bert-Pytorch-Chinese-TextClassification
When I run the shell script, I get a CUDA error. It runs correctly if I use the CPU.
python3 run_classifier_word.py --task_name NEWS --do_train --do_eval --data_dir $GLUE_DIR/ --vocab_file $BERT_BASE_DIR/vocab.txt --bert_config_file $BERT_BASE_DIR/bert_config.json --init_checkpoint $BERT_BASE_DIR/pytorch_model.bin --max_seq_length 256 --train_batch_size 24 --learning_rate 2e-5 --num_train_epochs 50.0 --output_dir ./newsAll_output/ --local_rank 3

04/16/2019 16:33:35 - INFO - __main__ - device cuda:3 n_gpu 1 distributed training True
04/16/2019 16:33:35 - INFO - __main__ - LOOKING AT /home/gpu0/Litao/Bert/Bert-Pytorch-Chinese-TextClassification/Corpus/train.tsv
label_list.size:10
Traceback (most recent call last):
File "run_classifier_word.py", line 704, in
@xieyufei1993 I guess this problem happens because my GPU is too small to run the shell script. Can you tell me how much memory your GPU has?
@WavesLi I guess you only have one GPU in your PC, so the parameter --local_rank 3 is not right for you. Setting local_rank to its default value of -1 should work.
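For a single-GPU machine, a minimal sketch of the corrected invocation (same paths and hyperparameters as the original command, just with --local_rank dropped so it falls back to its default of -1) would be:

python3 run_classifier_word.py \
  --task_name NEWS \
  --do_train \
  --do_eval \
  --data_dir $GLUE_DIR/ \
  --vocab_file $BERT_BASE_DIR/vocab.txt \
  --bert_config_file $BERT_BASE_DIR/bert_config.json \
  --init_checkpoint $BERT_BASE_DIR/pytorch_model.bin \
  --max_seq_length 256 \
  --train_batch_size 24 \
  --learning_rate 2e-5 \
  --num_train_epochs 50.0 \
  --output_dir ./newsAll_output/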
My GPU is a 2080 Ti with 11 GB, but it still runs out of memory...
You need a smaller batch.
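A batch of 24 sequences at max_seq_length 256 is quite heavy for BERT-base on 11 GB, so shrinking those two flags is the usual workaround. The values below are only illustrative starting points (not from the repo); it is the same command as above with a shorter sequence length and a smaller batch:

python3 run_classifier_word.py \
  --task_name NEWS \
  --do_train \
  --do_eval \
  --data_dir $GLUE_DIR/ \
  --vocab_file $BERT_BASE_DIR/vocab.txt \
  --bert_config_file $BERT_BASE_DIR/bert_config.json \
  --init_checkpoint $BERT_BASE_DIR/pytorch_model.bin \
  --max_seq_length 128 \
  --train_batch_size 8 \
  --learning_rate 2e-5 \
  --num_train_epochs 50.0 \
  --output_dir ./newsAll_output/

If the accuracy drops with the smaller batch, you can trade sequence length against batch size until training fits in memory.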