BERT-E2E-ABSA

Inference Time

Open davidlenz opened this issue 4 years ago • 2 comments

Hi,

When performing inference on around 500k sentences, throughput starts out very high: the first 50k sentences finish in about 30 minutes on a Titan V. Afterwards, however, speed drops significantly: the next 40k sentences take around 10 hours, with an estimate of over 50 hours for the remaining sentences. I don't understand why this happens.
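My only guess so far (I have not verified this against work.py, so this is purely an assumption) is some per-batch bookkeeping whose cost grows with the number of results already collected. A minimal sketch of that pattern, with a hypothetical `run` helper, shows how it produces exactly this gradual slowdown:

```python
import time

def run(n_batches, accumulate):
    """Time a loop that collects one result per batch."""
    results = []
    t0 = time.perf_counter()
    for i in range(n_batches):
        batch_out = [i]  # stand-in for a batch of predictions
        if accumulate == "append":
            results.append(batch_out)        # O(1) per step
        else:
            results = results + [batch_out]  # rebuilds the whole list: O(n) per step
    return time.perf_counter() - t0

fast = run(20000, "append")
slow = run(20000, "concat")
print(slow > fast)  # → True
```

With the list-rebuild variant, each iteration copies everything collected so far, so the per-iteration cost keeps rising even though each batch is the same size.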

davidlenz avatar Jun 02 '20 09:06 davidlenz

Sorry for the late reply.

I am also curious about this problem. Can you show me your log, or post the code you use to perform inference, so that I can check whether I run into the same problem?

lixin4ever avatar Jun 04 '20 14:06 lixin4ever

Thanks!

Example input (a file called test.txt in the ./data/newspaper/ folder):

Amy Klobuchar's branded ice scraper.####Amy=O Klobuchar's=O branded=O ice=O scraper.=O
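Since I am only running inference, the gold tags are placeholders and every token is tagged O. As far as I understand the expected layout (raw sentence, then `####`, then space-separated `token=tag` pairs), a line like the one above can be built with a small helper (`to_absa_line` is just my own name for it):

```python
def to_absa_line(tokens, tags):
    """Format one sentence in the test-file layout:
    raw sentence, '####', then space-separated token=tag pairs."""
    assert len(tokens) == len(tags)
    sentence = " ".join(tokens)
    tagged = " ".join(f"{tok}={tag}" for tok, tag in zip(tokens, tags))
    return f"{sentence}####{tagged}"

line = to_absa_line(["Amy", "Klobuchar's", "branded", "ice", "scraper."],
                    ["O"] * 5)
print(line)
# → Amy Klobuchar's branded ice scraper.####Amy=O Klobuchar's=O branded=O ice=O scraper.=O
```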

Code used for inference (I'm on Windows): work.bat

set TASK_NAME="newspaper"
set ABSA_HOME="./bert_v3_cased"
set CUDA_VISIBLE_DEVICES=0
python work.py ^
	--absa_home %ABSA_HOME% ^
	--ckpt %ABSA_HOME%/checkpoint-1700 ^
	--model_type bert ^
	--data_dir ./data/%TASK_NAME% ^
	--task_name %TASK_NAME% ^
	--model_name_or_path bert-base-cased ^
	--cache_dir ./cache_ ^
	--max_seq_length 128 ^
	--tagging_schema BIEOS > results_WPB.txt

Output in cmd (I let it run for a while to capture a second tqdm snapshot; it/s gradually declines over the run):

(transformers) H:\...\BERT-E2E-ABSA>work.bat
(transformers) H:\...\BERT-E2E-ABSA>set TASK_NAME="newspaper"
(transformers) H:\...\BERT-E2E-ABSA>set ABSA_HOME="./bert-linear-rest_total-finetune"
(transformers) H:\...\BERT-E2E-ABSA>set CUDA_VISIBLE_DEVICES=0
(transformers) H:\_doc\Master-Thesen\...\BERT-E2E-ABSA>python work.py         --absa_home "./bert-linear-rest_total-finetune"      --ckpt "./bert-linear-rest_total-finetune"/checkpoint-900       --model_type bert       --data_dir ./data/"newspaper"        --task_name "newspaper"         --model_name_or_path bert-base-cased    --cache_dir ./cache_         --max_seq_length 128    --tagging_schema BIEOS  1>results_WPB.txt
Evaluating:   1%|▋                                                          | 6086/479626 [03:40<6:06:43, 21.52it/s]
Evaluating:  19%|██████████████████▎                                         | 92720/479626 [3:57:39<30:39:12,  3.51it/s]
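The two snapshots above already quantify the decline. Converting each one to an average throughput (items completed over elapsed time):

```python
def avg_rate(items, h, m, s):
    """Average it/s from a tqdm snapshot: items done over elapsed seconds."""
    return items / (h * 3600 + m * 60 + s)

early = avg_rate(6086, 0, 3, 40)    # first snapshot: 6086 items in 3:40
late = avg_rate(92720, 3, 57, 39)   # second snapshot: 92720 items in 3:57:39
print(round(early, 1), round(late, 1))  # → 27.7 6.5
```

So the average rate over the whole run drops from roughly 28 it/s to about 6.5 it/s, consistent with the instantaneous rates tqdm reports (21.52 down to 3.51).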

Output in results_WPB.txt:

Load checkpoint ./bert_v3_cased/checkpoint-1700/pytorch_model.bin...
cached_features_file: ./data/newspaper\cached_test_bert-base-cased_128_newspaper
test class count: [0. 0. 0.]
***** Running prediction *****
b'Input: Amy Klobuchar\'s branded ice scraper., output: ["Klobuchar"]: NEU'
...

davidlenz avatar Jun 04 '20 21:06 davidlenz