
Why is inference so slow?

Open TingFree opened this issue 4 years ago • 2 comments

When I run your code in training mode, it takes about 70 minutes per epoch, but in inference mode it took 30 minutes to finish just 1 of 5976 examples. I have 5976 passages to predict. What can I do to speed it up?

My inference command: python predict.py --include-package numnet --archive_file ./out/model.tar.gz --input_file ./data/temp.for_infer.json --output_file ./predictions.json

result: 0%| | 1/5976 [19:01<1895:20:52, 1141.97s/it]

TingFree avatar Mar 16 '20 00:03 TingFree

I'm running into the same problem.

WSTC2007 avatar Jun 07 '20 11:06 WSTC2007

I recommend checking your CPU usage. Your archived model is probably running inference on the CPU rather than the GPU.

Check your predict.py and simply change archive = load_archive(args.archive_file) to archive = load_archive(args.archive_file, cuda_device=torch.cuda.current_device())
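A slightly more defensive version of this change falls back to CPU when no GPU is present (in AllenNLP, cuda_device=-1 means CPU). A minimal sketch, assuming torch and AllenNLP's load_archive are available as in this repo's environment:

```python
import torch

# AllenNLP convention: -1 selects the CPU, a nonnegative index selects that GPU.
cuda_device = torch.cuda.current_device() if torch.cuda.is_available() else -1

# In predict.py, replace
#     archive = load_archive(args.archive_file)
# with
#     archive = load_archive(args.archive_file, cuda_device=cuda_device)
```

This way the same predict.py works on both GPU and CPU-only machines instead of erroring out when CUDA is unavailable.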

J-Seo avatar Nov 29 '20 19:11 J-Seo