
Turbo Inference Slower than FastAI

Open: Charul opened this issue on Aug 13, 2020 • 1 comment

I am trying to use TurboTransformers for inference on a trained BERT model (fastai with HuggingFace Transformers). I followed the steps in the section 'How to customized your post-processing layers after BERT encoder' and customized bert_for_sequence_classification_example.py.
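Roughly, my customization looks like this (a minimal sketch, not verified against this exact setup: `from_torch` is the conversion call from the project README, while the checkpoint path, the head wiring, and the encoder's return layout are assumptions on my side):

```python
import torch
import transformers
import turbo_transformers

# Load the fine-tuned HuggingFace classifier (path is a placeholder).
hf_model = transformers.BertForSequenceClassification.from_pretrained(
    "path/to/finetuned-bert")
hf_model.eval()

# Replace only the encoder with the Turbo version, keeping the trained head.
turbo_encoder = turbo_transformers.BertModel.from_torch(hf_model.bert)

def classify(input_ids):
    # Assumption: the Turbo encoder's outputs mirror HuggingFace's
    # (sequence_output, pooled_output); adjust to your turbo version.
    sequence_output, pooled_output = turbo_encoder(input_ids)
    # Reuse the original dropout + linear classification head.
    logits = hf_model.classifier(hf_model.dropout(pooled_output))
    return logits
```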

It appears that the inference time for Turbo is greater than for FastAI!

Here is a screenshot of the inference time for a simple sentiment prediction task on the statement: '@AmericanAir @united contact me, or do something to alleviate this terrible, terrible service. But no, your 22 year old social media guru'

[Screenshot: TurboTransformers inference time, 2020-08-12 11:22:50]

For comparison, FastAI:

[Screenshot: FastAI inference time, 2020-08-13 09:22:56]

Has anyone experienced something similar? I might be missing something that causes this result. Or would it only make sense to compare timings on a larger test set?

Charul avatar Aug 13 '20 07:08 Charul

Let me confirm that you are using CPU for inference and that your turbo version is 0.4.1. Generally, the first inference after the runtime launches is very slow; you need to warm up the engine with one initial dummy inference.
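For example, a warm-up plus averaged timing loop could look like this (a minimal sketch; `classify` stands for your inference callable, and the dummy token IDs and shapes are placeholders):

```python
import time
import torch

# Dummy batch shaped like a real request (vocab size and length are placeholders).
dummy_ids = torch.randint(low=0, high=30522, size=(1, 40), dtype=torch.long)

# Warm-up: the first call after the runtime launches pays one-time costs
# (memory allocation, kernel setup), so it must be excluded from timing.
classify(dummy_ids)

# Average over many iterations instead of timing a single sentence once.
n_iters = 100
start = time.time()
for _ in range(n_iters):
    classify(dummy_ids)
elapsed = time.time() - start
print(f"mean latency: {elapsed / n_iters * 1000:.2f} ms")
```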

feifeibear avatar Aug 13 '20 07:08 feifeibear