PyABSA
Inference Speed
Hello! I'm using PyABSA in an application where I have to do aspect term extraction and polarity classification for about 3,000 texts every 15 minutes. I'm currently using an NVIDIA L4 GPU, but it still takes about 30 minutes to process all the texts. Is there any way to speed up the inference process?
Maybe you can use a smaller max modeling length (e.g., 80) and a larger batch size (e.g., 64 or 128). You can also try fp16 precision using torch.cuda.amp.autocast().
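A minimal sketch of the batching and mixed-precision advice above, using a plain `torch.nn.Module` as a stand-in for the PyABSA extractor (the model, feature size, and `predict_batched` helper are illustrative assumptions, not PyABSA's actual API). Autocast is only enabled when CUDA is available, so the sketch also runs on CPU:

```python
import torch
from torch import nn

# Hypothetical stand-in for the extractor model; any torch.nn.Module works.
model = nn.Linear(128, 3).eval()

def predict_batched(inputs: torch.Tensor, batch_size: int = 64) -> torch.Tensor:
    """Run inference over large batches under fp16 autocast.

    Mixed precision is enabled only when a CUDA device is present,
    so on CPU this falls back to ordinary fp32.
    """
    use_amp = torch.cuda.is_available()
    outputs = []
    with torch.inference_mode():  # disable autograd bookkeeping for speed
        for start in range(0, inputs.size(0), batch_size):
            batch = inputs[start:start + batch_size]
            with torch.cuda.amp.autocast(enabled=use_amp):
                outputs.append(model(batch))
    return torch.cat(outputs)

# 3,000 inputs, each already encoded to a fixed-length feature vector
# (in practice, truncating texts to ~80 tokens shrinks this step too).
features = torch.randn(3000, 128)
preds = predict_batched(features, batch_size=64)
```

Combining a shorter max sequence length, a larger batch size, and autocast typically compounds: shorter sequences reduce per-sample compute, while larger fp16 batches keep the GPU saturated.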