
Repeated inferences with pipeline on Lambda

Open · ishwara-bhat opened this issue 3 years ago · 1 comment

Thanks for your response to the Q&A question in the other issue. With regard to multiple inferences, is there any precaution to take?

I was hoping I could just call the model repeatedly in a loop.

    import json
    from transformers import pipeline

    # Load the pipeline once at module scope so warm Lambda
    # invocations reuse it instead of reloading the model.
    question_answerer = pipeline("question-answering")

    def handler(event, context):
        questionsetList = event['questionlist']
        answerlist = []
        for question in questionsetList:
            answer = question_answerer({'question': question, 'context': event['context']})
            answerlist.append(answer)   # list.append, not .push
        # Return a plain dict; Lambda serializes it to JSON
        # (jsonify is Flask-only and is not available here).
        return {"Result": answerlist}
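For what it's worth, the loop itself is fine in plain Python. Here is a minimal local check of the same handler logic with the Hugging Face pipeline stubbed out (no transformers dependency needed); the stub's answer format is an assumption for illustration only, not the real pipeline output:

```python
# Stub standing in for pipeline("question-answering"); its output
# shape is illustrative, not the actual transformers format.
def fake_question_answerer(inputs):
    return {"answer": "stub answer for: " + inputs["question"], "score": 1.0}

def handler(event, context, qa=fake_question_answerer):
    answerlist = []
    for question in event["questionlist"]:
        # Same per-question call pattern as the Lambda handler above
        answerlist.append(qa({"question": question, "context": event["context"]}))
    return {"Result": answerlist}

event = {"questionlist": ["Why?", "Who?"], "context": "some passage"}
result = handler(event, None)
print(len(result["Result"]))  # 2
```

This runs the pipeline call once per question with no cleanup between calls, which suggests the crash below is not caused by looping as such.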

I got the following error on a Lambda test event:

    START RequestId: b06fd2cb-54df-4807-91c8-34ea7cfb614f Version: $LATEST
    OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k
    /usr/local/lib/python3.6/dist-packages/joblib/_multiprocessing_helpers.py:45: UserWarning: [Errno 38] Function not implemented. joblib will operate in serial mode
      warnings.warn('%s. joblib will operate in serial mode' % (e,))
    questions before splitting by ? mark
    1. Why are you troubled?~ 2.Who is the person to blame? ~3. How long are you frustrated about this?
    Traceback (most recent call last):
      File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
        "main", mod_spec)
      File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/function/awslambdaric/main.py", line 20, in <module>
        main(sys.argv)
      File "/function/awslambdaric/main.py", line 16, in main
        bootstrap.run(app_root, handler, lambda_runtime_api_addr)
      File "/function/awslambdaric/bootstrap.py", line 415, in run
        log_sink,
      File "/function/awslambdaric/bootstrap.py", line 171, in handle_event_request
        log_error(error_result, log_sink)
      File "/function/awslambdaric/bootstrap.py", line 122, in log_error
        log_sink.log_error(error_message_lines)
      File "/function/awslambdaric/bootstrap.py", line 306, in log_error
        sys.stdout.write(error_message)
      File "/function/awslambdaric/bootstrap.py", line 283, in write
        self.stream.write(msg)
    UnicodeEncodeError: 'ascii' codec can't encode characters in position 79-80: ordinal not in range(128)
    END RequestId: b06fd2cb-54df-4807-91c8-34ea7cfb614f
    REPORT RequestId: b06fd2cb-54df-4807-91c8-34ea7cfb614f Duration: 22056.43 ms Billed Duration: 22057 ms Memory Size: 8096 MB Max Memory Used: 962 MB
    RequestId: b06fd2cb-54df-4807-91c8-34ea7cfb614f Error: Runtime exited with error: exit status 1
    Runtime.ExitError
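Note the final exception: the crash happens while the runtime writes an error message to stdout, not inside the model loop. The sketch below reproduces that failure mode locally; the exact offending characters in the Lambda log are unknown, so the curly apostrophe `\u2019` is just an illustrative stand-in, and the `PYTHONIOENCODING` suggestion is a common workaround rather than something verified against this repo:

```python
import io

# A stream configured for ASCII, like the runtime's stdout in the log
stream = io.TextIOWrapper(io.BytesIO(), encoding="ascii")

try:
    # Writing text containing a non-ASCII character fails, just as
    # sys.stdout.write did in the traceback above
    stream.write("Why are you troubled\u2019?")
    raised = False
except UnicodeEncodeError:
    raised = True
print("reproduced UnicodeEncodeError:", raised)

# One workaround is to force UTF-8 output in the container (e.g. the
# Lambda environment variable PYTHONIOENCODING=utf8); another is to
# replace non-ASCII characters before logging:
safe = "Why are you troubled\u2019?".encode("ascii", errors="replace").decode("ascii")
stream.write(safe)  # succeeds, with the non-ASCII character replaced by '?'
```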

It appears that I cannot call the model in a loop. In other implementations that did not use pipeline, I had called the model in a loop without problems.

Please suggest whether any specific precaution, such as cleanup, is required before calling the pipeline for the second question.

Thanks in advance.

ishwara-bhat avatar Dec 13 '21 18:12 ishwara-bhat

I am having the same issue. It seems the problem is due to incorrect input formatting. I haven't resolved it yet; as soon as I do, I'll update it here.

IDL281 avatar Jul 27 '22 15:07 IDL281