
Importing onnxruntime on AWS Lambdas with ARM64 processor causes crash

Open glefundes opened this issue 3 years ago • 40 comments

Describe the bug

I'm currently migrating a service deployed as a serverless function on AWS Lambda to the new ARM64 Graviton2 processor. Importing onnxruntime throws a cpuinfo error and crashes the process with the following messages:

Error in cpuinfo: failed to parse the list of possible processors in /sys/devices/system/cpu/possible
Error in cpuinfo: failed to parse the list of present processors in /sys/devices/system/cpu/present
Error in cpuinfo: failed to parse both lists of possible and present processors
terminate called after throwing an instance of 'onnxruntime::OnnxRuntimeException'
what():  /onnxruntime_src/onnxruntime/core/common/cpuid_info.cc:62 onnxruntime::CPUIDInfo::CPUIDInfo() Failed to initialize CPU info.

The files /sys/devices/system/cpu/possible and /sys/devices/system/cpu/present don't exist, and apparently this is what causes the crash. Is this expected behaviour? I'm not sure how to proceed. Is onnxruntime currently not supported on Graviton2 processors? The contents of /proc/cpuinfo are as follows:


processor	: 0
BogoMIPS	: 243.75
Features	: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp ssbs
CPU implementer	: 0x41
CPU architecture: 8
CPU variant	: 0x3
CPU part	: 0xd0c
CPU revision	: 1
processor	: 1
BogoMIPS	: 243.75
Features	: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp ssbs
CPU implementer	: 0x41
CPU architecture: 8
CPU variant	: 0x3
CPU part	: 0xd0c
CPU revision	: 1
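
A minimal diagnostic sketch (assuming it is run inside the Lambda runtime; not part of the original report) to confirm whether the files cpuinfo tries to read are actually present:

import os

# Files pytorch/cpuinfo parses to enumerate CPUs, plus /proc/cpuinfo itself.
for path in ("/sys/devices/system/cpu/possible",
             "/sys/devices/system/cpu/present",
             "/proc/cpuinfo"):
    print(path, "->", "exists" if os.path.exists(path) else "missing")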

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux (AWS Lambda python runtime)
  • ONNX Runtime installed from (source or binary): binary (with pip)
  • ONNX Runtime version: 1.10.0
  • Python version: 3.8.5

glefundes avatar Dec 14 '21 18:12 glefundes

I'm also experiencing this issue with a similar setup (see "System information" below). The error message is below as well (the same as the OP). I can add more details if needed/helpful.

Error in cpuinfo: failed to parse the list of possible processors in /sys/devices/system/cpu/possible
Error in cpuinfo: failed to parse the list of present processors in /sys/devices/system/cpu/present
terminate called after throwing an instance of 'onnxruntime::OnnxRuntimeException'
what(): /onnxruntime_src/onnxruntime/core/common/cpuid_info.cc:62 onnxruntime::CPUIDInfo::CPUIDInfo() Failed to initialize CPU info.

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux (AWS Lambda Python arm64 Docker container)
  • ONNX Runtime installed from (source or binary): binary (with pip)
  • ONNX Runtime version: 1.10.0
  • Python version: 3.9

jcreinhold avatar Jan 04 '22 19:01 jcreinhold

@chenfucn is this a known issue?

Should we handle cpuinfo failing more gracefully? If it's not critical to have the CPU info, maybe logging and ignoring the error is an option.

skottmckay avatar Jan 05 '22 08:01 skottmckay

Thanks for the info. This is a surprise. Here we are actually leveraging pytorch cpuinfo; this library is used in both PyTorch and TensorFlow. Do you know of the pytorch cpuinfo library facing similar issues elsewhere?

Currently we are using the cpuinfo library to detect hybrid cores and SDOT/UDOT instruction support. Ignoring a cpuinfo failure means we lose these capabilities, which will cause performance degradation. Especially with the DOT instructions: matrix multiplication can be several times slower if we don't use DOT instructions and instead fall back to plain NEON kernels.

I can implement very crude DOT detection logic for the case where cpuinfo fails. However, the best solution would be for the cpuinfo library authors to fix this problem.
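
As a rough illustration only (not the actual ORT implementation, which lives in C++), such a fallback check could parse the Features line of /proc/cpuinfo for the asimddp flag:

def has_dot_product_support(cpuinfo_path="/proc/cpuinfo"):
    # Crude fallback: the ARMv8.2 dot product extension (SDOT/UDOT) is
    # advertised as "asimddp" in the Features line of /proc/cpuinfo.
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("Features"):
                    return "asimddp" in line.split(":", 1)[1].split()
    except OSError:
        pass  # /proc/cpuinfo unreadable; conservatively assume no DOT support
    return False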

chenfucn avatar Jan 05 '22 16:01 chenfucn

@glefundes and @jcreinhold could you also file this issue in the pytorch cpuinfo repo while I prepare a PR to work around it?

chenfucn avatar Jan 05 '22 17:01 chenfucn

Thanks for the fast response. I filed the issue on cpuinfo here: https://github.com/pytorch/cpuinfo/issues/76

Let me know if you need me to test anything.

jcreinhold avatar Jan 05 '22 17:01 jcreinhold

https://github.com/microsoft/onnxruntime/pull/10199

chenfucn avatar Jan 05 '22 17:01 chenfucn

Could I know if this issue has been resolved? I'm currently having the same problem.

workdd avatar Jan 14 '22 08:01 workdd

The above PR has already been merged; can you try it out?

chenfucn avatar Jan 19 '22 19:01 chenfucn

Thanks for the response. Has it been published to pip as well? I installed the onnxruntime package using just the command below, and I'm still facing the same issue.

pip install onnxruntime

workdd avatar Feb 07 '22 06:02 workdd

It would be in the nightly package until the next official release. https://test.pypi.org/project/ort-nightly/
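
A quick sanity check (just a sketch) to confirm which build a Lambda is actually importing at runtime, since the deployed package can easily differ from the local environment:

import onnxruntime

print(onnxruntime.__version__)   # a nightly build carries a .dev suffix, e.g. 1.11.0.dev20220201001
print(onnxruntime.get_device())  # expected to report "CPU" on Lambda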

skottmckay avatar Feb 07 '22 07:02 skottmckay

Thank you for the fast response. Then I'll wait for the next official release.

workdd avatar Feb 08 '22 03:02 workdd

Thanks for the quick response to this issue. I'm happy to test out the implementation when there is a release candidate, but I've already deployed the model on x86 hardware and want as little downtime as possible.

Will PR #10199 fix what @chenfucn brought up in the below comment?

Currently we are using cpuinfo lib to detect hybrid cores and SDOT UDOT instruction support. Ignoring cpuinfo failure means we lose these functionalities and will cause performance degradation. Especially with DOT instructions, the matrix multiplication can be multiple times slower if we don't use DOT instructions and fall back to neon cores.

Or does https://github.com/pytorch/cpuinfo/issues/76 need to be resolved to fix that problem?

jcreinhold avatar Feb 11 '22 14:02 jcreinhold

You need to include both #10199 and #10334.

yufenglee avatar Feb 11 '22 16:02 yufenglee

This issue has been automatically marked as stale due to inactivity and will be closed in 7 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

stale[bot] avatar Apr 16 '22 05:04 stale[bot]

Just got the chance to test release 1.11.1 on Graviton2 instances on AWS and can confirm that while the cpuinfo error messages still show, execution is no longer halted and the Lambda call finishes as expected. Thank you all :)

glefundes avatar May 25 '22 13:05 glefundes

Good afternoon, we are suddenly getting this error with 1.14 on Graviton2. I'm not sure if there has been a regression?

jcampbell05 avatar Mar 13 '23 17:03 jcampbell05

@jcampbell05 I can't see any change to the code that now prints a warning instead of failing with an exception. What error exactly are you seeing?

https://github.com/microsoft/onnxruntime/blob/538d64891ac8e43c1faf7846635c3a1bf7b6b6c5/onnxruntime/core/common/cpuid_info.cc#L136

skottmckay avatar Mar 14 '23 04:03 skottmckay

So I'm seeing the following from Python; rolling back to 1.11.1 fixes it for us.

Error in cpuinfo: failed to parse the list of possible processors in /sys/devices/system/cpu/possible
Error in cpuinfo: failed to parse the list of present processors in /sys/devices/system/cpu/present
Error in cpuinfo: failed to parse both lists of possible and present processors
terminate called after throwing an instance of 'onnxruntime::OnnxRuntimeException'
what():  /onnxruntime_src/include/onnxruntime/core/common/logging/logging.h:294 static const onnxruntime::logging::Logger& onnxruntime::logging::LoggingManager::DefaultLogger() Attempt to use DefaultLogger but none has been registered.

jcampbell05 avatar Mar 14 '23 09:03 jcampbell05

There's no exception thrown in the latest code, so the failure is most likely coming from somewhere else. The problem is that there's no default logger, so the real error isn't clear. The Environment needs to be created prior to calling into other ORT code, as that provides the default logger. However, it's strange that that hasn't happened if you're calling from Python, as we typically create the Environment internally so that it's available when needed.

Can you share the Python code using ORT up to where it breaks?

skottmckay avatar Mar 15 '23 04:03 skottmckay

@skottmckay it took a while to track down, but it appears to be simply this, since none of our other code has executed yet:

import onnxruntime

jcampbell05 avatar Apr 20 '23 11:04 jcampbell05

TL;DR: I don't see any solution in this issue for using ONNX Runtime in AWS Lambda. The Docker image builds and runs fine locally on my M1 Mac, but in the cloud this happens:

Error in cpuinfo: failed to parse both lists of possible and present processors
terminate called after throwing an instance of 'onnxruntime::OnnxRuntimeException'
what():  /onnxruntime_src/include/onnxruntime/core/common/logging/logging.h:294 static const onnxruntime::logging::Logger& onnxruntime::logging::LoggingManager::DefaultLogger() Attempt to use DefaultLogger but none has been registered.

Pls Help.. Really need to run inference in AWS Lambda 🥲

DoctorSlimm avatar Apr 27 '23 05:04 DoctorSlimm

@DoctorSlimm, @jcampbell05

Could you try the following package (built with https://github.com/microsoft/onnxruntime/pull/15661) to see whether the issue is resolved? You can rename the .zip file to a .whl file and install it like the following:

mv ort_nightly-1.15.0.dev20230427003-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.zip ort_nightly-1.15.0.dev20230427003-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl

pip uninstall onnxruntime

pip install ort_nightly-1.15.0.dev20230427003-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl

ort_nightly-1.15.0.dev20230427003-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.zip

tianleiwu avatar Apr 27 '23 18:04 tianleiwu

@tianleiwu

Still the same error when I run it in the cloud; it works totally fine when I run the function locally but fails when I invoke it in AWS.

NOTE: I am building it locally on an M1 Mac and then pushing it to the ECR registry.

Local build command, run in the same directory as the other files:

docker build --platform linux/arm64 -t FUNCTION-NAME .

Here is my Dockerfile:

FROM public.ecr.aws/lambda/python:3.9-arm64 AS model


# Install the runtime interface client
RUN python3.9 -m pip install --target . awslambdaric
RUN python3.9 -m pip install python-dotenv onnxruntime "transformers[torch]"

# https://github.com/microsoft/onnxruntime/issues/10038
ADD ort_nightly-1.15.0.dev20230427003-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.zip ort_nightly-1.15.0.dev20230427003-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.zip
RUN mv ort_nightly-1.15.0.dev20230427003-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.zip ort_nightly-1.15.0.dev20230427003-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
RUN python3.9 -m pip uninstall -y onnxruntime
RUN python3.9 -m pip install ort_nightly-1.15.0.dev20230427003-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl

# Set Production Environment
ENV ENV=prod

# Copy files
COPY app.py ./

# Copy onnx directory
COPY onnx onnx


# Set Up Entrypoints
COPY ./entry_script.sh /entry_script.sh
ADD aws-lambda-rie-arm64 /usr/local/bin/aws-lambda-rie-arm64
ENTRYPOINT ["/entry_script.sh"]
# NOTE: the handler string must match the function defined in app.py
# (the function below is named lambda_handler, so app.lambda_handler).
CMD [ "app.lambda_handler" ]

app.py

import json
import traceback
from time import time
import numpy as np
from dotenv import load_dotenv
from onnxruntime import InferenceSession
from transformers import AutoTokenizer

load_dotenv()

# Worth Investigating
# https://blog.ml6.eu/the-art-of-pooling-embeddings-c56575114cf8
# https://github.com/UKPLab/sentence-transformers/issues/46#issuecomment-1152816277

tokenizer = AutoTokenizer.from_pretrained('onnx')
session = InferenceSession("onnx/model.onnx")


def lambda_handler(event, context):
    try:
        if 'ping' in event:
            print('Pinging')
            t0 = time()
            return {
                'total_time': time() - t0,
            }
        if 'modelInputs' in event:
            print('Inference\n')
            model_inputs = event['modelInputs']
            text = model_inputs['text']
            encoded_inputs = tokenizer(text, return_tensors="np")
            model_outputs = session.run(
                None, input_feed=dict(encoded_inputs)
            )  # (1, 1, 11, 768)

            token_embeddings = model_outputs[0]  # (1, 11, 768)
            special_token_ids = [
                tokenizer.cls_token_id,
                tokenizer.unk_token_id,
                tokenizer.sep_token_id,
                tokenizer.pad_token_id,
                tokenizer.mask_token_id,
            ]

            # Mask to exclude special tokens from pooling calculation
            mask = np.ones(token_embeddings.shape[:-1], dtype=bool)

            # Max Pooling Sentence Embedding
            for special_token_id in special_token_ids:
                mask &= encoded_inputs["input_ids"] != special_token_id  # compare against the token id array, not the BatchEncoding
            max_pooled_embeddings = np.max(token_embeddings * mask[..., np.newaxis], axis=1)
            max_pooled_embeddings = np.mean(max_pooled_embeddings, axis=0)

            # Mean Pooling Sentence Embedding
            for special_token_id in special_token_ids:
                mask &= encoded_inputs["input_ids"] != special_token_id  # Exclude special tokens from mask
            mean_pooled_embeddings = np.sum(token_embeddings * mask[..., np.newaxis], axis=1)  # Apply mask and take sum over sequence dimension
            mean_pooled_embeddings = np.mean(mean_pooled_embeddings, axis=0)  # Take mean over batch dimension

            return {
                'statusCode': 200,
                'body': json.dumps(
                    {
                        'modelOutputs': {
                            # 'raw': model_outputs.tolist(),
                            'token_embeddings': token_embeddings.tolist(),
                            'max_pooled_embeddings': max_pooled_embeddings.tolist(),
                            'mean_pooled_embeddings': mean_pooled_embeddings.tolist(),
                        }
                    }
                )
            }

    except Exception as e:
        return {
            'error': str(traceback.format_exc()) + str(e)
        }

Response when run in AWS

{
  "errorType": "Runtime.ExitError",
  "errorMessage": "RequestId: dd954162-257e-448e-824e-0b78342f503a Error: Runtime exited with error: signal: aborted"
}

Log output

Error in cpuinfo: failed to parse the list of possible processors in /sys/devices/system/cpu/possible
Error in cpuinfo: failed to parse the list of present processors in /sys/devices/system/cpu/present
Error in cpuinfo: failed to parse both lists of possible and present processors
terminate called after throwing an instance of 'onnxruntime::OnnxRuntimeException'
what():  /onnxruntime_src/include/onnxruntime/core/common/logging/logging.h:294 static const onnxruntime::logging::Logger& onnxruntime::logging::LoggingManager::DefaultLogger() Attempt to use DefaultLogger but none has been registered.
Error in cpuinfo: failed to parse the list of possible processors in /sys/devices/system/cpu/possible
Error in cpuinfo: failed to parse the list of present processors in /sys/devices/system/cpu/present
Error in cpuinfo: failed to parse both lists of possible and present processors
terminate called after throwing an instance of 'onnxruntime::OnnxRuntimeException'
what():  /onnxruntime_src/include/onnxruntime/core/common/logging/logging.h:294 static const onnxruntime::logging::Logger& onnxruntime::logging::LoggingManager::DefaultLogger() Attempt to use DefaultLogger but none has been registered.
START RequestId: dd954162-257e-448e-824e-0b78342f503a Version: $LATEST
RequestId: dd954162-257e-448e-824e-0b78342f503a Error: Runtime exited with error: signal: aborted
Runtime.ExitError
END RequestId: dd954162-257e-448e-824e-0b78342f503a
REPORT RequestId: dd954162-257e-448e-824e-0b78342f503a	Duration: 2625.36 ms	Billed Duration: 2626 ms	Memory Size: 128 MB	Max Memory Used: 39 MB	

DoctorSlimm avatar Apr 28 '23 22:04 DoctorSlimm

@DoctorSlimm is there any update on your solution? I tried out your method of installing from nightly builds and it still leads to the same error:

Error in cpuinfo: failed to parse the list of possible processors in /sys/devices/system/cpu/possible
Error in cpuinfo: failed to parse the list of present processors in /sys/devices/system/cpu/present
Error in cpuinfo: failed to parse both lists of possible and present processors

Note: I'm also using aws lambda with ARM architecture

johnsonchau-bulb avatar Jun 26 '23 11:06 johnsonchau-bulb

@DoctorSlimm is there any update on your solution? I tried out your method of installing from nightly builds and it still leads to the same error:


Error in cpuinfo: failed to parse the list of possible processors in /sys/devices/system/cpu/possible
Error in cpuinfo: failed to parse the list of present processors in /sys/devices/system/cpu/present
Error in cpuinfo: failed to parse both lists of possible and present processors

Note: I'm also using aws lambda with ARM architecture

Hello my dude! Using the x86_64 architecture (also known as AMD64), plus maybe a few other tweaks including increasing the function's memory to at least a few GB, I think solved it!

I'll be getting back into this later this week, so I'll likely have more concrete answers then, but for now I'm pretty sure that using x86_64 and increasing memory gets you 97% of the way there. Good luck!

DoctorSlimm avatar Jun 26 '23 12:06 DoctorSlimm

@DoctorSlimm I see, I was experimenting with the x86 architecture but the docker buildx build took incredibly long. I'm also on an M1 Mac, which I saw you are also on. Will keep trying this x86 method out! Thank you +++++

johnsonchau-bulb avatar Jun 26 '23 13:06 johnsonchau-bulb

@johnsonchau-bulb It's likely that AWS Lambda ARM does not populate CPU info into the "/sys" folder, so essentially onnxruntime is trying to read nonexistent files and directories.

The following test confirms this:

# Script to test the existence of folders 
import os
print(os.listdir('/'))
print(os.listdir('/sys'))
print(os.listdir('/sys/devices'))

Result - "/sys" has no content:

['bin', 'boot', 'dev', 'etc', 'home', 'lib', 'media', 'mnt', 'opt', 'proc', 'root', 'run', 'sbin', 'srv', 'sys', 'tmp', 'usr', 'var']
[]
[ERROR] FileNotFoundError: [Errno 2] No such file or directory: '/sys/devices'

MengLinMaker avatar Sep 13 '23 09:09 MengLinMaker

@MengLinMaker thanks!

As a side note, I would not recommend deploying Hugging Face models in AWS Lambda, as it takes a long time to download models. Furthermore, even when using EFS with Lambda to cache the model, the read/write speeds are not fast enough to load LLMs quickly. Leaving this here to help anyone who wants to build an AI API microservice.

johnsonchau-bulb avatar Sep 13 '23 13:09 johnsonchau-bulb

@chenfucn, referencing your PR #10199: I located the file reader code in pytorch/cpuinfo that may be causing the file read issues for AWS Lambda ARM64.

My AWS Lambda directory probing tests confirm that these files do not exist, so read attempts lead to errors:

  • /sys/devices/system/cpu/possible
  • /sys/devices/system/cpu/present

I also agree that the fix should be made in pytorch/cpuinfo, as this is the cleaner solution. Looking at the code, a failure should return a null pointer.

Actually, it may be this logger in pytorch/cpuinfo that's throwing the exception.

MengLinMaker avatar Sep 13 '23 14:09 MengLinMaker

As a side note, I would not recommend deploying Hugging Face models in AWS Lambda, as it takes a long time to download models. Furthermore, even when using EFS with Lambda to cache the model, the read/write speeds are not fast enough to load LLMs quickly. Leaving this here to help anyone who wants to build an AI API microservice.

@johnsonchau-bulb Thanks, almost dove down that rabbit hole.

Currently trying to shrink a 1.8 GB Docker image to 1.1 GB by replacing PyTorch with onnxruntime. My model is around 150 MB. Lambda cold start times are horrible though, up to 20 seconds, so I'm breaking the model into sections that I can cold start at the same time.

MengLinMaker avatar Sep 13 '23 14:09 MengLinMaker