aws-lambda-web-adapter
Error { kind: SendRequest, source: Some(hyper::Error(IncompleteMessage)) }
Hi, I'm facing some issues using the adapter with a Python Lambda function packaged as a Docker image, built on the FastAPI framework. The Lambda code is implemented mostly with async functions, and when I deploy the Lambda and invoke an async endpoint, the following error is shown:
{ "errorType": "&alloc::boxed::Box<dyn core::error::Error + core::marker::Send + core::marker::Sync>", "errorMessage": "client error (SendRequest)" }
This does not happen for endpoints that use only sync operations. I get this response when testing my Lambda from the AWS console, using the AWS_LWA_PASS_THROUGH_PATH variable for non-HTTP triggers.
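For context, my understanding is that with AWS_LWA_PASS_THROUGH_PATH set (e.g. to /events), the adapter forwards non-HTTP invoke payloads as POST requests to that path, so I handle them with a route along these lines (the /events path here is just my example):

from fastapi import FastAPI, Request

app = FastAPI()

@app.post("/events")  # must match AWS_LWA_PASS_THROUGH_PATH
async def handle_event(request: Request):
    # The adapter forwards the raw invoke payload as the POST body
    event = await request.json()
    return {"received": event}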
With RUST_LOG=debug enabled, all I see in the logs is:
ERROR Lambda runtime invoke{requestId="52dc7925-67d8-4682-9e83-f781e93ae4da" xrayTraceId="Root=1-65fa2420-49e792476009db305da67ab2;Parent=7106713d10b989f9;Sampled=0;Lineage=19427698:0"}: lambda_runtime: Error { kind: SendRequest, source: Some(hyper::Error(IncompleteMessage)) }
I'm running it behind a Lambda Function URL and I get the same logs when calling the URL. When I run this locally using sam local start-api, it works just fine. Any idea on how to troubleshoot this correctly? Thanks in advance.
Could you share minimal code that reproduces this issue? And what is the trigger for the Lambda function?
Thanks for looking into this... I'm using NeMo Guardrails from a Lambda function. The function is invoked via a Function URL. Here is sample code showing how it works:
import json

from fastapi import FastAPI, HTTPException, Request
from nemoguardrails import RailsConfig, LLMRails

app = FastAPI()

YAML_CONFIG = """
models:
  - type: main
    engine: openai
    model: gpt-4

rails:
  input:
    flows:
      - self check input
      - allow input

prompts:
  - task: self_check_input
    content: |
      Your task is to check if the user message below complies with safety policies.
      Policy for the user messages:
      - should not contain word TEST
      User message: "{{ user_input }}"
      Question: Should the user message be blocked (Yes or No)?
      Answer:
"""

COLANG_CONTENT = """
define bot allow
  "ALLOW"

define subflow allow input
  bot allow
  stop

define bot refuse to respond
  "DENY"
"""

@app.get("/")
def get_root():
    return {"message": "FastAPI running in a Lambda function"}

@app.post("/run")
async def execute(request: Request):
    raw_body = await request.body()
    body_str = raw_body.decode("utf-8")
    payload = json.loads(body_str)
    config = RailsConfig.from_content(yaml_content=YAML_CONFIG, colang_content=COLANG_CONTENT)
    rails = LLMRails(config, verbose=True)
    message_content = payload["input"]
    # Generate response from guardrails service
    output = await rails.generate_async(messages=[{
        "role": "user",
        "content": message_content,
    }])
    if output.get("error"):
        print(f"Error: {output.get('error')}")
        raise HTTPException(status_code=500, detail=output)
    return output
The /run endpoint expects a simple JSON object like this:
{"input":"Some test text"}
This is the Dockerfile I'm using to build the image. It is adapted from the NeMo Guardrails docs to use the AWS Lambda Web Adapter:
FROM public.ecr.aws/docker/library/python:3.10-slim
# Copy the Lambda Adapter from the public ECR
COPY --from=public.ecr.aws/awsguru/aws-lambda-adapter:0.8.1 /lambda-adapter /opt/extensions/lambda-adapter
# Install OS dependencies
RUN apt-get update && apt-get upgrade -y && apt-get install -y gcc g++ make cmake
RUN mkdir -p /myapp
COPY requirements.txt /myapp
RUN pip install --upgrade pip && pip install -r /myapp/requirements.txt --no-cache-dir --target /myapp
COPY sample.py /myapp
WORKDIR /myapp
RUN python -c "from fastembed.embedding import TextEmbedding; TextEmbedding('sentence-transformers/all-MiniLM-L6-v2');"
# Tell the Lambda Web Adapter which port the web app listens on
ENV PORT=8000
# Start the FastAPI app with uvicorn
CMD exec python -m uvicorn --port=$PORT sample:app --host=0.0.0.0
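To isolate whether async handling alone is the trigger, I may also test a minimal endpoint with no NeMo Guardrails dependency, something like this:

import asyncio
from fastapi import FastAPI

app = FastAPI()

@app.post("/async-test")
async def async_test():
    # Simulate async work with no external dependencies
    await asyncio.sleep(1)
    return {"status": "ok"}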
Hi, I'm facing the same issue. I have a FastAPI app using the Lambda adapter, triggered via API Gateway. My POST requests work perfectly fine. The GET route, however, throws an internal server error, and the Lambda logs show what @sebasjuancho mentioned:
ERROR Lambda runtime invoke{requestId="c39861b5-955a-447c-91e8-319222d9caad" xrayTraceId="Root=1-663756d6-69e1fdbd321ff6c230681174;Parent=09faa06e7ad0673c;Sampled=0;Lineage=5726a561:0"}: lambda_runtime: Error { kind: SendRequest, source: Some(hyper::Error(IncompleteMessage)) }
Not sure if it makes a difference: my GET route is the only one taking request params...
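For illustration (simplified, not my actual code), the failing route is shaped like this, with query params bound as function arguments:

from typing import Optional
from fastapi import FastAPI

app = FastAPI()

@app.get("/items")
async def get_items(q: Optional[str] = None, limit: int = 10):
    # FastAPI parses ?q=...&limit=... from the query string
    return {"q": q, "limit": limit}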
Did anybody find a workaround or solution for this?
IncompleteMessage errors usually mean your web app dropped the connection. Does the app crash?
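One way to check (a sketch, not specific to the adapter): add a catch-all logging middleware so any unhandled exception is written to CloudWatch before the connection drops:

import logging

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

logger = logging.getLogger("app")
app = FastAPI()

@app.middleware("http")
async def log_unhandled_errors(request: Request, call_next):
    try:
        return await call_next(request)
    except Exception:
        # Log the full traceback so the crash is visible in CloudWatch
        logger.exception("Unhandled error on %s %s", request.method, request.url.path)
        return JSONResponse(status_code=500, content={"detail": "internal server error"})

If the worker process itself dies (for example, out of memory), nothing will be logged here; in that case the max memory used in the Lambda REPORT line is worth checking.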