How to run LLRT binary locally with AWS_LAMBDA_RUNTIME_API
After doing some research, I learned that AWS Lambda makes internal HTTP requests to read and write invocation data.
I would like to know the easiest way to bypass the AWS_LAMBDA_RUNTIME_API environment variable setup.
I am getting the following error:
2024-02-27T19:59:13.696Z n/a ERROR at nextInvocation (@llrt/runtime:3:178)
2024-02-27T19:59:13.696Z n/a ERROR error: failed to get next response Error: client error (Connect)
Hi @ShivamJoker. What are you trying to do? If not running on Lambda this is not needed :)
Hi @richarddavison, I've downloaded the Linux x86 llrt binary from the releases page.
I was hoping I could test my function by just executing ./llrt and setting the handler environment variable.
If I manually call the handler() function the code works, but I would like to have some customization.
The best way would be to import your function handler and simply call it:
```javascript
import { handler } from "./my-handler.js";

async function main() {
  const myEvent = {}; // read a file or hardcode a test event
  const handlerResponse = await handler(myEvent);
  console.log({ handlerResponse });
}

main().catch(console.error);
```
I see, so should I name the file index.mjs and run the code above from ./llrt?
Yes! By default, llrt will look for index.mjs in the same directory, so you can just do ./llrt or ./llrt index.mjs
@richarddavison thanks for this answer.
I am trying to follow these steps on macOS (Apple M1) but get an error:
bash: ./llrt: cannot execute binary file
I'm using the llrt-linux-arm64 release.
@abhinav-sachdeva you're not on Linux, you're on a Mac, so please use the darwin ARM64 build 🙂 https://github.com/awslabs/llrt/releases/download/v0.1.12-beta/llrt-darwin-arm64.zip