graphql-upload
Support AWS Lambda
It's hard to use the actual processRequest for Lambda because it does not give me a request, response model to follow to write a middleware. I've rewritten it to accept an event given in
exports.handler = async (event, context) => {}
https://gist.github.com/samin/976271f97b03bba14017a760570df88a
Would it be possible to add it?
Since AWS will gather the whole request in memory before passing it on to your function, any workaround that tries to simulate a request stream for processRequest, which then buffers it to "disk" all over again via fs-capacitor, would be a performance tragedy.
For proper AWS Lambda support we could publish a function that processes an event into something that can be passed into existing GraphQL handlers, without using fs-capacitor. It would read the GraphQL multipart request form data (Base64?) string provided in the Lambda event, and extract the files, query, and variables, putting them together to look like a regular GraphQL POST request for popular GraphQL handlers (e.g. apollo-server-lambda) to consume.
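As a rough sketch of the first step such a helper would take, the Lambda event's Base64-encoded body can be decoded back into the raw multipart bytes before handing it to a multipart parser. The helper name and the event shape (API Gateway proxy integration) are assumptions here, not part of this library:

```javascript
// Hypothetical helper: turn an API Gateway proxy event into the raw
// multipart body plus normalized headers, ready for a multipart parser.
function eventToRequestParts(event) {
  // API Gateway sets isBase64Encoded when it delivers a binary body.
  const body = event.isBase64Encoded
    ? Buffer.from(event.body, "base64")
    : Buffer.from(event.body, "utf8");

  // Header names are case-insensitive in HTTP; lowercase them for lookups.
  const headers = {};
  for (const [name, value] of Object.entries(event.headers || {})) {
    headers[name.toLowerCase()] = value;
  }

  return { body, headers };
}
```

From there, the boundary in the content-type header can be fed to any multipart parser to pull out the operations, map, and file fields of the GraphQL multipart request.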
I haven't thought a lot about the details, and don't plan to work on it personally anytime soon as I don't have a need for it for any of my work or personal projects. I prefer to process file uploads as the stream in, for a much faster response time:
https://github.com/jaydenseric/graphql-multipart-request-spec/blob/master/sync-vs-async-graphql-multipart-request-middleware.svg
A function for Lambda should actually be a much simpler implementation than the current processRequest implementation for HTTP requests that stream in, because you can tell up front whether the form data is malformed and you know all the files are there; you don't have to handle all the problems of expected files never arriving, disconnects partway through, multiple resolvers consuming the same file streams, etc.
The GraphQL upload scalar promise in a serverless environment could resolve the same file upload details, except instead of createReadStream there would be base64FileContents (name can be polished).
I 100% agree that because the current implementation is so highly optimized for the case of streaming data, it would be a total disaster to try and use it for data that is already in memory.
However, for the sake of maintaining the portability of users' resolvers, it might be best to retain the same API as this library (a promise which resolves metadata and a createReadStream method). The created streams would simply read from the corresponding file's Buffer and have nothing to do with the filesystem or fs-capacitor. While these extra hoops wouldn't strictly be necessary, they would have almost no performance penalty (assuming the resolver actually uses the stream as a stream and doesn't immediately try to concat it back into a new in-memory string/Buffer) and would allow the same resolvers to be usable in both an HTTP server and a Lambda/Google function.
I think this would be fairly trivial to write, although like you I have neither need for this at the moment, nor the time to actually do it.
Hi @samin
I've recently forked this awesome package and added AWS Lambda support. I need someone to test it. Is there anyone out there who'd like to report if this works?
https://www.npmjs.com/package/graphql-upload-minimal/v/1.1.0-0#aws-lambda
TL;DR:
- Replace graphql-upload with graphql-upload-minimal@next
- Test if it works for you.
- Put a comment here saying whether it works (or not).
FYI, after going down a rabbit hole with this, graphql-upload seems to be working nicely with Apollo v3 on Lambda out of the box. We are using the Express middleware method for setting up the server - I guess @vendia/serverless-express, which apollo-server-lambda now uses, is helping put the request object into the correct format.
@vaunus, incidentally I was looking at it today. When you say
We are using the Express middleware method for setting up the server
What do you mean?
@federicobarera sorry I meant we're setting the Apollo handler up like this:
const handler = apolloServer.createHandler({
  expressAppFromMiddleware(middleware) {
    const app = express()
    app.use(graphqlUploadExpress())
    app.use(middleware)
    return app
  }
})
In Apollo v3 the above is all that is required for graphql-upload to work in a Lambda.
@vaunus gem!
@vaunus by any chance did you use it in conjunction with the serverless framework? If so, I've been hitting issues when working offline, as reported by multiple people here: https://github.com/dherault/serverless-offline/issues/464 I am wondering if you hit the same issue (corrupted streams when offline)
@federicobarera no, we are not using the serverless framework or serverless-offline. I actually spent weeks trying to get these working for our complex use case, well before looking at file uploads, and decided to avoid them as they had too many problems and bugs.
I instead went with AWS SAM for the deployment side, and for local dev implemented our own custom server: a basic Express server that maps the Request into the lambda-local package's event parameter. This is all working for file uploads, but it will take some effort to set it up. In this wrapper server I had to add a middleware that checks the content type for multipart form data and, if found, reads the stream, Base64-encodes it, and sets req.body to that string. It then also sets isBase64Encoded=true in the lambda event.
This is effectively replicating what a real Lambda does with a binary body arriving encoded as Base64.
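That middleware might look roughly like this. It's a sketch reconstructed from the description above, not the actual code; the function name is made up, and it uses only the standard Express-style (req, res, next) signature:

```javascript
// Sketch of the described local-dev middleware: if the request is
// multipart form data, buffer the raw body, Base64-encode it, and flag
// it the way a real Lambda event would.
function base64MultipartBody(req, res, next) {
  const contentType = req.headers["content-type"] || "";
  if (!contentType.startsWith("multipart/form-data")) return next();

  const chunks = [];
  req.on("data", (chunk) => chunks.push(chunk));
  req.on("end", () => {
    req.body = Buffer.concat(chunks).toString("base64");
    req.isBase64Encoded = true; // mirrors the field API Gateway sets
    next();
  });
}
```

Mounted before the handler that builds the lambda-local event, this makes the local request body arrive in the same shape (Base64 string plus isBase64Encoded flag) as in a deployed Lambda.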
@vaunus by any chance did you use in conjunction with the serveless framework? If so, I've been hitting issues when working offline, as reported by multiple people here: dherault/serverless-offline#464 I am wondering if you did hit the same issue (corrupted streams when offline)
It seems to be working fine for me with serverless-offline (8.1.0). What errors do you get?
@deniapps The issue, as pointed out by vaunus, is with the body of the request arriving with the wrong format/encoding. There is no exception; however, when saving the stream to disk or anywhere else, the file is corrupted. It seems to work ok with certain file types, but binaries (images and so on) are encoded incorrectly.
I am running serverless-offline 8.0.0. I will give it a go with 8.1. I'll let you know.
You're right. The file is corrupted; I had not noticed. ~~Do you mean it works for serverless (not offline)? I have NOT tried it on production yet because there is another issue.~~ Yes, it works with serverless, as I tested.
Any workaround for this? I was trying to move my NestJS GraphQL project that supports file upload to AWS lambda. While debugging the file upload errors I came across this weird behavior:
On the ec2 instance the file input that I receive in the mutation argument is of format:
{
file: {
filename: 'test_image.png',
mimetype: 'image/png',
encoding: '7bit',
createReadStream: [Function: createReadStream]
}
}
In this case file upload works fine.
But on the lambda function, the same file input in the mutation argument is of the form
{
file: Upload {
resolve: [Function (anonymous)],
reject: [Function (anonymous)],
promise: Promise {
[Object],
[Symbol(async_id_symbol)]: 385,
[Symbol(trigger_async_id_symbol)]: 367,
[Symbol(kResourceStore)]: undefined
},
file: {
filename: 'test_image.png',
mimetype: 'image/png',
encoding: '7bit',
createReadStream: [Function: createReadStream]
}
}
}
It seems like the Upload scalar is not working properly in the lambda function.
Package versions
@nestjs/graphql: v10.0.5
graphql: 16.3.0
graphql-upload: 13.0.0
@piyushk96 I forked this module and made it work in Lambdas: https://github.com/flash-oss/graphql-upload-minimal#aws-lambda
const { processRequest } = require("graphql-upload-minimal");
module.exports.processRequest = function (event) {
  return processRequest(event, null, { environment: "lambda" });
};
@koresar I'm unable to use your package as it has a peer dependency of graphql v13.1 - 15 and I'm using graphql v16, so npm throws an error when I try to install your package.
Oh. Sorry about that. I can easily fix that on Monday. Or, perhaps, you can submit a PR fixing that. I'll publish immediately.
I've got you sorted @piyushk96. graphql-upload-minimal has been updated to support graphql v16.
Thanks @koresar for the update. But the original graphql-upload package just worked for me. There was a mistake in my code due to which the Upload scalar's resolver was not executing, and I was getting the wrong input format. I can confirm @vaunus that graphql-upload works out of the box with Apollo v3 on AWS Lambda using @vendia/serverless-express.