
Lambda: Custom output destination not working for Tigris (S3-compatible)

Open · empz opened this issue 7 months ago · 11 comments

According to these docs (https://www.remotion.dev/docs/lambda/custom-destination#saving-to-another-cloud), we should be able to output a video to any S3-compatible storage.

Tigris is an S3-compatible storage service, and I've been using the official AWS S3 SDK with it, as they recommend, without issues: https://www.tigrisdata.com/docs/api/s3/

When trying to:

import { renderMediaOnLambda } from "@remotion/lambda/client";

await renderMediaOnLambda({
    functionName: functionName,
    region: REGION,
    serveUrl: siteName,
    composition: compositionName,
    inputProps: inputProps,
    webhook: webhook,
    deleteAfter: "30-days",
    timeoutInMilliseconds: 90_000, // 90 seconds
    downloadBehavior: {
      type: "download",
      fileName: null,
    },
    // Route the output to the Tigris bucket instead of the Remotion bucket
    outName: {
      key: generateId(12),
      bucketName: "my.bucket.name",
      s3OutputProvider: {
        endpoint: TIGRIS_ENDPOINT,
        accessKeyId: env.TIGRIS_ACCESS_KEY_ID,
        secretAccessKey: env.TIGRIS_SECRET_ACCESS_KEY,
      },
    },
    privacy: "no-acl",
  });

I get the following error:

{
  "message": "Unable to access item \"OYqtngXszMav\" from bucket \"my.bucket.name\" (S3 Endpoint = https://fly.storage.tigris.dev). The Lambda role must have permission for both \"s3:GetObject\" and \"s3:ListBucket\" actions.",
  "name": "Error",
  "stack": "Error: Unable to access item \"OYqtngXszMav\" from bucket \"my.bucket.name\" (S3 Endpoint = https://fly.storage.tigris.dev). The Lambda role must have permission for both \"s3:GetObject\" and \"s3:ListBucket\" actions.\n    at zwn (/var/task/index.js:152:24007)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async IRn (/var/task/index.js:152:47803)\n    at async ORn (/var/task/index.js:152:52773)\n    at async xAn (/var/task/index.js:153:26375)\n    at async Runtime.handleOnceStreaming (file:///var/runtime/index.mjs:1206:26)"
}

As mentioned, I'm able to instantiate and use a standard S3Client with the same endpoint and credentials using the following code:

import { S3Client } from "@aws-sdk/client-s3";

const s3 = new S3Client({
  region,
  endpoint,
  credentials: {
    accessKeyId,
    secretAccessKey,
  },
});

One particularity worth mentioning: the bucket name has the shape subdomain.domain.tld, because Tigris requires the bucket name to match the CNAME record when using custom domains (which I am).

empz · May 02 '25 11:05

Try also setting the correct s3OutputProvider.region value.

You get this error if the response is 403. I'll also better surface the original error in case it has any information: https://github.com/remotion-dev/remotion/pull/5230
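
For example, a sketch based on the outName block you posted (the region value here is only an illustration; use whatever region value your provider expects):

outName: {
  key: generateId(12),
  bucketName: "my.bucket.name",
  s3OutputProvider: {
    endpoint: TIGRIS_ENDPOINT,
    accessKeyId: env.TIGRIS_ACCESS_KEY_ID,
    secretAccessKey: env.TIGRIS_SECRET_ACCESS_KEY,
    // Set this to the region your S3-compatible provider expects
    region: "us-east-1",
  },
},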

JonnyBurger · May 02 '25 13:05

> Try also setting the correct s3OutputProvider.region value.
>
> You get this error if the response is 403. I'll also better surface the original error in case it has any information: #5230

Tigris uses "auto" as the region, which the current types don't allow:

[Screenshot of the type error]

empz · May 02 '25 17:05

Anyway, ts-ignoring the type error doesn't work either; it fails with the same error. I also tried with and without forcePathStyle.
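
For completeness, this is roughly the s3OutputProvider block I tried (a sketch; the forcePathStyle line was toggled between runs):

s3OutputProvider: {
  endpoint: TIGRIS_ENDPOINT,
  accessKeyId: env.TIGRIS_ACCESS_KEY_ID,
  secretAccessKey: env.TIGRIS_SECRET_ACCESS_KEY,
  // @ts-expect-error - "auto" is not part of the accepted AWS region union
  region: "auto",
  forcePathStyle: true, // also tried false and omitting it entirely
},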

progress.json

{
  "chunks": [],
  "framesRendered": 0,
  "framesEncoded": 0,
  "combinedFrames": 0,
  "timeToCombine": null,
  "timeToEncode": null,
  "lambdasInvoked": 0,
  "retries": [],
  "postRenderData": null,
  "timings": [],
  "renderMetadata": null,
  "errors": [
    {
      "chunk": null,
      "frame": null,
      "name": "Error",
      "stack": "Error: Unable to access item \"C7W9dje8hJ7S\" from bucket \"renders-dev.mykaraoke.video\" (S3 Endpoint = https://fly.storage.tigris.dev). The Lambda role must have permission for both \"s3:GetObject\" and \"s3:ListBucket\" actions.\n    at zwn (/var/task/index.js:152:24007)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async IRn (/var/task/index.js:152:47803)\n    at async ORn (/var/task/index.js:152:52773)\n    at async xAn (/var/task/index.js:153:26375)\n    at async Runtime.handleOnceStreaming (file:///var/runtime/index.mjs:1206:26)",
      "type": "stitcher",
      "isFatal": true,
      "tmpDir": null,
      "attempt": 1,
      "totalAttempts": 1,
      "willRetry": false,
      "message": "Unable to access item \"C7W9dje8hJ7S\" from bucket \"renders-dev.mykaraoke.video\" (S3 Endpoint = https://fly.storage.tigris.dev). The Lambda role must have permission for both \"s3:GetObject\" and \"s3:ListBucket\" actions."
    }
  ],
  "timeToRenderFrames": null,
  "timeoutTimestamp": 1746209740596,
  "functionLaunched": 1746209140621,
  "serveUrlOpened": 1746209140989,
  "compositionValidated": 1746209141068,
  "receivedArtifact": []
}

I triple-checked my credentials and also created a fresh new bucket and keys. Same error.

empz · May 02 '25 18:05

@empz Can you check with the latest version? We now properly propagate the error messages if there are any.

I would like to not have to make accounts for every single S3-compatible storage.

JonnyBurger · May 06 '25 12:05

> @empz Can you check with the latest version? We now properly propagate the error messages if there are any.
>
> I would like to not have to make accounts for every single S3-compatible storage.

@JonnyBurger I'm unable to deploy the Lambda function on 4.0.297 (I can on 4.0.295), with the exact same deploy.mjs script.

4.0.295

$ node src/deploy.mjs

Selected region: us-east-2
Deploying Lambda function... remotion-render-4-0-295-mem3008mb-disk10240mb-600sec (already existed)
Ensuring bucket... remotionlambda-useast2-XXXXXX (already existed)
Deploying site... mykaraoke-video-XXXXXX

You now have everything you need to render videos!
Re-run this command when:
  1) you changed the video template
  2) you changed lambda-config.mjs
  3) you upgraded Remotion to a newer version

4.0.297

$ node src/deploy.mjs

Selected region: us-east-2
Deploying Lambda function... node:internal/modules/run_main:128
    triggerUncaughtException(
    ^

Error: ENOENT: no such file or directory, open 'C:\Users\epari\Coding\remotionlambda-arm64.zip'
    at Object.openSync (node:fs:573:18)
    at readFileSync (node:fs:452:35)
    at Object.createFunction (file:///C:/Users/epari/Coding/mykaraoke-video.worktrees/beta/node_modules/@remotion/lambda/dist/esm/index.mjs:9626:31)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async internalDeployFunction (file:///C:/Users/epari/Coding/mykaraoke-video.worktrees/beta/node_modules/@remotion/lambda/dist/esm/index.mjs:9971:19)
    at async file:///C:/Users/epari/Coding/mykaraoke-video.worktrees/beta/node_modules/@remotion/renderer/dist/esm/error-handling.mjs:338:14
    at async file:///C:/Users/epari/Coding/mykaraoke-video.worktrees/beta/src/deploy.mjs:48:3 {
  errno: -4058,
  code: 'ENOENT',
  syscall: 'open',
  path: 'C:\\Users\\epari\\Coding\\remotionlambda-arm64.zip'
}

Node.js v20.18.2

empz · May 06 '25 16:05

It looks like it's looking for the zip at the wrong location. That's not the repository directory.

I'm using Bun as the package manager. Not sure if that's related, but it was working fine until 4.0.297.

[Screenshot]

empz · May 06 '25 16:05

Very sorry about this! v4.0.298 is now out, reverting the change.

JonnyBurger · May 06 '25 17:05

Unfortunately, this doesn't work, and the extra error message doesn't help.

import { env } from "./env";
import { createLogger } from "./lib/logger";
import { defaultCompositionProps } from "./types/constants";
import { DISK, RAM, REGION, TIMEOUT } from "./lambda-config.mjs";
import { getRemotionSiteName } from "./server/remotion";
import {
  speculateFunctionName,
  renderMediaOnLambda,
} from "@remotion/lambda/client";
import { generateId } from "./lib/id";
import { TIGRIS_ENDPOINT } from "./server/s3";
import {
  GetObjectCommand,
  ListObjectsV2Command,
  PutObjectCommand,
  S3Client,
} from "@aws-sdk/client-s3";
const log = createLogger("playground");

const functionName = speculateFunctionName({
  diskSizeInMb: DISK,
  memorySizeInMb: RAM,
  timeoutInSeconds: TIMEOUT,
});

const siteName = getRemotionSiteName();

const outKey = generateId(12);

const s3 = new S3Client({
  region: "auto",
  endpoint: TIGRIS_ENDPOINT,
  credentials: {
    accessKeyId: env.TIGRIS_ACCESS_KEY_ID,
    secretAccessKey: env.TIGRIS_SECRET_ACCESS_KEY,
  },
});

// List objects in bucket
const { Contents } = await s3.send(
  new ListObjectsV2Command({
    Bucket: env.TIGRIS_RENDERS_BUCKET_NAME,
  }),
);
log.info(`There are ${Contents?.length ?? 0} objects in the bucket`);

// Write a test file to the bucket
await s3.send(
  new PutObjectCommand({
    Bucket: env.TIGRIS_RENDERS_BUCKET_NAME,
    Key: "test.txt",
    Body: "Hello, world!",
    ContentType: "text/plain",
  }),
);

// Get object from bucket
const { ContentLength } = await s3.send(
  new GetObjectCommand({
    Bucket: env.TIGRIS_RENDERS_BUCKET_NAME,
    Key: "test.txt",
  }),
);
log.info({ ContentLength }, "ContentLength");

const { renderId } = await renderMediaOnLambda({
  functionName: functionName,
  region: REGION,
  serveUrl: siteName,
  composition: "KaraokeVideo",
  inputProps: defaultCompositionProps,
  downloadBehavior: {
    type: "download",
    fileName: null,
  },
  codec: "h264",
  outName: {
    key: outKey,
    bucketName: env.TIGRIS_RENDERS_BUCKET_NAME,
    s3OutputProvider: {
      endpoint: TIGRIS_ENDPOINT,
      accessKeyId: env.TIGRIS_ACCESS_KEY_ID,
      secretAccessKey: env.TIGRIS_SECRET_ACCESS_KEY,
      // @ts-expect-error
      region: "auto",
    },
  },
  privacy: "no-acl", // tried with and without, same thing
});

log.info({ renderId }, "Render ID");

Output:

❯ bun run .\src\playground.ts

[20:32:07.474] INFO (58296): There are 7 objects in the bucket
    module: "playground"
[20:32:07.622] INFO (58296): ContentLength
    module: "playground"
    ContentLength: 13
[20:32:08.245] INFO (58296): Render ID
    module: "playground"
    renderId: "mj6y6p932h"

progress.json on AWS S3 > mj6y6p932h

{
  "chunks": [],
  "framesRendered": 0,
  "framesEncoded": 0,
  "combinedFrames": 0,
  "timeToCombine": null,
  "timeToEncode": null,
  "lambdasInvoked": 0,
  "retries": [],
  "postRenderData": null,
  "timings": [],
  "renderMetadata": null,
  "errors": [
    {
      "chunk": null,
      "frame": null,
      "name": "Error",
      "stack": "Error: Unable to access item \"0kWVKPN2FyyT\" from bucket \"renders-dev.mykaraoke.video\" (S3 Endpoint = https://fly.storage.tigris.dev) - got a 403 error when heading the file. Check your credentials and permissions. The Lambda role must have permission for both \"s3:GetObject\" and \"s3:ListBucket\" actions.\n    at iRn (/var/task/index.js:152:24007)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async WRn (/var/task/index.js:152:47893)\n    at async HRn (/var/task/index.js:152:52918)\n    at async qAn (/var/task/index.js:153:26373)\n    at async Runtime.handleOnceStreaming (file:///var/runtime/index.mjs:1206:26)",
      "type": "stitcher",
      "isFatal": true,
      "tmpDir": null,
      "attempt": 1,
      "totalAttempts": 1,
      "willRetry": false,
      "message": "Unable to access item \"0kWVKPN2FyyT\" from bucket \"renders-dev.mykaraoke.video\" (S3 Endpoint = https://fly.storage.tigris.dev) - got a 403 error when heading the file. Check your credentials and permissions. The Lambda role must have permission for both \"s3:GetObject\" and \"s3:ListBucket\" actions."
    }
  ],
  "timeToRenderFrames": null,
  "timeoutTimestamp": 1746556928223,
  "functionLaunched": 1746556328225,
  "serveUrlOpened": 1746556328682,
  "compositionValidated": 1746556328714,
  "receivedArtifact": []
}

empz · May 06 '25 18:05

Now, I'm not familiar enough with S3, but shouldn't this just need a PutObjectCommand to write the output file? Why does it need s3:GetObject (supported by Tigris) and s3:ListBucket? The latter might be the problem, as it's not even mentioned in https://www.tigrisdata.com/docs/api/s3/ (there's ListBuckets, but I don't think that's the same thing).

empz · May 06 '25 18:05

@empz This is because, by default, Remotion checks that you are not overwriting the file!

Maybe you want to set overwrite: true on renderMediaOnLambda() to disable that check.
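
For example, a minimal sketch reusing the names from your playground.ts above; the important addition is the overwrite flag:

import { renderMediaOnLambda } from "@remotion/lambda/client";

const { renderId } = await renderMediaOnLambda({
  functionName,
  region: REGION,
  serveUrl: siteName,
  composition: "KaraokeVideo",
  inputProps: defaultCompositionProps,
  codec: "h264",
  privacy: "no-acl",
  outName: {
    key: outKey,
    bucketName: env.TIGRIS_RENDERS_BUCKET_NAME,
    s3OutputProvider: {
      endpoint: TIGRIS_ENDPOINT,
      accessKeyId: env.TIGRIS_ACCESS_KEY_ID,
      secretAccessKey: env.TIGRIS_SECRET_ACCESS_KEY,
    },
  },
  // Skip the pre-render check that the output key does not already exist,
  // which is the step that required s3:GetObject / s3:ListBucket on the custom bucket.
  overwrite: true,
});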

JonnyBurger · May 07 '25 13:05

> @empz This is because, by default, Remotion checks that you are not overwriting the file!
>
> Maybe you want to set overwrite: true on renderMediaOnLambda() to disable that check.

Cool, that worked!

Just two small things.

  1. When passing outName.key, the file extension must be included. Is there an exported function from Remotion to derive it from the options passed? I wouldn't want to write my own logic for this (see the sketch after this list for the kind of helper I mean). Also, if you define an s3OutputProvider but don't pass a key, TypeScript doesn't complain, but the Lambda fails with TypeError: The S3 key must be a string. Looks like the typing could be improved. Or even better: if outName.key is not defined, use the default. I don't see why that wouldn't be possible when using another cloud storage.

  2. Can we do something about region accepting non-AWS values? If you define an s3OutputProvider, it's because you're not using AWS, right? So why is region typed as AWS regions only?
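
For (1), this is the kind of helper I mean. A hand-rolled sketch (my own codec-to-extension mapping, not an existing Remotion export):

import { generateId } from "./lib/id";

// Hypothetical helper: derive the file extension for outName.key from the codec.
// Ideally Remotion would export an equivalent of this.
const extensionForCodec: Record<string, string> = {
  h264: "mp4",
  h265: "mp4",
  vp8: "webm",
  vp9: "webm",
  prores: "mov",
  gif: "gif",
  mp3: "mp3",
  wav: "wav",
};

const codec = "h264";
const outKey = `${generateId(12)}.${extensionForCodec[codec] ?? "mp4"}`;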

Happy to make a PR if you agree.

Thanks!

empz · May 08 '25 18:05