convex-backend
Object part missing hash - DigitalOcean Spaces
I have been trying to set up convex to use DigitalOcean Spaces for file storage, but I'm facing some issues.
https://docs.digitalocean.com/products/spaces/reference/s3-compatibility/
It seems to be a header error, but based on the docs, DigitalOcean does generate the required headers.
Env variables that I used (with redactions):
- AWS_REGION=sgp1
- S3_ENDPOINT_URL=https://sgp1.digitaloceanspaces.com
- AWS_ACCESS_KEY_ID=DO00Z....
- AWS_SECRET_ACCESS_KEY=EvD1E7....
- S3_STORAGE_EXPORTS_BUCKET=bucket-exports
- S3_STORAGE_FILES_BUCKET=bucket-user-files
- S3_STORAGE_MODULES_BUCKET=bucket-modules
- S3_STORAGE_SEARCH_BUCKET=bucket-search-indexes
- S3_STORAGE_SNAPSHOT_IMPORTS_BUCKET=bucket-imports
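For reference, this is roughly how I'd expect those variables to feed into an aws-sdk-s3 client pointed at Spaces (just a sketch on my end, not the actual convex-backend wiring):

```rust
use aws_config::BehaviorVersion;
use aws_sdk_s3::Client;

// AWS_REGION, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are picked up by the
// SDK's default provider chain; only the endpoint needs to be overridden for Spaces.
async fn spaces_client() -> Client {
    let endpoint = std::env::var("S3_ENDPOINT_URL")
        .unwrap_or_else(|_| "https://sgp1.digitaloceanspaces.com".to_string());
    let config = aws_config::defaults(BehaviorVersion::latest())
        .endpoint_url(endpoint)
        .load()
        .await;
    Client::new(&config)
}
```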
The error that I receive in convex is:
2025-05-04T16:38:47.420031Z ERROR common::errors: Caught error (RUST_BACKTRACE=1 RUST_LOG=info,common::errors=debug for full trace): Object part missing hash! Expected crc32
Note that this is a self-hosted instance of convex - any help is much appreciated! I would prefer to use DigitalOcean rather than Amazon S3. I just tested the S3 integration and, after having to regenerate the access token twice, it finally worked.
The file storage code is more battle tested with S3 than with other providers.
@Spioune worked on R2 support and might be able to help you out if they have time. https://github.com/get-convex/convex-backend/pull/53
If you want to take a stab at it yourself, you can get the full stack trace via the instructions in the error message:
RUST_BACKTRACE=1 RUST_LOG=info,common::errors=debug
Then work from there to understand the missing hash issue and submit a fix if necessary.
Ideally, if it's claiming to be S3 compatible, it should just work, but it seems like there may be some additional requirements.
That would be great! I ran the server with the log and backtrace settings but didn't get much information:
backend-1 | 2025-05-04T23:07:11.107685Z INFO convex-cloud-http: [] 192.168.65.1:26052 "OPTIONS /api/storage/upload?token=01618ac6018107950d93a0aed3bcad3f0225cf3b188d82fb5c826cc7a4e66652bb9a32e20ace5e6337301ec63a4b2cc8a2022390c2ed77bc9efe9c07e86900bd03d4655d9f HTTP/1.1" 200 "http://127.0.0.1:6791/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/135.0.0.0 Safari/537.36" - - 0.032ms
backend-1 | 2025-05-04T23:07:11.108846Z INFO file_storage::core: Uploading with content length Some(ContentLength(3747))
backend-1 | 2025-05-04T23:07:11.230592Z ERROR common::errors: Caught error (RUST_BACKTRACE=1 RUST_LOG=info,common::errors=debug for full trace): Object part missing hash! Expected crc32
backend-1 | 2025-05-04T23:07:11.230655Z DEBUG common::errors: Object part missing hash! Expected crc32
backend-1 |
backend-1 | Stack backtrace:
backend-1 | 0: <unknown>
backend-1 | 1: <unknown>
backend-1 | 2025-05-04T23:07:11.231174Z DEBUG common::errors: Not reporting above error: SENTRY_DSN not set.
backend-1 | 2025-05-04T23:07:11.231274Z INFO convex-cloud-http: [] 192.168.65.1:26052 "POST /api/storage/upload?token=01618ac6018107950d93a0aed3bcad3f0225cf3b188d82fb5c826cc7a4e66652bb9a32e20ace5e6337301ec63a4b2cc8a2022390c2ed77bc9efe9c07e86900bd03d4655d9f HTTP/1.1" 500 "http://127.0.0.1:6791/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/135.0.0.0 Safari/537.36" application/json - 122.562ms
Ah yeah - I remember that the prebuilt docker image is built in release mode with debug symbols stripped (to save space), which is why the backtrace only shows <unknown> frames.
I just found the spot by searching: https://github.com/get-convex/convex-backend/blob/c8b078060b57c70d355f729b70936cd7ce7a83ae/crates/aws_s3/src/types.rs#L64
You can also build from source https://github.com/get-convex/convex-backend/blob/main/BUILD.md - and then the backtrace will work fully. You (or whoever picks this up) will have to build from source anyway to try to do a fix.
It seems like DigitalOcean doesn't return the crc32 field in the response to the UploadPart call like S3 does.
So it seems like DigitalOcean is slightly overpromising on its claim of being S3 compatible. A path forward might be to add some logic that tolerates a missing crc32 when you set an env var (e.g. S3_STORAGE_SKIP_CHECKSUMMING) or something like that. I wouldn't want it on by default, because we care about checksumming in general, but it might be what you need to work with DigitalOcean's storage.
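Roughly what I have in mind (a hypothetical sketch, not the current code in crates/aws_s3/src/types.rs, and the env var name above is just a placeholder):

```rust
use anyhow::{anyhow, Result};

/// Returns the part's crc32 if the provider supplied one; optionally tolerates a
/// missing checksum when an opt-in env var is set.
fn require_part_crc32(returned_crc32: Option<String>) -> Result<Option<String>> {
    // Placeholder env var name from the suggestion above; not an existing flag.
    let skip_ok = std::env::var("S3_STORAGE_SKIP_CHECKSUMMING")
        .map(|v| v == "1" || v.eq_ignore_ascii_case("true"))
        .unwrap_or(false);
    match returned_crc32 {
        Some(crc32) => Ok(Some(crc32)),
        // The provider (e.g. DigitalOcean Spaces) omitted the checksum, but the
        // operator explicitly opted in to proceeding without verification.
        None if skip_ok => {
            tracing::warn!("UploadPart response missing crc32; skipping checksum verification");
            Ok(None)
        }
        None => Err(anyhow!("Object part missing hash! Expected crc32")),
    }
}
```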
Yeah, I'm also not a big fan of skipping the checksum, but I'd be willing to fiddle with the source to see if I can get something acceptable running and submit a PR for review if we get somewhere.
Is there any documentation for building the docker image directly with docker? I have tried building both the latest release and the main branch using the self-hosted backend Dockerfile, but they don't seem to run successfully after the build - so I might be missing something.
We've typically gone with building from source on the host machine. That's what I'd recommend and usually do personally.
https://github.com/get-convex/convex-backend/blob/main/BUILD.md
Run `just run-local-backend` after installing dependencies.
If you really want to build it inside a docker container, you can use this: https://github.com/get-convex/convex-backend/tree/main/self-hosted/docker-build
It may also be worth scouring DigitalOcean's API docs - perhaps there's a way to get it to do checksumming in a slightly different way. Whatever you come up with - I'll take a look.
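For example (just a guess on my end, and the current code may already do something like this): with the Rust AWS SDK you can explicitly request a CRC32 on each part and then check whether Spaces echoes it back - if it does, the missing-hash error would go away without skipping checksumming at all:

```rust
use aws_sdk_s3::primitives::ByteStream;
use aws_sdk_s3::types::ChecksumAlgorithm;
use aws_sdk_s3::{Client, Error};

// Ask the SDK to compute and send a CRC32 with the part, then read back whatever
// checksum the provider echoes. On AWS S3, checksum_crc32() comes back Some(...);
// the open question is whether Spaces does the same when the algorithm is requested.
async fn upload_part_with_crc32(
    client: &Client,
    bucket: &str,
    key: &str,
    upload_id: &str,
    part_number: i32,
    body: ByteStream,
) -> Result<Option<String>, Error> {
    let out = client
        .upload_part()
        .bucket(bucket)
        .key(key)
        .upload_id(upload_id)
        .part_number(part_number)
        .checksum_algorithm(ChecksumAlgorithm::Crc32)
        .body(body)
        .send()
        .await?;
    Ok(out.checksum_crc32().map(str::to_string))
}
```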
We did just update the AWS SDK versions very recently (not yet in the prebuilt docker image, but they will appear if you build from source): https://github.com/get-convex/convex-backend/commit/d340923c43c7005204c77c530d56e4f7d5563e83
So it may be worth testing with that to see if there are any differences.
You could also consider, in parallel, filing an issue with DigitalOcean Spaces to see if they're willing to fix it on their end.
Same issue with Aliyun OSS (Alibaba Cloud), which also claims compatibility with Amazon S3.
#198 should fix this / give you an environment variable to bypass the problem. It will go out with the next release (within a week or so).