
[Bug]: S3 buckets are not working with Docker Compose

Bambijow opened this issue 8 months ago • 25 comments

Issue Description

After installing and configuring OpenSign locally with Docker to use an S3 bucket, OpenSign is not working. When uploading a file, the app returns a 400 error with {"code":130,"error":"UnknownError"} as the response.

Expected Behavior

OpenSign should be able to upload to and read from the bucket.

Current Behavior

OpenSign returns a 400 Bad Request response with this body: {"code":130,"error":"UnknownError"}

Steps to reproduce

Install OpenSign locally with Docker Compose. With local storage, everything is okay. With the S3 bucket configuration, OpenSign returns a 400 Bad Request response with {"code":130,"error":"UnknownError"} as the body.

Screenshots of the issue (optional)

No response

Operating System [e.g. MacOS Sonoma 14.1, Windows 11]

Ubuntu 24.04.2

What browsers are you seeing the problem on?

Chrome, Firefox, Safari, Microsoft Edge

What version of OpenSign™ are you seeing this issue on? [e.g. 1.0.6]

2.15.0 (With 1.4.2 everything works)

What environment are you seeing the problem on?

Dev (localhost or vercel)

Please check the boxes that apply to this issue report.

  • [x] I have searched the existing issues & discussions to make sure that this is not a duplicate.

Code of Conduct

  • [x] I agree to follow this project's Code of Conduct

Bambijow avatar Apr 03 '25 07:04 Bambijow

Did you configure the storage options in your .env.prod file correctly? The simplest way to get going is to set USE_LOCAL to true.

andrew-opensignlabs avatar Apr 03 '25 07:04 andrew-opensignlabs

Yes, here is my storage option in my .env

DO_SPACE=bucket name
DO_ENDPOINT=s3.eu-west-1.amazonaws.com
DO_BASEURL=https://s3.eu-west-1.amazonaws.com/
DO_ACCESS_KEY_ID=access key
DO_SECRET_ACCESS_KEY=secret
DO_REGION=eu-west-1
USE_LOCAL=false

I already tried changing DO_BASEURL to https://bucket-name.s3.eu-west-1.amazonaws.com/
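For anyone else debugging this, a minimal sanity check of the DO_* variables can rule out simple configuration slips. This is just a sketch: the required-key list and the trailing-slash rule are assumptions drawn from this thread, not OpenSign's documented validation.

```python
import os

# Hypothetical sanity check for the DO_* storage variables.
# The required-key list and trailing-slash rule are assumptions
# drawn from this thread, not OpenSign's actual validation logic.
REQUIRED = [
    "DO_SPACE", "DO_ENDPOINT", "DO_BASEURL",
    "DO_ACCESS_KEY_ID", "DO_SECRET_ACCESS_KEY", "DO_REGION",
]

def check_storage_env(env):
    problems = []
    for key in REQUIRED:
        if not env.get(key):
            problems.append(f"{key} is missing or empty")
    if env.get("DO_BASEURL", "").endswith("/"):
        problems.append("DO_BASEURL has a trailing slash")
    if env.get("USE_LOCAL", "").lower() == "true":
        problems.append("USE_LOCAL=true overrides the bucket settings")
    return problems

if __name__ == "__main__":
    for problem in check_storage_env(dict(os.environ)):
        print(problem)
```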

Bambijow avatar Apr 03 '25 07:04 Bambijow

@andrew-opensignlabs

Hello, any news?

I have the same issue with S3.

Thank you

NicolasDierick avatar Apr 16 '25 18:04 NicolasDierick

@Bambijow @NicolasDierick can you share the OpenSign server logs?

andrew-opensignlabs avatar Apr 16 '25 18:04 andrew-opensignlabs

My OpenSign server logs:

> [email protected] start
> node index.js

Please provide valid SMTP credentials
opensign-server running on port 8080.
Command output: 
Parse DBTool v1.2.0 - Parse server tool for data migration and seeding.

Run migration on parse-server at http://localhost:8080/app

 INFO  No migrations were executed, database schema was already up to date.

 SUCCESS  Successfully run migrations.


 INFO  No migrations were executed, database schema was already up to date.
 SUCCESS  Successfully ran indexed migrations directly on db.

The error sent to my browser: Request URL: https://localhost:3001/app/files/ntTnEoxFXXDOm5fY.pdf {"code":130,"error":"UnknownError"}

Bambijow avatar Apr 17 '25 06:04 Bambijow

Same thing on my side.

Strange behaviour; it seems the request doesn't know what to do at all.

It works with USE_LOCAL set to "true", but on my side I can't deploy a persistent volume and need to work with a bucket.

@andrew-opensignlabs Do you have the same issue with a bucket?

NicolasDierick avatar Apr 23 '25 06:04 NicolasDierick

I'm having the same issue when trying to upload a new template.

https://{my public domain}/api/app/files/x3ECLqNq4II99MON.pdf

POST 400 Bad Request {"code":130,"error":"UnknownError"}

I'm also using Docker Compose and S3. Same Server logs. My config is:

DO_SPACE=bucket-name
DO_ENDPOINT=s3.us-east-1.amazonaws.com
DO_BASEURL=https://bucket-name.s3.us-east-1.amazonaws.com
DO_ACCESS_KEY_ID=access key
DO_SECRET_ACCESS_KEY=secret
DO_REGION=us-east-1
USE_LOCAL=false

When I saw this error I was trying to upload a new PDF template document.

I don't know if this is significant or not, but I noted that when trying to upload a PDF it had the correct content type in the payload:

_ContentType: "application/pdf"

I thought maybe I couldn't upload PDFs, so I considered another MIME type that might work. I tried uploading a random PNG, but it failed again with the same error. However, the content type still indicated it was a PDF. Curious. Does it attempt to convert any provided content to a PDF for this endpoint? For example, if I had a scan and provided it as my document, is it trying to convert that to a PDF format before sending it off?

Then wondered if just this portion of the app is broken with uploading. So I tried replacing the default profile picture with a random PNG, but it failed with the same error:

POST 400 Bad Request {"code":130,"error":"UnknownError"}

But this time, I noted the MIME type was correct:

_ContentType: "image/png"

Curious that when uploading a template document, it seems to be "stuck" on application/pdf.

Regardless of that fact, multiple endpoints are not allowing me to upload and are failing with the same error. I'll happily provide any debug logs if I can get some guidance on changing the logging level.

mrcleanandfresh avatar May 30 '25 00:05 mrcleanandfresh

Yeah, same for me. I named my MinIO service "minio.service", so in the .env file I set DO_ENDPOINT=http://minio.service:9000, but it doesn't work if DO_SPACE is not blank. An error is raised like "NOTFOUND opensign.minio.service" if I set DO_SPACE=opensign.

donakhseputa avatar Jun 03 '25 08:06 donakhseputa

Same issue here - had to nuke my install recently and set it up again, but I'm getting the same 400 error for S3. Nothing in the OpenSign or OpenSignServer logs, but the same console error.

EDIflyer avatar Jun 04 '25 20:06 EDIflyer

@andrew-opensignlabs, it appears that this is a serious and widespread issue that has been ongoing for two months now. Have you made any progress on a fix? Would you like some help? If you point me to the contribution documentation, I can get up and running locally and see what I can find.

mrcleanandfresh avatar Jun 05 '25 16:06 mrcleanandfresh

You can find the detailed steps to setup AWS S3 storage here.

andrew-opensignlabs avatar Jun 05 '25 19:06 andrew-opensignlabs

@andrew-opensignlabs have there been any changes to the steps/requirements? It used to work fine for me but now has the errors above.

EDIflyer avatar Jun 05 '25 19:06 EDIflyer

You can find the detailed steps to setup AWS S3 storage here.

Oh, I did that already, but am still getting those errors. I meant steps to get the project running locally so I can help debug and submit a PR.

mrcleanandfresh avatar Jun 05 '25 20:06 mrcleanandfresh

Any update on this @andrew-opensignlabs ?

EDIflyer avatar Jun 25 '25 21:06 EDIflyer

I think the problem is DO_SPACE. If you have something like a MinIO service running locally, DO_SPACE cannot be used as a subdomain. From what I can see, OpenSign always prepends DO_SPACE as a subdomain of DO_ENDPOINT when DO_SPACE is not blank, as in my earlier comment:

> I named my MinIO service "minio.service", so in the .env file I set DO_ENDPOINT=http://minio.service:9000, but it doesn't work if DO_SPACE is not blank. An error like "NOTFOUND opensign.minio.service" is raised if I set DO_SPACE=opensign.

But locally, the bucket name is not a subdomain of the MinIO service. I think OpenSign should not use DO_SPACE as a subdomain; that would fix it.
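To illustrate the suspected behaviour (this is a generic sketch of the two S3 addressing styles, not OpenSign's actual code): virtual-hosted-style addressing turns the bucket into a subdomain of the endpoint, which local DNS cannot resolve, while path-style addressing keeps the endpoint host intact.

```python
from urllib.parse import urlsplit

def object_url(endpoint, bucket, key, path_style=False):
    # Illustrative only: generic S3 URL construction, not OpenSign's code.
    parts = urlsplit(endpoint)
    if path_style:
        # Path-style: bucket goes into the path, host stays resolvable.
        return f"{parts.scheme}://{parts.netloc}/{bucket}/{key}"
    # Virtual-hosted style: bucket becomes a subdomain of the endpoint.
    return f"{parts.scheme}://{bucket}.{parts.netloc}/{key}"

# Virtual-hosted style produces the unresolvable host seen above:
print(object_url("http://minio.service:9000", "opensign", "doc.pdf"))
# -> http://opensign.minio.service:9000/doc.pdf

# Path-style keeps the MinIO service hostname intact:
print(object_url("http://minio.service:9000", "opensign", "doc.pdf", path_style=True))
# -> http://minio.service:9000/opensign/doc.pdf
```

MinIO generally requires path-style access unless DNS is set up to resolve bucket subdomains, which matches the NOTFOUND error quoted above.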

donakhseputa avatar Jun 26 '25 01:06 donakhseputa

Any update on this?

EDIflyer avatar Jul 14 '25 16:07 EDIflyer

I also have the same error. Followed the docs precisely. Anyone with a workaround?

dvdl16 avatar Aug 03 '25 17:08 dvdl16

no workaround on my side

@andrew-opensignlabs is S3 backend working on your side ?

NicolasDierick avatar Aug 07 '25 15:08 NicolasDierick

Same issue, get HTTP 400 when uploading file.

bzdk avatar Aug 12 '25 07:08 bzdk

Lots of releases since April, but still no fix 😢

NicolasDierick avatar Sep 16 '25 16:09 NicolasDierick

Agree, we've had to stop using OpenSign as a result sadly. No reply here or on Discord so I couldn't really be persuaded a fix was imminent either.

EDIflyer avatar Sep 16 '25 16:09 EDIflyer

I'm more than willing to help with this to get it done, but I need some clear instructions/documentation on how to get everything running locally from source. I can find the contribution guidelines. Anyone here who's a developer would be willing to do the same, I'm sure, to make progress on this issue. Clearly, @andrew-opensignlabs is overwhelmed or something, given that he hasn't replied here in three months for this severe bug affecting many people. But I can help with this if you need me.

mrcleanandfresh avatar Sep 17 '25 14:09 mrcleanandfresh

I had this issue after upgrading from v2.22.0 to v2.29.2 too, but after trying various things I managed to fix it. I hope someone can confirm whether this was indeed the fix or something else I did. I'll only mention the changes I made relative to the .env.example file. I had used .env.local_dev before, but have since updated my config to match .env.example more closely.

  1. Updated nginx config
    proxy_pass http://127.0.0.1:8080/;
    
    to
    rewrite ^/api/(.*)$ /$1 break;
    proxy_pass http://127.0.0.1:8080;
    
  2. Made sure HOST_URL was set to https://app.opensignlabs.com before running docker compose.
  3. Updated the env file (without a trailing slash).
    PUBLIC_URL=https://app.opensignlabs.com
    
  4. Updated the Bucket permissions to match the examples in https://docs.opensignlabs.com/docs/self-host/cloud-storage/s3/ (I used cloudflare R2)

Somehow it fixed the S3 upload issues. I suspect the nginx rewrite might be the real fix.
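For what it's worth, a small sketch of what that rewrite does (my illustration of the nginx rule, not part of SRChiP's setup): it strips the /api prefix before proxying, so the Parse server receives the path it actually serves.

```python
import re

def nginx_rewrite(path):
    # Equivalent of the nginx rule: rewrite ^/api/(.*)$ /$1 break;
    return re.sub(r"^/api/(.*)$", r"/\1", path)

print(nginx_rewrite("/api/app/files/ntTnEoxFXXDOm5fY.pdf"))
# -> /app/files/ntTnEoxFXXDOm5fY.pdf
```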

SRChiP avatar Oct 12 '25 14:10 SRChiP

Unfortunately, this didn't work for me. It could be because I'm using AWS S3 and you're using Cloudflare R2, and perhaps a slight difference in protocol/packets means yours works and mine does not.

I'm using Caddy with the following Caddyfile, which is functionally the same as your nginx config:

{
	auto_https off
}

http://:80 {
	reverse_proxy client:3000
	handle_path /api/* {
		reverse_proxy server:8080
	}
}

Then a pretty standard docker-compose file:

docker-compose file
services:
  server:
    image: opensign/opensignserver:main
    container_name: OpenSignServer-container
    volumes:
      - opensign-files:/usr/src/app/files
    expose:
      - 8080
    depends_on:
      - mongo
    env_file: .env.prod
    environment:
      - NODE_ENV=production
      - SERVER_URL=${HOST_URL:-https://app.opensignlabs.com}/api/app
      - PUBLIC_URL=${HOST_URL:-https://app.opensignlabs.com}
    networks:
      - app-network
    restart: always
  mongo:
    image: mongo:latest
    container_name: mongo-container
    volumes:
      - data-volume:/data/db
    expose:
      - 27017
    networks:
      - app-network
    restart: always
  client:
    image: opensign/opensign:main
    container_name: OpenSign-container
    depends_on:
      - server
    env_file: .env.prod  
    expose:
      - 3000
    networks:
      - app-network
    restart: always
  caddy:
    image: caddy:latest
    container_name: caddy-container
    ports:
      - "32080:80"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - app-network
    environment:
      - HOST_URL=${HOST_URL:-https://app.opensignlabs.com}
    restart: always
networks:
  app-network:
    driver: bridge

volumes:
  data-volume:
  web-root:
  caddy_data:
  caddy_config:
  opensign-files:

I set the HOST_URL to the same as the PUBLIC_URL and it still gives me:

POST 400 Bad Request
{"code":130,"error":"UnknownError"}

mrcleanandfresh avatar Oct 12 '25 20:10 mrcleanandfresh

Got the same error locally, with a MinIO S3 backend. The subdomain thing is OK for me: I added a new domain in my nginx proxy manager and set up the config for my other S3 URL there as well.

Still no luck... maybe try going back to a previous version?

Does anybody have any other solution?

@andrew-opensignlabs I know you are busy, but how can we support you to get this fixed?

its4l3x avatar Oct 15 '25 00:10 its4l3x