[bug]: external s3/minio
Is there an existing issue for this?
- [X] I have searched the existing issues
Current behavior
When I upload the logo, the loader gets stuck, and in the console I get: "Something went wrong please try again later".
Steps to reproduce
- In my docker-compose.yml I disabled plane-minio, createbuckets and set USE_MINIO to 0.
- In my environment variables I set AWS_S3_ENDPOINT_URL=s3.eu-west-2.wasabisys.com, AWS_REGION=eu-west-2, and all other AWS-related variables
I couldn't find any useful info either in the docs or here among the issues.
Browser
Google Chrome
Version
Self-hosted
Hi @MyWay,
I noticed that you're encountering an issue with the endpoint URL. To resolve this, could you please try prepending https:// before the URL? This should ensure that the URL is using the correct protocol for the request.
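The scheme check being suggested can be sketched with the standard library; the helper name is ours, not Plane's, and the endpoint value is the one from the report above:

```python
from urllib.parse import urlparse

def normalize_endpoint(url: str) -> str:
    """Prepend https:// when the endpoint URL has no scheme.

    boto3 (used under the hood by Django storage backends) rejects
    endpoint URLs without an explicit scheme, which can surface as a
    generic 400 error in Plane's upload API.
    """
    if urlparse(url).scheme in ("http", "https"):
        return url
    return f"https://{url}"

print(normalize_endpoint("s3.eu-west-2.wasabisys.com"))
# → https://s3.eu-west-2.wasabisys.com
```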
Hi, I have tried your suggestion, but I'm getting the same error.
Any update or solution for this? I'm having the same issue. I even tried the MinIO setup when S3 didn't work, thinking there might be some issue with the S3 endpoint URL, but no luck. I still can't make uploads work: the upload API returns 400 with the message "Something went wrong please try again later", and that's it. One good improvement would be to log the actual error to the console or the logger, so it would be easier to debug and see where it is failing, rather than swallowing the error with such a broad exception catch. I'd be happy to open a PR for this.
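The logging improvement being proposed could look roughly like this; the function and logger names are hypothetical, not Plane's actual code:

```python
import logging

logger = logging.getLogger("plane.api.assets")

def save_file_asset(storage_save, name, content):
    """Hypothetical wrapper around the storage backend's save call.

    Instead of a broad `except Exception` that returns a generic 400,
    log the full traceback so the real S3/MinIO error is visible in
    the API logs, then re-raise for the view to handle.
    """
    try:
        return storage_save(name, content)
    except Exception:
        logger.exception("File asset upload failed for %r", name)
        raise
```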
Maybe this is helpful:
I had the same problem and debugging was pretty hard. It turned out that my self-hosted MinIO setup had TLS verification enabled. The certificates were custom-signed, and Plane was not able to verify them. I completely disabled the MinIO TLS setup and configured Plane to use plain HTTP. After this I was able to upload assets.
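An alternative to disabling TLS entirely is to check whether the custom certificate chain verifies at all. A quick stdlib probe (the host, port, and CA path are assumptions for your own setup) reproduces the verification failure outside Plane:

```python
import socket
import ssl

def check_minio_tls(host, port, cafile=None):
    """Probe whether a MinIO endpoint's certificate verifies.

    With a custom-signed certificate, pass the CA bundle via `cafile`;
    without it, verification fails here the same way it fails inside
    Plane (surfacing there only as the generic upload error).
    Raises ssl.SSLCertVerificationError on failure.
    """
    ctx = ssl.create_default_context(cafile=cafile)
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["subject"]

# Usage (placeholders): check_minio_tls("minio.internal", 9000,
#                                       cafile="/path/to/custom-ca.pem")
```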
I have this issue too. I'm using OpenStack / Swift and I just cannot get anything to upload.
In the browser:
POST
https://[redacted]/api/users/file-assets/
[HTTP/2 400 Bad Request 992ms]
Object { error: "Something went wrong please try again later" }
In Plane's logs:
planebackend | POST - /api/users/file-assets/ of Queries: 1
planebackend | Bad Request: /api/users/file-assets/
planebackend | 172.23.0.2:39992 - "POST /api/users/file-assets/ HTTP/1.1" 400
My env:
AWS_REGION="us-east-1"  # Set by my OpenStack provider
AWS_ACCESS_KEY_ID="[redacted]"
AWS_SECRET_ACCESS_KEY="[redacted]"
AWS_S3_ENDPOINT_URL="https://s3.[redacted]/object/v1/AUTH_[redacted]"  # As per my OpenStack provider's documentation
AWS_S3_BUCKET_NAME="plane-uploads"
FILE_SIZE_LIMIT=5242880
I'm not even sure how to diagnose this, there's nothing helpful at all in the logs.
A small update; the issue is not fully fixed, however. In my OpenStack case, changing AWS_S3_ENDPOINT_URL="https://s3.[redacted]/object/v1/AUTH_[redacted]" to AWS_S3_ENDPOINT_URL="https://s3.[redacted]" allows authentication and successful POST requests (e.g. file uploads). However, the actual bucket location is still at https://s3.[redacted]/object/v1/AUTH_[redacted]/plane-uploads/, which is not reflected in the AWS_S3_ENDPOINT_URL. This mismatch means the uploads cannot be retrieved.
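The mismatch can be illustrated by sketching how a path-style S3 object URL is derived (hostnames here are placeholders, not the real endpoints): the URL is endpoint + bucket + key, so any tenant path in the endpoint changes where objects are written and read.

```python
def object_url(endpoint, bucket, key):
    """Path-style S3 object URL: endpoint + bucket + key."""
    return "%s/%s/%s" % (endpoint.rstrip("/"), bucket, key)

# Auth succeeds against the bare endpoint, so Plane looks here...
print(object_url("https://s3.example.com", "plane-uploads", "logo.png"))
# → https://s3.example.com/plane-uploads/logo.png

# ...but Swift actually stores the object under the tenant path:
print(object_url("https://s3.example.com/object/v1/AUTH_tenant",
                 "plane-uploads", "logo.png"))
# → https://s3.example.com/object/v1/AUTH_tenant/plane-uploads/logo.png
```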
I solved it. My situation looks more like a django-storages issue.

- I printed the exception. The error message was: An error occurred (AccessControlListNotSupported) when calling the PutObject operation: The bucket does not allow ACLs.
- Make sure AWS_S3_ENDPOINT_URL="https://s3.amazonaws.com", not AWS_S3_ENDPOINT_URL="https://[bucket_name].s3.amazonaws.com".
- Edit apiserver/plane/settings/production.py:
  - Make sure AWS_S3_BUCKET_AUTH = True. It is important.
  - You need to edit AWS_S3_ADDRESSING_STYLE. Based on your AWS region, you can refer to:

    | Region | Setting |
    | --- | --- |
    | us-east-1 | default, or AWS_S3_SIGNATURE_VERSION = "s3v4" |
    | us-east-2 | AWS_S3_ADDRESSING_STYLE = "virtual" |
    | ap-northeast-1 | AWS_S3_SIGNATURE_VERSION = "s3v4" |
    | ap-southeast-2 | AWS_S3_ADDRESSING_STYLE = "virtual" |
    | ap-south-1 | AWS_S3_ADDRESSING_STYLE = "virtual" |
    | eu-central-1 | AWS_S3_ADDRESSING_STYLE = "virtual" |
    | eu-west-1 | AWS_S3_SIGNATURE_VERSION = "s3v4", AWS_S3_ADDRESSING_STYLE = "virtual" |
    | eu-west-2 | AWS_S3_ADDRESSING_STYLE = "virtual" |
    | eu-west-3 | AWS_S3_ADDRESSING_STYLE = "virtual" |
    | ca-central-1 | AWS_S3_ADDRESSING_STYLE = "virtual" |

    My AWS region is us-east-2, so I replaced AWS_S3_ADDRESSING_STYLE = "auto" with AWS_S3_ADDRESSING_STYLE = "virtual".
  - (optional) AWS_S3_MAX_AGE_SECONDS = 7 * 24 * 60 * 60
  - (optional) Set AWS_S3_PUBLIC_URL if your bucket has a CDN link. (Although the comment says "This setting cannot be used with AWS_S3_BUCKET_AUTH", it actually works.)
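Collected into one fragment, the changes to apiserver/plane/settings/production.py described above would look roughly like this; it is a sketch based on the option names in this comment, so verify them against the storage library version Plane actually pins:

```python
# apiserver/plane/settings/production.py (sketch, not the full file)

AWS_S3_BUCKET_AUTH = True                   # important, per the comment above
AWS_S3_ADDRESSING_STYLE = "virtual"         # us-east-2: replace "auto"
# AWS_S3_SIGNATURE_VERSION = "s3v4"         # needed in some regions (see table)
AWS_S3_MAX_AGE_SECONDS = 7 * 24 * 60 * 60   # optional
# AWS_S3_PUBLIC_URL = "https://cdn.example.com"  # optional, if the bucket has a CDN
```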
Hey @MyWay, can you check by upgrading to the latest version. This issue should be fixed now. Let us know if you are still facing this issue.
How should I configure it for an alternative s3 in the latest version? Because upload is working, I can see the file in the bucket, but then the url generated by plane is pointing to my plane instance, so maybe I'm missing some setting.
You can set the USE_MINIO environment variable to 0.
I did set it to 0; the file is uploaded and the filename is correct, but the URL is mydomain.ext/mybucketname instead of my external S3-compatible service.
Oh okay @MyWay, can you also remove the AWS_S3_ENDPOINT_URL?
If I do won't it use Amazon s3? 🤔
It will be still using Amazon S3 only.
But I'd like to use my s3 compatible provider.
Oh okay, currently Plane only supports S3 and MinIO. What storage are you using, @MyWay?
I see, that's the issue with my instance then. I have tried both wasabi and synology, currently on synology.
Apparently the only missing thing is generating the correct URL using AWS_S3_ENDPOINT_URL.
@MyWay, you may have to also update the nginx.conf to point to your file server.
You mean I should rewrite urls to reflect my s3 instance, instead of expecting plane to do it?
You may have to make some changes. The current Plane setup connects with either S3 or MinIO for fetching assets; for other services you may have to change the configuration a bit to make it work.
I see. Since all of them implement the same API, do you plan to add support for other S3-compatible services?
Please see #3278 for using custom s3 endpoints
It's still not working for custom s3 endpoints from what I see.
Couldn't upload files to AWS S3. I am using the script to run it on a Linux server with Docker. I did update the environment variables with working AWS credentials.
# DATA STORE SETTINGS
USE_MINIO=0
AWS_REGION="eu-west-1"
AWS_ACCESS_KEY_ID="[redacted]"
AWS_SECRET_ACCESS_KEY="[redacted]"
AWS_S3_ENDPOINT_URL="https://s3.amazonaws.com"
AWS_S3_BUCKET_NAME="[redacted]"
FILE_SIZE_LIMIT=52428800
Is there some other change we have to make to enable uploads to S3?
Getting the following error from plane-app-api-1
/api/workspaces/internal/file-assets/ HTTP/1.0" 400
I'm facing the same issue when setting up the default MinIO.
Below is my config:
# DATA STORE SETTINGS
USE_MINIO=1
AWS_REGION=""
AWS_ACCESS_KEY_ID=""
AWS_SECRET_ACCESS_KEY=""
AWS_S3_ENDPOINT_URL=<my-url>
AWS_S3_BUCKET_NAME=uploads
MINIO_ROOT_USER="<my-user>"
MINIO_ROOT_PASSWORD="<my-password>"
BUCKET_NAME=uploads
FILE_SIZE_LIMIT=5242880
But in the UI, I see it call the API workspace/<name>/file-assets, and it returns a 500 server error (Something went wrong please try again later).
Please help
This issue will be resolved when you update to the latest version of Plane.
I'm already using latest stable, which version you are referring exactly?
@MyWay
I'm using AWS S3 and started with a 500 error as well. When I recreated the bucket, I enabled ACLs, and it's working fine now, so you can test that.