Reactive-Resume
[BUG] Photo upload fails
Describe the bug
Photo upload fails when adding a picture to a new resume.
Product Flavor
- [Yes] Self Hosted
To Reproduce
1. Create a new resume
2. Upload a photo
Expected behavior
The photo is uploaded and displayed.
Desktop (please complete the following information):
- OS: Docker (latest version)
- Browser: Firefox
- Version: 100
Additional context
The request sent to /api/resume/3/photo returns a 500 Internal Server Error.
You might need to update your ENVs to have proper S3 credentials. Have they been added?
Could you provide a sample of S3 credentials? I've added my gateway and bucket name, but it's still not working; I confirmed access/read/write on the S3 bucket through the terminal.
I have gone through the documentation but didn't notice anything about this; maybe it was recently added. Is there a way to store locally instead of S3?
Right, so after a bit of tinkering, the following configuration seems to be a good example:
STORAGE_BUCKET=MY_BUCKET_NAME
STORAGE_REGION=eu-central-1
STORAGE_ENDPOINT=https://s3.eu-central-1.amazonaws.com/
STORAGE_URL_PREFIX=https://MY_BUCKET_NAME.s3.eu-central-1.amazonaws.com/
STORAGE_ACCESS_KEY=IAMUSERACCESSKEY
STORAGE_SECRET_KEY=IAMUSERSECRETKEY
Note that the region needs to match the region of the bucket.
In my case, I created an IAM user and used the credentials provided during generation as the access key and secret. I've configured the bucket to allow public access. Does this help?
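For a self-hosted Docker Compose setup, these variables would typically go under the server service's environment block. A sketch under the assumption that the service is named `server` (the values are the same placeholders as in the example above):

```yaml
services:
  server:
    environment:
      - STORAGE_BUCKET=MY_BUCKET_NAME
      - STORAGE_REGION=eu-central-1
      - STORAGE_ENDPOINT=https://s3.eu-central-1.amazonaws.com/
      - STORAGE_URL_PREFIX=https://MY_BUCKET_NAME.s3.eu-central-1.amazonaws.com/
      - STORAGE_ACCESS_KEY=IAMUSERACCESSKEY
      - STORAGE_SECRET_KEY=IAMUSERSECRETKEY
```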
I'm still with sunny5055 though: I'm not very happy with being dependent on a cloud provider. I would rather see a choice between locally hosted storage and a storage bucket from some other provider.
@modem7 - I don't understand how you were able to use local storage. It's hardcoded to use the S3 client.
I would prefer local storage anyway..
I think you might be right.
I used to be able to use local storage, but in the latest version, while it can still read what I had before, it no longer lets me upload new images.
I would say that if reactive-resume is going "self-hosted", every part should be self-hosted, with zero reliance on 3rd party storage or APIs.
I certainly won't be using S3, and if that's the only solution, that'll be me out unfortunately, especially as I'm only hosting it for myself.
It literally doesn't make any sense to have S3 cloud storage as part of the requirements. I've been trying to deploy this for about a month now and still have no luck: first because of the outdated YAML parser that Portainer uses, and now this weird S3 cloud requirement.
How much longer will it take to deploy an app like this?
BTW: I searched the whole Reactive Resume documentation and didn't find any storage-related variable.
Had issues deploying the self-hosted Docker setup following the instructions in the tutorial. Ran into issues with the server not starting up correctly, caused by the S3 parameters that are not mentioned anywhere. I see no point in labeling this as self-hostable if it is still dependent on external parties for crucial features. Photo uploads do not work at all in the standalone version without S3 parameters. The default should be to use the file system available to the server, which can easily be mapped by the installer.
Please understand that a growing app like this can have its issues with fast-and-loose development practices. I am trying my best to keep it as simple, but also as functional, as I can.
The YAML anchors issue was resolved later when I removed them and went back to adding ENV_VARS directly in Docker's environment array. The reason I had to add S3 as a requirement was that when users (even self-hosted ones) uploaded their images, the previous logic stored those files locally. But because of the way I have CI/CD set up, a new instance is spun up and the old one is discarded, which means all old files on the filesystem get deleted too. So I had to move files to a non-ephemeral FS, hence DigitalOcean Spaces (otherwise S3).
I do hope to make S3 an optional requirement, and once I figure that out, I will do what is required to make it simpler.
Can't you just store the files in a mapped volume/directory? That way they would be stored independently of the instance.
@AmruthPillai Local mount storage would not get overwritten.
The image should point to an internal directory for images, which we can overwrite with a bind/volume mount.
That's typically how Docker works, the images themselves are ephemeral, but local storage is not.
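The bind-mount idea above can be sketched in Compose. The container path is an assumption (it depends on where the server actually writes uploads), so treat it as illustrative only:

```yaml
services:
  server:
    volumes:
      # The host folder on the left survives container replacement;
      # the container path on the right is a guess at the upload directory.
      - ./uploads:/app/server/dist/assets/uploads
```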
While I managed to use a free-tier Scaleway bucket, I also fail to see why S3-compatible storage is required, when a locally mounted host folder or named Docker volumes would allow permanent storage, very much compatible with a CI pipeline.
Basically, S3 was added in an attempt to solve a weird problem (https://github.com/AmruthPillai/Reactive-Resume/issues/818), and it would be good if the S3 support worked with self-hosted S3 such as MinIO.
Sure, but given how heavy Reactive Resume already is with three containers, adding a fourth is not the direction this should head in, especially since Docker volumes and bind mounts exist for this exact reason.
Well, if Reactive-Resume stored images/assets correctly, we could already use volume/bind mounts right now, but it seems it's not as simple as it should be. Wherever this heads, we just hope it will work for a self-hosted setup.
https://github.com/AmruthPillai/Reactive-Resume/pull/906
Once this gets approved & merged, add the environment variable STORAGE_S3_ENABLED=false
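Under that patch, a minimal local-storage `.env` would presumably boil down to the flag alone; a sketch (the flag comes from the PR above, the comments are my assumptions):

```
# Disable the S3 client so uploads go to the local filesystem
STORAGE_S3_ENABLED=false
# With S3 disabled, the STORAGE_* credential variables can presumably stay unset
```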
Currently testing your patch; it works locally right now, but it needs further testing, since in issue #818 the image magically went missing after a few days.
If we mount the path where the images are saved (see documentation: https://docs.docker.com/storage/bind-mounts/) to a folder on the Docker host, the images will be persisted (and stay available). As I can't build the Docker image, I created this pull request.
Does the patch work for you? Then I guess you could try the folder mounting. (I'm not sure how to build a Docker image myself; it kept throwing several errors.)
If the patch works, we could still figure out why the images disappear afterwards. (Don't forget ENV STORAGE_S3_ENABLED=false. The default is an Amazon S3 bucket; this env explicitly disables S3.)
The patch works correctly, with one caveat: I needed to remove aws-sdk and docusaurus from package.json, as they are already in the per-workspace package.json.
Now I'm waiting to see whether some black-magic image suddenly disappears.
If you run a new Docker image version, they will, since the images are stored "within" the container.
With mounting, the images are stored "outside" the container on the host.
On the version before S3 was implemented, even when we bound the assets directory to the outside (via volume mount or bind mount), images would just disappear after a few days. So either something is overwriting them or some routine cleanup is running; can't really be sure, let's see after a few days.
Edit: working pretty well, let's hope your PR makes it to master.
@AmruthPillai can you have a look?
Do you know how to turn the source code into working Docker container(s)? I wasn't able to figure this out.
The official one isn't working for you now? I must admit that I don't use the official Dockerfile, so I can't really tell whether it works. Mine is https://github.com/martadinata666/dockerized/blob/abf8805d23b8cdab69cfb167e1f57b37dd29e0e3/reactive-resume/Dockerfile.v3; that may give you the gist. I already build locally with NODE_ENV=development, so the Dockerfile just fetches the deps, packs everything, and runs it.
Made it to master & release 3.4.6. Should be resolved with STORAGE_S3_ENABLED=false.
Can someone reconfirm that 3.6.4 breaks local storage for pictures? Thanks.
@martadinata666 Trying to recreate the issue locally and debugging now, will fix the issue asap :)
@martadinata666 Should be fixed in the next release: https://github.com/AmruthPillai/Reactive-Resume/releases/tag/v3.6.5
Now you don't need any other flags. If you omit the STORAGE_BUCKET env, it will automatically store images on local storage.
I see. Just tried it; it works correctly. Thanks for the fast response and fix. 👍🏼
Not setting STORAGE_BUCKET will not work. It throws:
throw result.error;
^
ZodError: [
{
"code": "invalid_type",
"expected": "string",
"received": "undefined",
"path": [
"STORAGE_BUCKET"
],
"message": "Required"
}
]
How exactly should I configure it to use local storage?