
[BUG] Photo upload fails

Open sunny5055 opened this issue 2 years ago • 25 comments

Describe the bug: Photo upload fails on a new resume.

Product Flavor

  • [x] Self Hosted

To Reproduce:

  • Create a new resume
  • Upload a photo

Expected behavior: The photo is uploaded and displayed.

Desktop (please complete the following information):

  • OS: Docker, latest version
  • Browser: Firefox
  • Version: 100

Additional context: The request sent to /api/resume/3/photo returns a 500 Internal Server Error.

sunny5055 avatar Apr 10 '22 14:04 sunny5055

You might need to update your ENVs to have proper S3 credentials. Have they been added?

AmruthPillai avatar Apr 11 '22 05:04 AmruthPillai

Could you provide a sample of S3 credentials? I've added my gateway and bucket name, but it's still not working; I've confirmed access/read/write on the S3 bucket through the terminal.

melbadry97 avatar Apr 11 '22 14:04 melbadry97

You might need to update your ENVs to have proper S3 credentials. Have they been added?

I have gone through the documentation but didn't notice anything about this; maybe it was recently added. Is there a way to store photos locally instead of on S3?

sunny5055 avatar Apr 11 '22 15:04 sunny5055

Could you provide a sample of S3 credentials? I've added my gateway and bucket name, but it's still not working; I've confirmed access/read/write on the S3 bucket through the terminal.

Right, so after a bit of tinkering, the following configuration seems to be a good example:

      STORAGE_BUCKET=MY_BUCKET_NAME
      STORAGE_REGION=eu-central-1
      STORAGE_ENDPOINT=https://s3.eu-central-1.amazonaws.com/
      STORAGE_URL_PREFIX=https://MY_BUCKET_NAME.s3.eu-central-1.amazonaws.com/
      STORAGE_ACCESS_KEY=IAMUSERACCESSKEY
      STORAGE_SECRET_KEY=IAMUSERSECRETKEY

Note that the region needs to match the region of the bucket.

In my case I've created an IAM user, and used the credentials provided during generation as access key and secret. I've configured the bucket to allow public access. Does this help?
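A quick way to validate those values outside the app (a sketch, assuming the AWS CLI is installed; the bucket name, region, and keys below are the placeholders from the example above) is to round-trip a test object with the same credentials:

```shell
# Use the same IAM credentials Reactive Resume will be given.
export AWS_ACCESS_KEY_ID=IAMUSERACCESSKEY
export AWS_SECRET_ACCESS_KEY=IAMUSERSECRETKEY
export AWS_DEFAULT_REGION=eu-central-1

# Write, read back, and delete a probe object. If any step fails here,
# the credentials or region are wrong independently of the app.
echo probe > /tmp/probe.txt
aws s3 cp /tmp/probe.txt s3://MY_BUCKET_NAME/probe.txt
aws s3 ls s3://MY_BUCKET_NAME/
aws s3 rm s3://MY_BUCKET_NAME/probe.txt
```

If the copy succeeds here but the app still returns a 500, the problem is more likely the endpoint or URL-prefix values than the key pair.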

I'm still with sunny5055 though: I'm not very happy about being dependent on a cloud provider. I would rather see a choice between locally hosted storage and a storage bucket on some other provider.

YuriMB avatar Apr 12 '22 10:04 YuriMB

@modem7 - I don't understand how you were able to use local storage. It's hardcoded to use the S3 client.

I would prefer local storage anyway..

dvd741-a avatar Apr 13 '22 12:04 dvd741-a

@modem7 - I don't understand how you were able to use local storage. It's hardcoded to use the S3 client.

I would prefer local storage anyway..

I think you might be right.

I used to be able to use local storage, but it seems in the latest version, whilst it's able to use what I had before, it doesn't allow me to upload new images now.

I would say that if reactive-resume is going "self-hosted", every part should be self-hosted, with zero reliance on 3rd party storage or APIs.

I certainly won't be using S3, and if that's the only solution, that'll be me out unfortunately, especially as I'm only hosting it for myself.

modem7 avatar Apr 13 '22 12:04 modem7

It literally doesn't make any sense to have S3 cloud storage as a requirement. I've been trying to deploy this for about a month already and still no luck: first because of the outdated YAML parser that Portainer uses, and now this weird S3 cloud requirement.

How much longer will it take to deploy an app like this?

BTW: I searched the whole Reactive Resume documentation and didn't find any storage-related variables.

Pheggas avatar Apr 16 '22 09:04 Pheggas

Had issues deploying the self-hosted Docker setup following the instructions in the tutorial. The server did not start up correctly, and it was due to the S3 parameters, which are not mentioned anywhere. I see no point in labeling this as self-hostable if it is still dependent on external parties for crucial features. Photo uploads do not work at all in the standalone version without S3 parameters. The default should be to use the file system available to the server, which can easily be mapped by the installer.

kgotso avatar Apr 16 '22 16:04 kgotso

It literally doesn't make any sense to have S3 cloud storage as a requirement. I've been trying to deploy this for about a month already and still no luck: first because of the outdated YAML parser that Portainer uses, and now this weird S3 cloud requirement.

Please understand that a growing app like this can have its issues with fast-and-loose development practices. I am trying my best to keep it simple while also keeping it working as much as I can.

The YAML anchors issue was resolved later, as I removed them and reverted to adding ENV_VARS directly in Docker's environment array. The reason I had to add S3 as a requirement is that when users (even self-hosted ones) uploaded their images, the previous logic stored these files locally. But because of the way I have CI/CD set up, a new instance is spun up and the old one is discarded. This means all old files on the filesystem also get deleted, so I had to move files to a non-ephemeral filesystem, hence DigitalOcean Spaces (otherwise S3).

I do hope to make S3 an optional requirement, and once I figure that out, will do what is required to make it simpler.

AmruthPillai avatar Apr 30 '22 09:04 AmruthPillai

Can't you just store the files in a mapped volume/directory? This way they would be stored independently from the instance

dvd741-a avatar Apr 30 '22 10:04 dvd741-a

@AmruthPillai Local mount storage would not get overwritten.

The image should point to an internal directory for images, which we can overwrite with a bind/volume mount.

That's typically how Docker works: the containers themselves are ephemeral, but local storage is not.
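That distinction is easy to demonstrate with a throwaway container (the paths are illustrative, not the ones Reactive Resume actually uses):

```shell
# A file written only inside the container is gone once the container is removed.
docker run --rm alpine sh -c 'mkdir -p /data && echo photo > /data/photo.txt'

# The same write through a bind mount lands on the host, so it survives
# container removal, upgrades, and re-creation.
mkdir -p /tmp/demo-uploads
docker run --rm -v /tmp/demo-uploads:/data alpine sh -c 'echo photo > /data/photo.txt'
cat /tmp/demo-uploads/photo.txt
```

The second `cat` still shows the file after the container is long gone, which is exactly the persistence property the upload directory needs.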

modem7 avatar May 29 '22 01:05 modem7

While I managed to use a free-tier Scaleway bucket, I also fail to see why S3 compatibility is required, when a locally mounted host folder or named Docker volumes would allow permanent storage and are very much compatible with a CI pipeline.

gymnae avatar May 31 '22 07:05 gymnae

Basically, S3 was added in an attempt to solve some weird problem (https://github.com/AmruthPillai/Reactive-Resume/issues/818). It would be good if the S3 support worked with a self-hosted S3 service like MinIO.

martadinata666 avatar May 31 '22 13:05 martadinata666

Basically, S3 was added in an attempt to solve some weird problem (https://github.com/AmruthPillai/Reactive-Resume/issues/818). It would be good if the S3 support worked with a self-hosted S3 service like MinIO.

Sure, but given how heavy Reactive Resume already is with three containers, adding a fourth is not the direction this should head in, especially since Docker volumes and bind mounts exist for this exact reason.

modem7 avatar May 31 '22 13:05 modem7

Well, if Reactive Resume stored images/assets correctly, we could already use volume/bind mounts right now, but it seems it's not as simple as it should be. Wherever this heads, we just hope it will work as a self-hosted solution.

martadinata666 avatar May 31 '22 13:05 martadinata666

https://github.com/AmruthPillai/Reactive-Resume/pull/906

Once this gets approved and merged, add the environment variable STORAGE_S3_ENABLED=false.
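Combined with a bind mount for the upload directory, a self-hosted run would then look roughly like this (a sketch only: the image name, internal upload path, and port are assumptions for illustration and should be checked against your own compose file; the flag exists only once the PR above is merged):

```shell
# Sketch: disable S3 and keep uploaded photos on the host.
docker run -d --name reactive-resume \
  -e STORAGE_S3_ENABLED=false \
  -v /srv/reactive-resume/uploads:/app/uploads \
  -p 3000:3000 \
  amruthpillai/reactive-resume:latest
```

With the bind mount in place, pulling a newer image and re-creating the container leaves the photos on the host untouched.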

dvd741-a avatar Jun 06 '22 18:06 dvd741-a

Currently testing your patch; it works locally right now, but it needs further testing, since in issue #818 the images magically went missing after a few days.

martadinata666 avatar Jun 07 '22 05:06 martadinata666

Currently testing your patch; it works locally right now, but it needs further testing, since in issue #818 the images magically went missing after a few days.

If we mount the path where the images are saved (see documentation: https://docs.docker.com/storage/bind-mounts/) to a folder on the Docker host, the images will be persisted (and stay available). As I can't build the Docker image, I created this pull request.

Does the patch work for you? Then I guess you could try the folder mounting. (I'm not sure how to build a Docker image myself; it kept throwing several errors.)

If the patch works, we can still figure out why the images disappear afterwards. (Don't forget ENV STORAGE_S3_ENABLED=false; the default is an Amazon S3 bucket, and this env explicitly disables S3.)

dvd741-a avatar Jun 07 '22 06:06 dvd741-a

Currently testing your patch; it works locally right now, but it needs further testing, since in issue #818 the images magically went missing after a few days.

If we map the path where the images are saved to a folder on the Docker host, the images will be persisted (and stay available). As I can't build the Docker image, I created this pull request.

Does the patch work for you? Then I guess you could try the folder mapping. (I'm not sure how to build a Docker image myself; it kept throwing several errors.)

If the patch works, we can still figure out why the images disappear afterwards. (Don't forget ENV STORAGE_S3_ENABLED=false; the default is an Amazon S3 bucket, and this env explicitly disables S3.)

The patch works correctly, with some caveats: I needed to remove aws-sdk and docusaurus from package.json, as they are already in the per-workspace package.json files.

Now I'm waiting to see if some black magic makes an image suddenly disappear.

martadinata666 avatar Jun 07 '22 06:06 martadinata666

If you run a new docker image version it will - since the images are stored "within" the container.

With mounting the images are stored "outside" the container on the host.

dvd741-a avatar Jun 07 '22 06:06 dvd741-a

If you run a new docker image version it will - since the images are stored "within" the container.

With mounting the images are stored "outside" the container on the host.

On versions before S3 was implemented, even when we bound the assets outside the container (via a volume mount or bind mount), they would just disappear after a few days. So either something is overwriting them or some routine clean-up runs; can't really be sure. Let's see after a few days.

Edit: working pretty well; let's hope your PR makes it to master.

martadinata666 avatar Jun 07 '22 07:06 martadinata666

@AmruthPillai can you have a look?

dvd741-a avatar Jun 11 '22 10:06 dvd741-a

If you run a new docker image version it will - since the images are stored "within" the container. With mounting the images are stored "outside" the container on the host.

On versions before S3 was implemented, even when we bound the assets outside the container (via a volume mount or bind mount), they would just disappear after a few days. So either something is overwriting them or some routine clean-up runs; can't really be sure. Let's see after a few days.

Edit: working pretty well; let's hope your PR makes it to master.

Do you know how to convert the source code into working docker container(s)?

  • I wasn't able to figure this out

dvd741-a avatar Jun 11 '22 11:06 dvd741-a

If you run a new docker image version it will - since the images are stored "within" the container. With mounting the images are stored "outside" the container on the host.

On versions before S3 was implemented, even when we bound the assets outside the container (via a volume mount or bind mount), they would just disappear after a few days. So either something is overwriting them or some routine clean-up runs; can't really be sure. Let's see after a few days. Edit: working pretty well; let's hope your PR makes it to master.

Do you know how to convert the source code into working docker container(s)?

  • I wasn't able to figure this out

Is the official one not working for you? I must admit that I don't use the official Dockerfile, so I can't really tell whether it works. Mine is https://github.com/martadinata666/dockerized/blob/abf8805d23b8cdab69cfb167e1f57b37dd29e0e3/reactive-resume/Dockerfile.v3 ; maybe that gives you the gist. I also build locally with NODE_ENV=development, so the Dockerfile just fetches the dependencies, packs, and runs it.

martadinata666 avatar Jun 11 '22 11:06 martadinata666

Made it to master and release 3.4.6; should be resolved with STORAGE_S3_ENABLED=false.

dvd741-a avatar Jun 20 '22 14:06 dvd741-a

Can someone reconfirm that 3.6.4 breaks local storage pictures? Thanks.

martadinata666 avatar Aug 29 '22 18:08 martadinata666

@martadinata666 Trying to recreate the issue locally and debugging now, will fix the issue asap :)

AmruthPillai avatar Aug 29 '22 18:08 AmruthPillai

@martadinata666 Should be fixed in the next release: https://github.com/AmruthPillai/Reactive-Resume/releases/tag/v3.6.5

Now you don't need any other flags. If you omit the STORAGE_BUCKET env, it will automatically store images on local storage.
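So on v3.6.5+ local storage is the default when the variable is absent. A cheap pre-start check (a sketch, assuming your settings live in a .env file) is to make sure no stale STORAGE_BUCKET entry is still being picked up:

```shell
# If an old .env still defines STORAGE_BUCKET, the app will keep trying S3.
if grep -q '^STORAGE_BUCKET=' .env; then
  echo "STORAGE_BUCKET is set - S3 will be used"
else
  echo "no STORAGE_BUCKET - images will be stored locally"
fi
```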

AmruthPillai avatar Aug 29 '22 18:08 AmruthPillai

@martadinata666 Should be fixed in the next release: https://github.com/AmruthPillai/Reactive-Resume/releases/tag/v3.6.5

Now you don't need any other flags. If you omit the STORAGE_BUCKET env, it will automatically store images on local storage.

I see. Just tried it, and it works correctly. Thanks for the fast response and fix. 👍🏼

martadinata666 avatar Aug 29 '22 19:08 martadinata666

Not setting STORAGE_BUCKET will not work. It throws:

        throw result.error;
        ^

ZodError: [
  {
    "code": "invalid_type",
    "expected": "string",
    "received": "undefined",
    "path": [
      "STORAGE_BUCKET"
    ],
    "message": "Required"
  }
]

How exactly should I configure it to use local storage?

rodrigogonegit avatar Nov 26 '23 17:11 rodrigogonegit