fake-gcs-server
Uploaded files are not accessible by their URL
Hi, I'm running the GCS emulator following the instructions in README.
I created the following directory structure:
storage
|--> client-assets
     |--> some_file.txt
I run the emulator using the following command:
docker run -d --name fake-gcs-server -p 4443:4443 -v ${PWD}/storage:/data fsouza/fake-gcs-server -scheme http -public-host localhost:4443
Everything works great, the bucket is created and I can see some objects there, as expected:
curl http://localhost:4443/storage/v1/b
{"kind":"storage#buckets","items":[{"kind":"storage#bucket","id":"client-assets","name":"client-assets","versioning":{},"timeCreated":"0001-01-01T00:00:00Z","location":"US-CENTRAL1"}]}
curl http://localhost:4443/storage/v1/b/client-assets/o
{"kind":"storage#objects","items":[{"kind":"storage#object","name":"some_file.txt","id":"client-assets/some_file.txt","bucket":"client-assets","size":"10","contentType":"text/plain; charset=utf-8","crc32c":"IW9Zqw==","md5Hash":"fyq6ukIwYcUJ9JI90Ets8Q==","timeCreated":"2022-07-18T13:37:53.746639Z","timeDeleted":"0001-01-01T00:00:00Z","updated":"2022-07-18T13:37:53.746656Z","generation":"1658151473746739"}]}
When I connect to the docker container via terminal, I can see the files in the /storage folder as well.
But when I try to access objects directly via URL, I get a Not Found status.
curl http://localhost:4443/client-assets/some_file.txt
Not Found
What am I doing wrong?
Took me a while too:
http://localhost:4443/storage/v1/b/<BUCKET_NAME>/o/<OBJECT_NAME>?alt=media
You might not need the alt=media for text; I've only been using it for images.
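For example, with the bucket and file names from the original report, a quick sketch of the difference:

# Returns the object's metadata as JSON:
curl http://localhost:4443/storage/v1/b/client-assets/o/some_file.txt
# Returns the object's contents:
curl "http://localhost:4443/storage/v1/b/client-assets/o/some_file.txt?alt=media"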
@markvital that should work (${scheme}://${public-host}/${bucket}/${object} points to the downloadObject handler). Can you share all the logs from the server process?
(Apologies for the delayed response, I was out on a long break, but I'm back, so you can expect faster responses now!)
I have the same problem.
I hosted this Fake GCS Server on my Raspberry Pi in my local network, so the URL is a bit different: instead of localhost:4443 I am going to use 192.168.1.2:4443. When I try to access http://192.168.1.2:4443/storage/v1/<BUCKET_NAME>/<OBJECT_NAME>, I get the same Not Found error.
I have to edit the URL into this format: http://192.168.1.2:4443/storage/v1/b/<BUCKET_NAME>/o/<OBJECT_NAME> for it to display the correct object metadata. But this only renders the metadata! Since I'm storing images, I have to add the suffix ?alt=media to make the image render in my web browser.
In short, to display an image stored in this Fake GCS Server, I have to use the following URL format:
http://192.168.1.2:4443/storage/v1/b/<BUCKET_NAME>/o/<OBJECT_NAME>?alt=media
I have not yet tested the right URL format for displaying images from the real Google Cloud Storage, but the last time I checked, it was still in the same format: https://storage.googleapis.com/storage/v1/ (this was a long time ago, but let me check again later).
Gotcha. In that case you need to pass -public-host 192.168.1.2:4443 to the process. fake-gcs-server validates the host too for those "public" URLs.
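A sketch of the full command, based on the docker run invocation from the original report with only the host swapped in:

docker run -d --name fake-gcs-server -p 4443:4443 \
  -v ${PWD}/storage:/data \
  fsouza/fake-gcs-server -scheme http -public-host 192.168.1.2:4443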
Thank you for your reply! I have tested with -public-host 192.168.1.2:4443, but the problem is not fixed yet. It still returns Not Found upon accessing the expected URL.
I still have to add an o before the object name, so it's still http://192.168.1.2:4443/storage/v1/b/<BUCKET>/o/<OBJECT>. My expected URL is actually http://192.168.1.2:4443/storage/v1/b/<BUCKET>/<OBJECT> (without the /o/ addition), so I can instantly access the image in my browser.
My question: is adding /o/ the expected behavior, or is it a side effect from somewhere?
Thank you very much for your kind help.
Is that URL valid in the GCS API? Once you set -public-host 192.168.1.2:4443, the URL http://192.168.1.2:4443/<BUCKET>/<OBJECT> should work.
As far as I understand:
- http://192.168.1.2:4443/storage/v1/b/<BUCKET>/o/<OBJECT> is the API URL, which works in fake-gcs-server, but in real GCS would require authentication (replacing 192.168.1.2:4443 with storage.googleapis.com)
- http://192.168.1.2:4443/<BUCKET>/<OBJECT> is the public URL for the object, which works in fake-gcs-server and in GCS (as long as the bucket and/or object is public, and also replacing 192.168.1.2:4443 with storage.googleapis.com)
- http://192.168.1.2:4443/storage/v1/b/<BUCKET>/<OBJECT> is not a valid URL
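To make that concrete, a quick curl sketch reusing the bucket and object names from the original report:

# API URL: returns object metadata; append ?alt=media for the contents
curl http://192.168.1.2:4443/storage/v1/b/client-assets/o/some_file.txt
# Public URL: returns the object's contents directly (needs a matching -public-host)
curl http://192.168.1.2:4443/client-assets/some_file.txt
# Not a valid URL in the GCS API; returns Not Found
curl http://192.168.1.2:4443/storage/v1/b/client-assets/some_file.txt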
Ahhh, I see. I just found out about it by reading your answer and looking at Google Cloud Storage's documentation. It appears that my expected URL (http://192.168.1.2:4443/storage/v1/b/<BUCKET>/<OBJECT>) is not valid. I made a mistake in thinking it was! Sorry!
I fixed it, ran the Docker image with -public-host 192.168.1.2:4443, and the image now shows perfectly in my browser via your second URL (http://192.168.1.2:4443/<BUCKET>/<OBJECT>).
Once again, thank you for your kind help, @fsouza!
For the people after me who may be planning to run this via Docker Compose, here's my configuration:
version: "3.9"
services:
  gcs:
    container_name: googlecloudstorage
    image: fsouza/fake-gcs-server
    command: -scheme http -public-host 192.168.1.2:4443 # Replace this with your public host URL.
    ports:
      - 4443:4443
    volumes:
      - ./dev-data/gcs-data:/storage
  postgres:
    container_name: postgres
    image: postgres:14.3
    ports:
      - 5432:5432
    volumes:
      - ./testdata/postgres-data:/var/lib/postgresql/data
    environment:
      POSTGRES_HOST_AUTH_METHOD: trust
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 30s
      timeout: 20s
      retries: 3
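As a quick usage sketch, assuming the file above is saved as docker-compose.yml and 192.168.1.2:4443 is replaced with your own host:

docker compose up -d gcs
# List buckets through the JSON API:
curl http://192.168.1.2:4443/storage/v1/b
# Fetch an object's contents through the public URL:
curl http://192.168.1.2:4443/<BUCKET>/<OBJECT>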
Hopefully this helps anyone who encountered the same issue!
Thanks for sharing your setup. I'll close the issue, but please let me know if we should keep it open or feel free to open another one in the future if you run into any other issues!
No problem, it actually worked for me after I deleted and restarted the docker container. Thank you so much for helping 🙏