storage
REGION and GLOBAL_S3_BUCKET are not required when the storage backend is file. This updates the local hosting setup accordingly.
Are these still not required if someone self-hosts Supabase and wants to wire up a different storage backend? I was wondering how I could achieve this; is it even possible?
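For what it's worth, switching the self-hosted storage container to the S3 backend should just mean changing the backend selector and supplying the S3-specific variables, at which point REGION and GLOBAL_S3_BUCKET become required again. A sketch, with placeholder values (the bucket name and region here are hypothetical, not from this thread):

```shell
# Hypothetical env for storage-api with an S3 backend instead of local files.
# REGION and GLOBAL_S3_BUCKET are required again in this mode.
STORAGE_BACKEND=s3
GLOBAL_S3_BUCKET=my-storage-bucket   # placeholder bucket name
REGION=us-east-1                     # placeholder region
```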
Encountered this on a self-hosted setup.
storage | finished migrations
storage | {"level":"info","time":"2023-08-02T08:17:32.759Z","pid":1,"hostname":"472de14ff501","msg":"Server listening at http://0.0.0.0:5000"}
storage | Server listening at http://0.0.0.0:5000
storage | {"level":"error","time":"2023-08-02T08:17:32.761Z","pid":1,"hostname":"472de14ff501","error":"{\"_error\":{\"errno\":-2,\"code\":\"ENOENT\",\"syscall\":\"mkdir\",\"path\":\"=var/lib/storage/stub\"},\"name\":\"Error\",\"message\":\"ENOENT: no such file or directory, mkdir '=var/lib/storage/stub'\",\"stack\":\"Error: ENOENT: no such file or directory, mkdir '=var/lib/storage/stub'\"}","msg":"uncaught exception"}
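The ENOENT itself is just mkdir failing because the parent of '=var/lib/storage/stub' does not exist: the leading '=' makes the configured path relative instead of absolute. The same failure mode can be checked in a shell:

```shell
# mkdir without -p fails with ENOENT when the parent directories are missing,
# which is what storage-api hits for the relative path '=var/lib/storage/stub'.
mkdir '=var/lib/storage/stub'
echo "exit status: $?"   # non-zero, "No such file or directory"
```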
When using the supabase/supabase repo configs on the latest version (supabase/storage-api:v0.41.4):
ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
SERVICE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
POSTGREST_URL=http://postgrest:3000
PGRST_JWT_SECRET=your-super-secret-jwt-token-with-at-least-32-characters-long
DATABASE_URL=postgres://postgres:postgres@postgres:5432/sarafu_network_db
FILE_SIZE_LIMIT=52428800
STORAGE_BACKEND=file
FILE_STORAGE_BACKEND_PATH:=var/lib/storage
TENANT_ID=stub
REGION=stub
GLOBAL_S3_BUCKET=stub
ENABLE_IMAGE_TRANSFORMATION=true
IMGPROXY_URL=http://imgproxy:5001
Since the stub value in the error path comes from these same env vars, could this be related?
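It may actually be the FILE_STORAGE_BACKEND_PATH line above: it uses `:=` instead of `=`, so the value ends up as `=var/lib/storage`, which matches the `=var/lib/storage/stub` path in the error. A sketch of the corrected entry, assuming `/var/lib/storage` is the intended absolute path inside the container:

```shell
# Suspected fix: plain '=' and an absolute path, so mkdir targets
# /var/lib/storage/<TENANT_ID> rather than the relative '=var/lib/storage/stub'.
FILE_STORAGE_BACKEND_PATH=/var/lib/storage
```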
This is now also being fixed.