Unable to deploy on Docker: incorrect assumption that Teable can reach itself at the same public URL as users
Hi,
I am trying to deploy Teable in a secure environment. A hard assumption in the current technical design seems to make Teable unusable outside of one very specific deployment scenario.
For example, CSV import requires the application to be able to reach itself using the same PUBLIC_ORIGIN as users.
A simple Docker + Traefik setup like this breaks that assumption:
```yaml
teable:
  image: teableio/teable:latest
  depends_on: [traefik, postgres, redis]
  environment:
    PUBLIC_ORIGIN: http://teable.localhost
  labels:
    traefik.enable: true
    traefik.http.services.teable.loadbalancer.server.port: 3000
    traefik.http.routers.teable.rule: Host(`teable.localhost`)
```
Teable listens on localhost:3000 but is served through Traefik, acting as a reverse proxy, on teable.localhost:80.
When uploading a CSV, Teable tries to reach teable.localhost:80, which is not guaranteed to resolve, and in any case Teable is not listening on port 80 but on 3000. This leads to the error:
```
request to http://teable.localhost/api/attachments/read/private/import/xxx?token=xxx&response-content-disposition=attachment;%20filename*=UTF-8%27%27foo.csv failed, reason: connect ECONNREFUSED 127.0.0.1:80
```
In a real production setup, for security and isolation reasons, Teable will not listen on a TCP socket (localhost:3000) but on a Unix domain socket (/run/teable/sock), with an nginx or Caddy reverse proxy in front of it actually serving the content.
In a secure environment, the firewall will deny any intra-host communication, so even binding to a TCP port is not a solution (it would require also allowing at least traffic from localhost:* to localhost:3000, which is a no-go).
Even if a firewall whitelist were possible, Teable's public traffic would be load-balanced across many Teable instances by Caddy or nginx, while each internal request would only hit the instance that issued it, bypassing load balancing, IDS, etc.
Once again, there is no guarantee that an app can reach its public URL even in this case to go back through the reverse proxy (VPN restrictions, custom VPN DNS, hairpinning (😰)…)
Did I miss something, or is it impossible to deploy Teable correctly under such constraints (app unable to reach its public URL)?
A (partial) solution would be to at least be able to define a PRIVATE_URL for such internal traffic (in the Docker + Traefik config above, PUBLIC_URL=http://teable.localhost with PRIVATE_URL=http://localhost:3000 would work).
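To illustrate, here is a sketch of what the compose file above could look like with such a split; `PRIVATE_URL` is a hypothetical variable that Teable does not currently support:

```yaml
teable:
  image: teableio/teable:latest
  depends_on: [traefik, postgres, redis]
  environment:
    # URL generated for users, served through Traefik
    PUBLIC_ORIGIN: http://teable.localhost
    # Hypothetical variable: URL Teable would use for calls to itself,
    # bypassing the reverse proxy entirely
    PRIVATE_URL: http://localhost:3000
  labels:
    traefik.enable: true
    traefik.http.services.teable.loadbalancer.server.port: 3000
    traefik.http.routers.teable.rule: Host(`teable.localhost`)
```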
The issue you're encountering stems from the PUBLIC_ORIGIN environment variable configuration in your Docker setup. The PUBLIC_ORIGIN is used by Teable to generate URLs for various functionalities, such as CSV import. If this is set to http://teable.localhost, it may not be accessible externally, leading to failures when the application tries to access itself.
To resolve this, consider the following steps:
1. **Set `PUBLIC_ORIGIN` to an externally accessible URL**: Ensure that `PUBLIC_ORIGIN` is set to a URL that is accessible both by users and by the Teable application itself. For instance, if you're using Traefik and have a domain like `http://teable.example.com`, set `PUBLIC_ORIGIN` to this URL.
2. **Configure Traefik to route requests appropriately**: Ensure that Traefik is set up to route requests to the Teable container correctly. This includes setting the correct labels and ensuring that the domain used in `PUBLIC_ORIGIN` points to the Traefik router.
3. **Use Docker's internal networking for self-references**: If Teable needs to access itself internally, consider using Docker's internal DNS. For example, you can set `PUBLIC_ORIGIN` to `http://teable:3000` (assuming `teable` is the service name), and ensure that this is resolvable within the Docker network. However, this URL won't be accessible externally, so you'll need to handle external access separately.
4. **Implement a reverse proxy for internal and external access**: Set up a reverse proxy (like Nginx) that listens on a common domain and routes requests to the appropriate services. This way, both internal and external requests can use the same domain, satisfying the requirement for `PUBLIC_ORIGIN`.
By ensuring that PUBLIC_ORIGIN is set to a domain accessible both internally and externally, and configuring your reverse proxy accordingly, you should be able to resolve the deployment issues you're facing.
This is exactly the point. PUBLIC_ORIGIN CAN'T be both internally and externally accessible. It is literally impossible in many cases, including mine: both in the local Traefik test deployment and in production.
There are many ways to end up NOT being able to use the same value for internal and external access:
- HTTPS handled on the reverse proxy side requires `PUBLIC_ORIGIN` with port 443 for external access but 80/3000/whatever for internal access
- Listening on a UDS requires `https://PUBLIC_ORIGIN` for external access (served by the reverse proxy) but `unix://` for internal access
- A strong network isolation and firewalling policy simply disallows any internal access; only the reverse proxy is allowed, possibly only over a UDS
- Hostname DNS resolution that works for external access may NOT be possible internally (for example with VPN usage for users: `PUBLIC_ORIGIN` teable.ts.lan is resolved by the internal Tailscale resolver for the reverse proxy in the DMZ and for external access, but that resolver is not available on the Teable host, which sits outside the VPN and the DMZ, so the name cannot be resolved for internal access)
You're absolutely right — requiring PUBLIC_ORIGIN to be both internally and externally accessible is unrealistic in many Docker + reverse proxy setups, especially with tools like Traefik in local/dev environments.
A possible workaround could be to:
- Keep `PUBLIC_ORIGIN` set to the external-facing domain (e.g., `https://teable.example.com`)
- Internally route requests (especially those made by the app to itself) through a proxy container or a local DNS entry that resolves `teable.example.com` to the Docker service.
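As a sketch of the local DNS idea, a Docker network alias can make the public hostname resolve to the proxy container from inside the network (the domain `teable.example.com` and the Traefik service are assumptions, not Teable defaults):

```yaml
# Hypothetical compose fragment: give the Traefik container a network
# alias matching the public domain, so other containers on the same
# network resolve teable.example.com to Traefik instead of going out
# to the public internet.
traefik:
  image: traefik:v3.0
  networks:
    default:
      aliases:
        - teable.example.com
```

Note this only helps cleanly for plain HTTP; with HTTPS, internal requests would still need the proxy to present a certificate the app trusts.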
Alternatively, the app could ideally differentiate between an internal base URL (for internal fetches) and a public-facing one. Maybe a feature request could be made to support something like INTERNAL_ORIGIN vs PUBLIC_ORIGIN to properly handle these scenarios?
Yes, having two different parameters could be good, I guess, with INTERNAL defaulting to PUBLIC.
I just wonder how this would work for UDS internal access; for such a config, I guess I would also need one internal endpoint on the reverse proxy.
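For the UDS case, such an internal endpoint could look like this Caddyfile sketch (the hostname, port, and socket path are assumptions for illustration):

```
# Public site: TLS terminated here, proxied to Teable over the UDS
teable.example.com {
    reverse_proxy unix//run/teable/sock
}

# Internal-only endpoint on loopback for Teable's calls to itself,
# usable as a hypothetical INTERNAL_ORIGIN=http://127.0.0.1:8080
http://127.0.0.1:8080 {
    reverse_proxy unix//run/teable/sock
}
```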
One other thing I could propose is to introduce support for two environment variables:
- PUBLIC_ORIGIN → for generating links or URLs exposed to users (e.g. in emails, CSV import)
- INTERNAL_ORIGIN (optional) → used for internal service calls made by Teable to itself
If INTERNAL_ORIGIN is not set, it should default to PUBLIC_ORIGIN (to maintain backward compatibility).
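The fallback semantics can be sketched in one line of shell, e.g. in a container entrypoint (the variable names follow the proposal above; the entrypoint itself is hypothetical):

```shell
#!/bin/sh
# Proposed behaviour: INTERNAL_ORIGIN falls back to PUBLIC_ORIGIN when
# unset, so existing deployments keep working unchanged.
PUBLIC_ORIGIN="http://teable.localhost"
unset INTERNAL_ORIGIN   # simulate a deployment that only sets PUBLIC_ORIGIN

# ${VAR:-default} expands to the default when VAR is unset or empty
INTERNAL_ORIGIN="${INTERNAL_ORIGIN:-$PUBLIC_ORIGIN}"

echo "$INTERNAL_ORIGIN"   # prints http://teable.localhost
```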
## Benefits
- Solves DNS and routing issues in Docker and Kubernetes environments
- Enables clean separation of internal and external network concerns
- Supports production-grade setups without workarounds like custom DNS or internal proxies
Just for information, is there any real benefit to such internal traffic? It seems more efficient and easier to call the corresponding code directly from the app than to issue an HTTP request, no?
In theory, direct function calls would be faster and simpler. But many apps (maybe including Teable) use internal HTTP requests to keep things modular, reuse the same API logic, or prepare for a microservices setup later. Still, if it's just local logic, replacing those with direct calls could improve performance and simplify deployment — definitely worth considering.