hostname-admin broken with docker and reverse proxy
Describe the bug
The hostname-admin option should make it possible to have a second hostname for a keycloak instance through which we can connect to the admin console. Unfortunately, after trying many combinations of keycloak settings (such as KC_HOSTNAME_ADMIN, KC_HOSTNAME_ADMIN_URL ...), I'm still unable to connect to the admin console through the hostname_admin I defined.
Note that keycloak is running in a docker image and is behind two different reverse proxies:
- one for the public access
- one for the management access (admin console only)
Please, see below about how to reproduce.
Version
19.0.2
Expected behavior
Given the following keycloak configuration:
```yaml
KC_HTTP_RELATIVE_PATH: "/auth"
KC_HOSTNAME_URL: "https://host1.local:8443/auth/"
KC_HOSTNAME_ADMIN_URL: "https://host2.local:7443/auth/"
KC_PROXY: "edge"
```
When I go to the url: https://host2.local:7443/auth/admin/master/console/ I should be able to authenticate and access the admin console WITHOUT being redirected to https://host1.local:8443/auth/
Actual behavior
Given the following keycloak configuration:
```yaml
KC_HTTP_RELATIVE_PATH: "/auth"
KC_HOSTNAME_URL: "https://host1.local:8443/auth/"
KC_HOSTNAME_ADMIN_URL: "https://host2.local:7443/auth/"
KC_PROXY: "edge"
```
When I go to the url: https://host2.local:7443/auth/admin/master/console/ I am redirected to the url https://host1.local:8443/auth/ in order to authenticate.
Even worse, if a front-end url is defined for the master realm (the realm used to access the admin console) with the value https://host2.local:7443/auth, I am redirected to https://host2.local:8443/auth (good host, wrong port!).
We can see that in the iframe loaded (see the authServerUrl variable):
```html
<script id="environment" type="application/json">
{
  "loginRealm": "master",
  "authServerUrl": "https://host2.local:8443/auth",
  "authUrl": "https://host2.local:7443/auth",
  "consoleBaseUrl": "/auth/admin/master/console/",
  "resourceUrl": "/auth/resources/up520/admin/keycloak.v2",
  "masterRealm": "master",
  "resourceVersion": "up520",
  "commitHash": "dd67b3b3a4e80031d32fdf0ffd9e9d450a657d07",
  "isRunningAsTheme": true
}
</script>
```
How to Reproduce?
I wrote a docker-compose.yml file so that the error can be reproduced, and so we can try to work out the right combination of keycloak settings to make the admin url work.
See the docker-compose below:
```yaml
version: '3.3'
services:
  db-master:
    image: postgres:14.5
    ports:
      - 5432:5432/tcp
    environment:
      - POSTGRES_PASSWORD=password
    volumes:
      - pg14-master-volume:/var/lib/postgresql/data
    networks:
      - pg_pgdb
  keycloak:
    image: quay.io/keycloak/keycloak:19.0.2
    hostname: keycloak
    labels:
      - "traefik.http.routers.keycloak-front.rule=PathPrefix(`/auth`)"
      - "traefik.http.routers.keycloak-front.entrypoints=web"
      - "traefik.http.routers.keycloak-front.tls=true"
      - "traefik.http.routers.keycloak-front.service=keycloak-back"
      - "traefik.http.services.keycloak-back.loadbalancer.server.scheme=http"
      - "traefik.http.services.keycloak-back.loadbalancer.server.port=8080"
      - "traefik.http.services.keycloak-back.loadbalancer.passhostheader=true"
      - "traefik.enable=true"
    ports:
      - "9191:8080/tcp"
      - "8787:8787/tcp"
    command:
      - start-dev
    environment:
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: password
      KC_DB: postgres
      KC_DB_USERNAME: ukeycloak
      KC_DB_PASSWORD: password
      KC_DB_URL: jdbc:postgresql://db-master:5432/keycloak
      KC_LOG_LEVEL: INFO
      KC_HOSTNAME_STRICT: "false"
      # KC_HOSTNAME: "host1.local"
      # KC_HOSTNAME_PORT: "8443"
      KC_HTTP_RELATIVE_PATH: "/auth"
      KC_HOSTNAME_URL: "https://host1.local:8443/auth/"
      KC_HOSTNAME_ADMIN_URL: "https://host2.local:7443/auth/"
      KC_PROXY: "edge"
    networks:
      - privatezone
      - pg_pgdb
  traefikpub:
    image: traefik:2.8.5
    ports:
      - "8443:8443/tcp"
      - "8080:8080/tcp"
    networks:
      - privatezone
    command:
      - "--log.level=DEBUG"
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:8443"
      - "--entryPoints.web.forwardedHeaders.insecure=true"
      - "--accesslog=true"
      - "--api.dashboard=true"
      - "--providers.docker.network=localdocker_privatezone"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
  traefikpriv:
    image: traefik:2.8.5
    ports:
      - "7443:7443/tcp"
      - "7080:8080/tcp"
    networks:
      - privatezone
    command:
      - "--log.level=DEBUG"
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:7443"
      - "--entryPoints.web.forwardedHeaders.insecure=true"
      - "--accesslog=true"
      - "--api.dashboard=true"
      - "--providers.docker.network=localdocker_privatezone"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
networks:
  publiczone:
    driver: bridge
  privatezone:
    driver: bridge
  pg_pgdb:
    driver: bridge
volumes:
  pg14-master-volume:
    driver: local
```
This docker-compose is made of:
- one postgres instance (needed for keycloak)
- one keycloak instance version 19.0.2
- a reverse proxy called traefikpub that listens on port 8443 with https (with a self-signed certificate)
- a reverse proxy called traefikpriv that listens on port 7443 with https (with a self-signed certificate)
For this test to work, you still need to define two hostnames in your /etc/hosts as follows:
127.0.0.1 localhost host1.local host2.local
To start all the components defined in this file, run:
```shell
docker compose up -d
```
For keycloak to start, you will have to define a db user and a db schema. Run the following commands to create them, entering password each time you are prompted for one:
```shell
createuser -c 20 -D -E -l -S -R -i -h localhost -p 5432 -U postgres -W ukeycloak -P
createdb -h localhost -p 5432 -U postgres -W -O ukeycloak -E utf-8 keycloak
```
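As an alternative to the manual createuser/createdb step, the official postgres image can create the role and database itself; a sketch against the db-master service from the compose file above (note these variables only take effect on first initialization of a fresh data volume, and unlike the createuser flags above, the role created this way is the image's superuser):

```yaml
db-master:
  image: postgres:14.5
  environment:
    # Honoured only when the data volume is initialized for the first time:
    # the image then creates role "ukeycloak" owning database "keycloak".
    - POSTGRES_USER=ukeycloak
    - POSTGRES_PASSWORD=password
    - POSTGRES_DB=keycloak
```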
Once the db user is created, just restart the compose stack to be sure keycloak starts without issue.
```shell
docker compose down
docker compose up -d
```
At this point, you should be able to go to the urls:
https://host1.local:8443/auth
for the public part and:
https://host2.local:7443/auth
for the internal admin part.
With these settings, the public part works as expected. To try it, assuming you have a realm called public, you can go to the url
https://host1.local:8443/auth/realms/public/account
and authenticate with a user defined on the public realm.
Now, for the admin part, to reproduce the issue, please follow these steps:
Case 1:
- ensure no front-end url is defined for the master realm
- go to the url https://host2.local:7443/auth
- click on the administration console link
Expected: we should remain on the https://host2.local:7443 url in order to authenticate
Actual: we are redirected to https://host1.local:8443/auth. If we authenticate with a user defined in the master realm, we are then redirected back to the admin console at the right url https://host2.local:7443/auth/
Now if we change the configuration of the master realm to add a front-end url, here are the steps to reproduce the bug:
Case 2:
- ensure the master realm has a front-end url defined with the value: https://host2.local:7443/auth
- go to the url https://host2.local:7443/auth
- click on the administration console link
Expected : the login page should be displayed on the url https://host2.local:7443/auth
Actual: the login page hangs. If we look at the source code of the login page, we can see:
```html
<script id="environment" type="application/json">
{
  "loginRealm": "master",
  "authServerUrl": "https://host2.local:8443/auth",
  "authUrl": "https://host2.local:7443/auth",
  "consoleBaseUrl": "/auth/admin/master/console/",
  "resourceUrl": "/auth/resources/up520/admin/keycloak.v2",
  "masterRealm": "master",
  "resourceVersion": "up520",
  "commitHash": "dd67b3b3a4e80031d32fdf0ffd9e9d450a657d07",
  "isRunningAsTheme": true
}
</script>
```
The login page hangs because the url defined in authServerUrl is plain wrong: port 8443 belongs to the public part, while the host host2.local belongs to the admin part. The browser therefore talks to the wrong origin, there is a cookie mismatch, and the login page hangs.
Anything else?
No response
You can expose the admin endpoint/console through a separate URL, but the admin console/endpoints in Keycloak don't have their own login; that is delegated to the main Keycloak URLs, which is why you are seeing that you are redirected when you log in to the admin console. You should consider the admin console as somewhat of a separate application, and you get that same behaviour when you log in to any application.
Thank you very much @stianst for your quick and very clear answer.
Now I understand that redirecting to the public facing login page in order to authenticate with the admin account to the admin console is the intended behavior when using KC_HOSTNAME and KC_HOSTNAME_ADMIN settings.
Now, I must admit that I'm a little bit confused by this behavior:
- First of all, that's precisely what we don't want: using the admin password on the public url. To me, it defeats the whole purpose of having a dedicated admin url...
- Secondly, I think this approach does not fit with the exposed-path recommendations in the reverse proxy configuration guide
By changing the configuration of keycloak and removing both KC_HOSTNAME_URL and KC_HOSTNAME_ADMIN_URL, I was able to get the wanted behavior:
```yaml
command:
  - start-dev
environment:
  KEYCLOAK_ADMIN: admin
  KEYCLOAK_ADMIN_PASSWORD: password
  KC_DB: postgres
  KC_DB_USERNAME: ukeycloak
  KC_DB_PASSWORD: password
  KC_DB_URL: jdbc:postgresql://db-master:5432/keycloak
  KC_LOG_LEVEL: INFO
  KC_HOSTNAME_STRICT: "false"
  # KC_HOSTNAME: "host1.local"
  # KC_HOSTNAME_PORT: "8443"
  KC_HTTP_RELATIVE_PATH: "/auth"
  # KC_HOSTNAME_URL: "https://host1.local:8443/auth/"
  # KC_HOSTNAME_ADMIN_URL: "https://host2.local:7443/auth/"
  KC_PROXY: "edge"
```
Now I can authenticate to public realms using the public domain, and I can fully authenticate to the admin console with the internal-only url (without being redirected to the public url), so the admin password will never transit through the public proxy/urls. Furthermore, I can configure the public proxy to respect the exposed-path recommendations, and thus I should be able to have a secure deployment for my keycloak instance. Can you confirm this approach?
Unless I'm missing something, I could not really recommend using KC_HOSTNAME_URL and KC_HOSTNAME_ADMIN_URL as production-ready settings... I wonder what the use cases for these settings could be, if not security?
By disabling strict hostname checking you are opening up to another vulnerability though (host header injection) - if you want to do that you need to enforce it in the reverse proxy to make sure a malicious host header is not sent to Keycloak.
There are other ways to mitigate risk of admin password being leaked, for one you would want to make sure the main Keycloak server is exposed in a secure way, secondly there's OTP or WebAuthn to consider.
Blocking the admin endpoints/console from the public internet is more about mitigating risk if there are some vulnerabilities in the admin endpoints. In reality though, that is somewhat of a false sense of security, as if there are some vulnerabilities you are not safe from internal attackers, or external attackers that somehow get into your DMZ.
If you still want to do what you are trying to do and fully block it off, then you'd really need to have dedicated nodes for the admin parts. Run some nodes that are exposed to the internet and disable admin endpoints/console on those nodes (see the features guide), and run separate nodes for the admin endpoints.
I could potentially see that we could support it directly though, but that would require some way of configuring "login for the admin console" to use the "admin urls" or something like that, which would then configure the oidc endpoints used by the admin console/endpoints, including the issuer urls and such.
@stianst I'm in the same situation as @Nowheresly. I want to expose the public clients but not the admin console. I also went down the hostname and hostname-admin path as the documentation states, and realized it's not going to do what I thought it would do. Ideally, if I could bind the admin access to something like an interface, that would be useful. I could then possibly add it to my OOB management network.
Until I find a way to do something like that, I set up my reverse proxy to expose the required paths (realms, etc.) to the public, and I require both host verification and remote_ip verification to allow access to the admin console. Beyond setting up an admin node (which I haven't looked at yet, so no idea what that involves), do you have any additional recommendations or thoughts about how I set this up?
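With the Traefik instances from the compose file earlier in this thread, that kind of path plus source-IP restriction could be sketched with labels like these (the router/middleware names and the CIDR are made up for illustration, not taken from the setup above):

```yaml
labels:
  # Public router: expose only the endpoints applications actually need
  - "traefik.http.routers.kc-public.rule=PathPrefix(`/auth/realms`) || PathPrefix(`/auth/resources`)"
  - "traefik.http.routers.kc-public.entrypoints=web"
  # Admin router: /auth/admin is only reachable from the management range
  - "traefik.http.routers.kc-admin.rule=PathPrefix(`/auth/admin`)"
  - "traefik.http.routers.kc-admin.middlewares=admin-ips"
  - "traefik.http.middlewares.admin-ips.ipwhitelist.sourcerange=10.0.0.0/8"
```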
Thanks again @stianst for your quick reply. This is really appreciated!
By disabling strict hostname checking you are opening up to another vulnerability though (host header injection) - if you want to do that you need to enforce it in the reverse proxy to make sure a malicious host header is not sent to Keycloak.
Sure, the reverse proxy in front of keycloak (or in front of any other application) must ensure the X-Forwarded-* headers are not malicious. The traefik reverse proxy used in my docker-compose file above provides a way to secure these headers.
In my example above, I disabled this check:
```yaml
- "--entryPoints.web.forwardedHeaders.insecure=true"
```
but this is certainly not a production setting...
So, to my understanding, if we don't have a way to secure these X-Forwarded-* headers at the reverse proxy level, we must use the hostname and hostname-admin settings that keycloak now provides.
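For the record, Traefik can validate forwarded headers without the insecure flag by trusting them only from known source IPs; a sketch of the entrypoint flags (the CIDRs here are assumptions, to be replaced by the actual proxy addresses):

```yaml
command:
  - "--entrypoints.web.address=:8443"
  # Accept X-Forwarded-* only from these sources; for requests coming
  # from anywhere else, Traefik overwrites the headers itself.
  - "--entrypoints.web.forwardedHeaders.trustedIPs=127.0.0.1/32,172.16.0.0/12"
```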
There are other ways to mitigate risk of admin password being leaked, for one you would want to make sure the main Keycloak server is exposed in a secure way, secondly there's OTP or WebAuthn to consider.
Sure.
Blocking the admin endpoints/console from the public internet is more about mitigating risk if there are some vulnerabilities in the admin endpoints. In reality though, that is somewhat of a false sense of security, as if there are some vulnerabilities you are not safe from internal attackers, or external attackers that somehow get into your DMZ.
Our keycloak instance uses a public realm and the default master realm. The admin-cli and admin-console are only enabled for the master realm. Since the realm appears in the url, it's really easy to prevent access and monitor who is trying to access the master realm from the public internet.
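Since the realm name is in the path, the public proxy can refuse it outright; with a Traefik setup like the one in the compose file above, one way to sketch this (router/middleware names and the loopback-only allow-list are illustrative):

```yaml
labels:
  # Deny master-realm and admin paths on the public entrypoint by
  # allow-listing only loopback, so external requests get a 403.
  - "traefik.http.routers.kc-deny.rule=PathPrefix(`/auth/realms/master`) || PathPrefix(`/auth/admin`)"
  - "traefik.http.routers.kc-deny.middlewares=deny-all"
  - "traefik.http.middlewares.deny-all.ipwhitelist.sourcerange=127.0.0.1/32"
```

This relies on Traefik giving longer (more specific) rules higher priority than the broad `/auth` router, which is worth verifying for the Traefik version in use.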
I understand this approach may lead to a "false sense of security", yet I doubt our security team will accept getting rid of it in favor of the additional security provided by the hostname-* settings if, in the end, the admin user must log in from the public endpoint instead of the admin endpoint as today... And I don't want to spend my nights trying to convince them ;)
So I don't think we will use the hostname-* settings in order to get the previous behavior (ie: we can connect to the admin console only through internal urls), and yes we understand that it means we have to secure our DMZ from both internal and external attackers.
So I think case 1 can definitely be closed, since I now understand, thanks to your explanations, that it works as expected. Maybe it should be reflected somewhere in the documentation that setting a hostname-admin url still requires authenticating through the hostname url in order to reach the admin console?
As for case 2 (ie: use of hostname, hostname-admin and a front-end url for the realm), I'm not sure what needs to be done?
I could potentially see that we could support it directly though, but that would require some way of configuring "logging for admin console" to use the "admin urls" or something like that, which then configures the oidc endpoints used by the admin console/endpoints, including the issuer urls and such.
I agree with that. Being able to log in to the admin console only through the hostname-admin url is the behavior we want, and probably the behavior most people expect. It would also mean we don't have to rely solely on securing our DMZ.
I'm still not convinced we really need to have different login URLs for the admin console/endpoint.
Let's summarise how it works today and what you can do today:
- Expose OIDC endpoints at say https://public-domain.org/kc-oidc
- Expose admin console/endpoints on some internal URL, let's say https://internal-url/kc-admin
When an admin logs in to https://internal-url/kc-admin the admin is redirected to https://public-domain.org/kc-oidc for login.
An external attacker is in theory able to get a token to the admin console/endpoints, but won't be able to access them since they are not exposed externally.
Let's say, for the sake of argument, that https://internal-url/kc-admin uses the internal URL to login/get tokens. This doesn't add any additional security at all. In either case the attacker would have to be able to access https://internal-url/kc-admin to invoke the admin endpoints.
There are better and more elegant ways we can add additional security around the admin endpoints than to use different login URLs for the admin endpoints than for other applications, like:
- Require stronger authentication, like WebAuthn or OTP
- Potentially have some IP address allow-list to login to the admin console/endpoint (a bit old school approach to security though)
Although I'm not completely against the idea of being able to configure a different "login URL" for the admin endpoints, that would be an enhancement/feature request rather than a bug for sure.