authentik
Outpost Integration - SSH Docker URL fails with error => [Errno -2] Name or service not known
Describe the bug
When setting up an Outpost Integration of type "Docker Service-Connection", defining a valid DNS hostname to connect to (e.g. ssh://user@dockerhost.example.com) fails and returns the error [Errno -2] Name or service not known. Updating the hostname to the associated IP and changing nothing else results in a working connection.
The Local box was unchecked and the appropriate certificate keypair (tested and working with a manual SSH connection from the Authentik server to the target) had been added to Authentik and selected. The relevant Provider+Application settings had already been set up and configured correctly.
I suspect the issue might be that there is a local DNS server on my network that is responsible for the target's DNS record, and the Docker container that attempts the connection is unaware of it and instead uses a typical public DNS resolver. That would explain why I could establish a connection manually using the hostname from the authentik server, but could only create the Outpost Integration successfully using the IP.
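One way to test this hypothesis (a sketch; authentik-worker-1 and dockerhost.example.com are placeholders for the worker container and target host) is to compare name resolution on the host against resolution inside the worker container, which is where the outpost controller runs:

# On the Docker host -- this goes through the host's resolver,
# including any local DNS server:
getent hosts dockerhost.example.com

# Inside the worker container, using the same getaddrinfo() call
# that fails in the traceback below:
docker exec -it authentik-worker-1 python3 -c \
  "import socket; print(socket.getaddrinfo('dockerhost.example.com', 22))"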
If that is indeed the case, then this issue is very limited in scope. And considering the IP at least works, it is not a blocking issue. Still, it might be helpful to add a note to the documentation for Outpost Integrations about trying the IP in the event that a DNS hostname doesn't work.
Logs
There was very little of worth in the docker compose logs. The only relevant entries I could find were system task exception alerts from the Events -> Logs page, reformatted for readability and copied below.
Task outpost_controller encountered an error: Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/celery/app/trace.py", line 451, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/celery/app/trace.py", line 734, in __protected_call__
    return self.run(*args, **kwargs)
  File "/authentik/outposts/tasks.py", line 132, in outpost_controller
    with controller_type(outpost, outpost.service_connection) as controller:
  File "/authentik/providers/ldap/controllers/docker.py", line 11, in __init__
    super().__init__(outpost, connection)
  File "/authentik/outposts/controllers/docker.py", line 96, in __init__
    self.client = DockerClient(connection)
  File "/authentik/outposts/controllers/docker.py", line 59, in __init__
    super().__init__(
  File "/usr/local/lib/python3.10/site-packages/docker/client.py", line 45, in __init__
    self.api = APIClient(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/docker/api/client.py", line 171, in __init__
    self._custom_adapter = SSHHTTPAdapter(
  File "/usr/local/lib/python3.10/site-packages/docker/transport/sshconn.py", line 177, in __init__
    self._connect()
  File "/usr/local/lib/python3.10/site-packages/docker/transport/sshconn.py", line 223, in _connect
    self.ssh_client.connect(**self.ssh_params)
  File "/usr/local/lib/python3.10/site-packages/paramiko/client.py", line 340, in connect
    to_try = list(self._families_and_addresses(hostname, port))
  File "/usr/local/lib/python3.10/site-packages/paramiko/client.py", line 203, in _families_and_addresses
    addrinfos = socket.getaddrinfo(
  File "/usr/local/lib/python3.10/socket.py", line 955, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
[Errno -2] Name or service not known
Version and Deployment (please complete the following information):
- authentik version: 2022.6.2
- Deployment: docker-compose
Docker should, by default, use the system's resolver (via a proxy that resolves container names), and authentik doesn't override that behaviour. You can exec into the container and use curl to test whether it can resolve the hostname.
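For example (a sketch; the container name and hostname are placeholders for your setup):

# Open a shell in the worker container (check the real name with docker ps):
docker exec -it authentik-worker-1 /bin/sh
# From inside, a resolution failure shows up immediately as
# "Could not resolve host":
curl -v telnet://dockerhost.example.com:22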
Looks like I am having a similar issue, at least on Unraid. Both the authentik server and worker containers run in a privileged state, and both have /var/run/docker.sock passed through, yet the state of the outpost integration is 'unhealthy'.
{"event": "Task started", "level": "info", "logger": "authentik.root.celery", "pid": 314, "request_id": "task-199687cd20ab484e8251341d91535764", "task_id": "199687cd-20ab-484e-8251-341d91535764", "task_name": "outpost_service_connection_state", "timestamp": "2022-08-14T01:17:43.329967"} {"event": "Task failure", "exc": "DockerException(\"Error while fetching server API version: ('Connection aborted.', PermissionError(13, 'Permission denied'))\")", "level": "warning", "logger": "authentik.root.celery", "pid": 314, "request_id": "task-199687cd20ab484e8251341d91535764", "timestamp": "2022-08-14T01:17:43.357198"} {"event": "Task authentik.outposts.tasks.outpost_service_connection_state[199687cd-20ab-484e-8251-341d91535764] raised unexpected: DockerException(\"Error while fetching server API version: ('Connection aborted.', PermissionError(13, 'Permission denied'))\")", "exc_info": ["<class 'docker.errors.DockerException'>", "DockerException(\"Error while fetching server API version: ('Connection aborted.', PermissionError(13, 'Permission denied'))\")", "<billiard.einfo.Traceback object at 0x14a577f46830>"], "level": "error", "logger": "celery.app.trace", "timestamp": 1660439863.357686}
Edit: thank you for the solution; my issue is unrelated to OP's.
~~I have found similar behavior when manually deploying an LDAP outpost. If I use the IP for AUTHENTIK_HOST then it works; if I instead use the container_name for AUTHENTIK_HOST then it fails.~~
Example docker-compose.yml
---
version: '3.4'
services:
  authentik_ldap_outpost:
    image: ghcr.io/goauthentik/ldap:2022.8.2
    restart: unless-stopped
    environment:
      # AUTHENTIK_HOST: https://__IP__:9443 # works
      # AUTHENTIK_HOST: https://__SERVER_CONTAINER_NAME__:9443 # errors
      AUTHENTIK_INSECURE: "true"
      AUTHENTIK_TOKEN: SECRET
    ports:
      - 389:3389
      - 636:6636
    networks:
      - authentik
networks:
  authentik:
    name: authentik
The above is deployed alongside the server's docker-compose, where the server is on the authentik network.
In my case, __IP__=192.168.1.118 and __SERVER_CONTAINER_NAME__=authentik_server
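For anyone reproducing this, the server container's name and its IP on the shared network can be confirmed with docker inspect (a sketch; authentik_server is the name from my setup):

# List authentik-related container names:
docker ps --format '{{.Names}}' | grep -i authentik
# Print the IPs the server container holds on each of its networks:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' authentik_server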
When running via IP, the docker-compose logs show a successful connection, for example:
authentik_ldap_outpost_1 | {"event":"Fetched outpost configuration","level":"debug","logger":"authentik.outpost.ak-api-controller","name":"ldap-outpost","timestamp":"2022-09-14T18:28:23Z"}
When running via container_name, we see errors such as:
authentik_ldap_outpost_1 | {"error":"400 Bad Request","event":"Failed to fetch outpost configuration, retrying in 3 seconds","level":"error","logger":"authentik.outpost.ak-api-controller","timestamp":"2022-09-14T18:36:23Z"}
Below is proof that both the IP and the container_name are reachable from within the LDAP container:
Running docker-compose run --rm --entrypoint=ping authentik_ldap_outpost 192.168.1.118 shows:
Creating authentik_ldap_authentik_ldap_outpost_run ... done
PING 192.168.1.118 (192.168.1.118): 56 data bytes
64 bytes from 192.168.1.118: seq=0 ttl=64 time=0.602 ms
64 bytes from 192.168.1.118: seq=1 ttl=64 time=0.267 ms
64 bytes from 192.168.1.118: seq=2 ttl=64 time=0.170 ms
Running docker-compose run --rm --entrypoint=ping authentik_ldap_outpost authentik_server shows:
Creating authentik_ldap_authentik_ldap_outpost_run ... done
PING authentik_server (172.21.0.2): 56 data bytes
64 bytes from 172.21.0.2: seq=0 ttl=64 time=0.964 ms
64 bytes from 172.21.0.2: seq=1 ttl=64 time=0.244 ms
64 bytes from 172.21.0.2: seq=2 ttl=64 time=0.224 ms
@sam-brownlow your issue is unrelated; the 400 Bad Request in your case is caused by docker-compose (note the hyphen; docker space compose doesn't have this issue) giving containers names with underscores. Underscores are not valid in DNS hostnames, so Django, and as a result authentik, rejects them.
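A possible workaround following that explanation (a sketch, not taken from this thread) is to pin a DNS-safe name explicitly in the server's compose file and point AUTHENTIK_HOST at it:

services:
  server:
    # An explicit hyphenated name sidesteps docker-compose v1's
    # underscore naming scheme and is a valid DNS hostname:
    container_name: authentik-server

with AUTHENTIK_HOST: https://authentik-server:9443 in the outpost's environment.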
It's working now 🚀 Thanks for the advice, @BeryJu ❤️
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.