LDAP Outposts on Kubernetes: can't expose LDAP outside the k8s cluster
I used the guide below to set up my first LDAP outpost: https://version-2023-10.goauthentik.io/docs/providers/ldap/generic_setup
- Created a user and a service account, created a group, and assigned both accounts to it.
- Created the LDAP provider and application.
- Created the flows and stages.
- Created the outpost with the config below:
log_level: trace
docker_labels: null
authentik_host: http://authentik.example.com/
docker_network: null
container_image: null
docker_map_ports: true
kubernetes_replicas: 1
kubernetes_namespace: devops
authentik_host_browser: ""
object_naming_template: ak-outpost-%(name)s
authentik_host_insecure: false
kubernetes_json_patches: null
kubernetes_service_type: ClusterIP
kubernetes_image_pull_secrets: []
kubernetes_ingress_class_name: null
kubernetes_disabled_components: []
kubernetes_ingress_annotations:
  kubernetes.io/ingress.class: nginx
kubernetes_ingress_secret_name: ak-outpost-ldap
Everything looks OK in the authentik UI and in the outpost logs.
Logs (output of docker-compose logs or kubectl logs respectively):
{"event":"Update providers","level":"info","logger":"authentik.outpost.ldap","timestamp":"2023-11-18T16:50:30Z"}
{"event":"hello'd","level":"trace","logger":"authentik.outpost.ak-api-controller","loop":"ws-health","timestamp":"2023-11-18T16:50:38Z"}
{"event":"hello'd","level":"trace","logger":"authentik.outpost.ak-api-controller","loop":"ws-health","timestamp":"2023-11-18T16:50:48Z"}
{"event":"hello'd","level":"trace","logger":"authentik.outpost.ak-api-controller","loop":"ws-health","timestamp":"2023-11-18T16:50:58Z"}
{"event":"hello'd","level":"trace","logger":"authentik.outpost.ak-api-controller","loop":"ws-health","timestamp":"2023-11-18T16:51:08Z"}
{"event":"hello'd","level":"trace","logger":"authentik.outpost.ak-api-controller","loop":"ws-health","timestamp":"2023-11-18T16:51:18Z"}
Version and Deployment (please complete the following information):
- authentik version: 2023.10.3
- Deployment: helm
I know the outpost creates all its resources in my k8s cluster, but it didn't create any ingress rule to expose the LDAP ports of the outpost service. Which part of my setup is wrong, or what can I do to resolve this problem?
Same issue here. I'm able to get the LDAP server exposed with the following changes to the outpost configuration under Applications -> Outposts -> LDAP outpost -> Edit:
kubernetes_json_patches:
  deployment:
    - op: replace
      path: /spec/template/spec/containers/0/ports/0
      value:
        hostPort: 389
        containerPort: 3389
    - op: replace
      path: /spec/template/spec/containers/0/ports/1
      value:
        hostPort: 636
        containerPort: 6636
    - op: replace
      path: /spec/template/spec/containers/0/ports/2
This alone does not really help, since I don't know of a way to update my DNS with the correct IP for the outpost as-is. Any time the outpost gets scheduled onto a new node, everything using LDAP will break.
Maybe the same technique could be used for node affinity settings (see the sketch below)? Maybe I'm missing something entirely? As it is, authentik LDAP seems utterly useless when authentik is deployed to k8s.
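For what it's worth, the same kubernetes_json_patches mechanism might cover the node affinity idea: pinning the pod to one labeled node would at least keep the hostPort IP stable. A minimal sketch, assuming a made-up ldap-host node label (label the node first with kubectl label node <node-name> ldap-host=true):

kubernetes_json_patches:
  deployment:
    - op: add
      path: /spec/template/spec/nodeSelector
      value:
        # hypothetical label; apply it to the chosen node beforehand
        ldap-host: "true"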
This is not related to authentik; it is related to the k8s infrastructure. Use a different Kubernetes service type, like LoadBalancer, according to your K8s infrastructure. The default service type exposes the service only internally and needs a gateway or ingress to route traffic from outside.
See https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
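For example, assuming your cluster can actually provision load balancers (cloud provider, MetalLB, etc.), changing one field in the outpost configuration shown above should be enough for authentik to recreate the Service with an external IP:

# in the outpost config (Applications -> Outposts -> edit),
# replace the ClusterIP default:
kubernetes_service_type: LoadBalancer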
@cowboyxup I know about that, but would you please give me an example of how to route from an ingress to the LDAP outpost? The main authentik Helm chart only ships an official ingress for the authentik UI.
@kawanjaberi you need an ingress controller that supports TCP. ingress-nginx, for example, does not, but Traefik might.
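A rough sketch of the Traefik approach, assuming the Traefik CRDs are installed, a TCP entrypoint named ldap exists in Traefik's static configuration (e.g. --entrypoints.ldap.address=:389), and the outpost Service is named ak-outpost-ldap on port 389 per the naming template above; verify both names against your cluster:

apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
  name: ldap-outpost
  namespace: devops
spec:
  entryPoints:
    - ldap                       # TCP entrypoint from Traefik's static config
  routes:
    - match: HostSNI(`*`)        # required catch-all for non-TLS TCP routing
      services:
        - name: ak-outpost-ldap  # the ClusterIP Service authentik created (assumed name)
          port: 389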
Create a separate Service in K8s manually, of type LoadBalancer. I've installed MetalLB, so my service gets a dedicated IP with the LDAP service ports exposed. (Sample below; adjust to your setup.)
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: ldap-server
  annotations:
    # metallb exposes the LDAP server on this IP
    metallb.universe.tf/loadBalancerIPs: 192.168.74.100
spec:
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - name: ldap
      port: 389
      protocol: TCP
      targetPort: 3389
    - name: ldaps
      port: 636
      protocol: TCP
      targetPort: 6636
  selector:
    app.kubernetes.io/name: authentik-outpost-ldap
    goauthentik.io/outpost-name: <take-from-outpost-service>
    goauthentik.io/outpost-type: ldap
    goauthentik.io/outpost-uuid: <take-this-from-existing-ldap-outpost-service>
  type: LoadBalancer
kubectl apply -f service.yaml
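To fill in the selector placeholders, you can read the label values off the Service authentik already created. The name ak-outpost-ldap is an assumption based on the naming template above; adjust the namespace and name to yours:

kubectl -n devops get service ak-outpost-ldap -o jsonpath='{.spec.selector}'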
The recommended option to expose LDAP on Kubernetes is setting kubernetes_service_type: LoadBalancer.
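Once that is set (or the manual Service above is applied), you can check that an external IP was assigned, again assuming the names used earlier in this thread:

kubectl -n devops get service ak-outpost-ldap

Point your LDAP clients or DNS at the EXTERNAL-IP it reports, on ports 389/636.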