jupyterhub-example-kerberos
Spawning a new user is too slow, times out
Kerberos (kinit) is very slow, taking ~180 seconds to spawn another container. KDC log messages (note the timestamps):
kdc_1 | May 30 22:30:04 f34c218f21d9 krb5kdc[12](info): AS_REQ (6 etypes {18 17 16 23 25 26}) 172.19.0.4: NEEDED_PREAUTH: [email protected] for krbtgt/[email protected], Additional pre-authentication required
kdc_1 | May 30 22:30:24 f34c218f21d9 krb5kdc[12](info): AS_REQ (6 etypes {18 17 16 23 25 26}) 172.19.0.4: ISSUE: authtime 1527719424, etypes {rep=18 tkt=18 ses=18}, [email protected] for krbtgt/[email protected]
kdc_1 | May 30 22:32:31 f34c218f21d9 krb5kdc[12](info): AS_REQ (6 etypes {18 17 16 23 25 26}) 172.19.0.4: NEEDED_PREAUTH: [email protected] for krbtgt/[email protected], Additional pre-authentication required
kdc_1 | May 30 22:32:52 f34c218f21d9 krb5kdc[12](info): AS_REQ (6 etypes {18 17 16 23 25 26}) 172.19.0.4: ISSUE: authtime 1527719572, etypes {rep=18 tkt=18 ses=18}, [email protected] for krbtgt/[email protected]
There's a workaround/solution:
Adding the following to krb5.conf disables DNS-based realm and KDC lookups (and the DNS fallback), which were not working within the minikube cluster. Add these lines to the [libdefaults] section:
dns_lookup_realm = false
dns_lookup_kdc = false
dns_fallback = false
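For reference, here is a minimal sketch of how those settings fit into a full krb5.conf; the EXAMPLE.COM realm and kdc.example.com hostname are placeholders for your own environment. Pinning the KDC explicitly in [realms] means the client never has to attempt DNS SRV discovery at all:
[libdefaults]
    default_realm = EXAMPLE.COM
    dns_lookup_realm = false
    dns_lookup_kdc = false
    dns_fallback = false

[realms]
    EXAMPLE.COM = {
        kdc = kdc.example.com
        admin_server = kdc.example.com
    }

[domain_realm]
    .example.com = EXAMPLE.COM
    example.com = EXAMPLE.COM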
I got a deeper understanding of this through this link: https://kerberos.mit.narkive.com/mf3vf81O/slow-response-with-multiple-kdcs
The symptoms I observed were that Kerberos was working through a list of possible KDC master candidates before finding the right one. Having both a correct nameserver in /etc/resolv.conf and the right KDC master names in /etc/krb5.conf resolved the issue for us.
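If you want to confirm where kinit is spending its time, MIT Kerberos can emit a trace of its KDC resolution and requests; for example (the principal below is a placeholder):
KRB5_TRACE=/dev/stderr kinit user@EXAMPLE.COM
The trace shows each hostname and address the client tries before it reaches a responding KDC, which makes a DNS-related delay easy to spot.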