caddy-security
question: Logged out on caddy restart
I've set up Caddy as per the Google Identity Platform instructions, working from the associated Caddyfile.
This works great. Caddy Security is authenticating users and passing the X-Token-... headers through to the backend service just fine. However, restarting Caddy causes all users to be forced back through Google's OAuth flow.
It seems to me that my setup of Caddy Security is storing session information in memory or other ephemeral storage. Is there a way I can ensure Caddy Security doesn't force users to re-authenticate after a restart? Thank you.
Edit: My Caddyfile is as follows:
{
    # TODO: Comment out debugging output
    debug

    # Use Let's Encrypt with the DNS-01 solver
    acme_ca https://acme-v02.api.letsencrypt.org/directory
    acme_dns route53
    storage file_system /var/lib/caddy/

    order authenticate before respond
    order authorize before basicauth

    # Google OAuth authentication
    security {
        oauth identity provider google {
            realm google
            driver google
            client_id ...
            client_secret ...
            scopes openid email profile
        }

        authentication portal my_portal {
            crypto default token lifetime 3600
            crypto key sign-verify {{ infra_gateway_jwt_shared_secret }}
            enable identity provider google
            cookie domain example.com

            ui {
                links {
                    "My Identity" "/whoami" icon "las la-user"
                }
            }

            transform user {
                match realm google
                action add role authp/user
                ui link "my_service" https://my_service.example.com icon "las la-my_service"
            }

            # TODO: One block for each admin
            transform user {
                match realm google
                match email [email protected]
                action add role authp/admin
            }
        }

        authorization policy my_policy {
            set auth url https://auth.example.com/oauth2/google
            crypto key verify {{ infra_gateway_jwt_shared_secret }}
            allow roles authp/admin authp/user
            validate bearer header
            inject headers with claims
            inject header "X-Token-Family-Name" from family_name
            inject header "X-Token-Given-Name" from given_name
            inject header "X-Token-Picture" from picture
        }
    }
}
(tls_config) {
    tls {
        dns route53 {
        }
    }
}

auth.example.com {
    import tls_config
    authenticate with my_portal
}

*.example.com, example.com {
    # import tls_config

    @foo host foo.example.com
    handle @foo {
        respond "Foo!"
    }

    @bar host bar.example.com
    handle @bar {
        respond "Bar!"
    }

    @my_service host my_service.example.com
    handle @my_service {
        reverse_proxy {{ my_service_ip }}
        route {
            authorize with my_policy
        }
    }

    # Fallback for otherwise unhandled domains
    handle {
        abort
    }
}
@adamcharnock, the only way users will be required to authenticate again is if the crypto key changes. Are you sure that {{ infra_gateway_jwt_shared_secret }} resolves to something and is not empty?
Also, this directive is incorrect:

@my_service host my_service.example.com
handle @my_service {
    reverse_proxy {{ my_service_ip }}
    route {
        authorize with my_policy
    }
}

It should be:

@my_service host my_service.example.com
handle @my_service {
    authorize with my_policy
    reverse_proxy {{ my_service_ip }}
}
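In this arrangement, the request is checked against my_policy before the proxy handler runs, so a request without a valid token should be redirected to the auth portal (the set auth url configured above) rather than being passed straight through to the backend.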
Thank you very much for your fast response @greenpau. I have made the correction you pointed out.
Re the tokens, I have checked the Caddyfile and the relevant lines are as follows:
crypto key sign-verify MuxXWyjXMlLpHMCFzKyJTtgISpn6B8NyjWiaTJsi9Z4l4htjjEmrIUQS6NsEw2QF
...
crypto key verify MuxXWyjXMlLpHMCFzKyJTtgISpn6B8NyjWiaTJsi9Z4l4htjjEmrIUQS6NsEw2QF
(Those are fresh random strings, but they are representative of what I am using: the length and character set are the same. I have confirmed that both values are identical.)
FWIW, I'm also setting the JWT_SECRET environment variable in the service's environment. I'm not sure if that is required, but I saw a mention of it somewhere when googling around this problem over the weekend.
@adamcharnock, if the crypto keys are left blank, then caddy-security auto-generates them at startup. If the keys are auto-generated, the old tokens signed with the old key are no longer valid and you will have to re-authenticate.
Did hard-coding the values solve your issue?
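For reference, one way to keep the key stable across restarts without templating it directly into the Caddyfile is Caddy's environment variable substitution. This is only a minimal sketch; the JWT_SHARED_KEY variable name and the systemd Environment= line are illustrative choices, not something caddy-security requires:

# Export the secret wherever Caddy's process environment is defined,
# e.g. a systemd drop-in: Environment=JWT_SHARED_KEY=<long random string>
#
# {$JWT_SHARED_KEY} is substituted when the Caddyfile is parsed.

# In the authentication portal block:
crypto key sign-verify {$JWT_SHARED_KEY}

# In the authorization policy block:
crypto key verify {$JWT_SHARED_KEY}

As long as both directives resolve to the same non-empty value on every start, previously issued tokens keep validating across restarts (until their configured lifetime expires).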