[Bug] Automatic route approval doesn't work for tagged preauth keys
Is this a support request?
- [x] This is not a support request
Is there an existing issue for this?
- [x] I have searched the existing issues
Current Behavior
This may be normal behavior; if so, just close the issue.
If a tag is set on the pre-auth key, the `autoApprovers` route policy is not applied.
Everything works correctly for keys without tags.
Adding the tag to the `autoApprovers` list restores automatic approval, e.g.:
"autoApprovers": {
"routes": {
"10.0.0.0/8": [
"group:ops",
"tag:bastion"
]
}
}
Expected Behavior
Same behavior for tagged and untagged keys: the ability to use a group for automatic approval.
Steps To Reproduce
Policy
```json
{
  "groups": {
    "group:ops": [
      "ops@",
    ]
  },
  "autoApprovers": {
    "routes": {
      "10.0.0.0/8": [
        "group:ops"
      ]
    }
  }
}
```
Create a key with a tag
```bash
# user id=1 - `ops` user in `ops` group, in this context
headscale preauthkeys -u 1 create --reusable --ephemeral --tags "tag:bastion"
```
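To confirm the tag is attached to the key itself, listing the user's keys should show it (assuming the same 0.27.x CLI as above):
```bash
# List the user's pre-auth keys; the tag should appear on the key created above
headscale preauthkeys -u 1 list
```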
Bring the client up
```bash
tailscale up --authkey "$TAILSCALE_AUTH_KEY" \
  --hostname "$TAILSCALE_HOSTNAME" \
  --login-server "$TAILSCALE_LOGIN_SERVER" \
  --advertise-routes "10.4.0.0/16"
```
Headscale routes list after the client is up
```
ID | Hostname    | Approved | Available   | Serving (Primary)
69 | uat-bastion |          | 10.4.0.0/16 |
```
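As a stopgap, the route can still be approved by hand; a sketch assuming the `headscale nodes approve-routes` subcommand of the 0.27.x CLI:
```bash
# Manually approve the advertised route for the node (node ID taken from the listing above)
headscale nodes approve-routes --identifier 69 --routes "10.4.0.0/16"
```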
Environment
- OS: Debian 12
- Headscale version: 0.27.1
- Tailscale version:
Runtime environment
- [ ] Headscale is behind a (reverse) proxy
- [ ] Headscale runs in a container
Debug information
DB
```
sqlite> select id,hostname,host_info,approved_routes from nodes where id='81';
81|uat-bastion|{"OS":"linux","Distro":"debian","DistroVersion":"12.12","Hostname":"uat-bastion","Machine":"x86_64","GoArch":"amd64","RoutableIPs":["10.4.0.0/16"],"NetInfo":{"MappingVariesByDestIP":true,"HairPinning":null,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":true,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":21,"DERPLatency":{"1-v4":0.009070842,"10-v4":0.081262963,"11-v4":0.13606014,"12-v4":0.02044391,"13-v4":0.061451616,"14-v4":0.081832816,"15-v4":0.241200031,"16-v4":0.039034265,"17-v4":0.074230918,"18-v4":0.082251698,"19-v4":0.10117933,"2-v4":0.082480161,"20-v4":0.234122746,"21-v4":0.008008314,"22-v4":0.101433938,"24-v4":0.123173362,"25-v4":0.234396404,"26-v4":0.099917196,"27-v4":0.015020913,"28-v4":0.123128849,"4-v4":0.087856278,"5-v4":0.226289572,"7-v4":0.184715257,"8-v4":0.078007608,"9-v4":0.052908437},"FirewallMode":"ipt-default"},"Cloud":"aws","Userspace":false,"UserspaceRouter":false,"AppConnector":false,"StateEncrypted":false}|
```
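If it helps, I can also check whether the tag from the key was recorded on the node itself; a sketch assuming the default SQLite path and a `forced_tags` column on `nodes` (both may differ by setup and schema version):
```bash
# Inspect the tags stored for the node (column name assumed; may differ by schema version)
sqlite3 /var/lib/headscale/db.sqlite \
  "select id, hostname, forced_tags from nodes where id = 81;"
```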
Debug log:
```
Nov 14 12:12:31 ip-10-255-1-11 headscale[266502]: {
"level": "info",
"node.id": 81,
"node.name": "uat-bastion",
"time": 1763122351,
"message": "Node connected"
}
Nov 14 12:12:31 ip-10-255-1-11 headscale[266502]: {
"level": "debug",
"caller": "/home/runner/work/headscale/headscale/hscontrol/routes/primary.go:157",
"node.id": 81,
"prefixes": [],
"time": 1763122351,
"message": "PrimaryRoutes.SetRoutes called"
}
Nov 14 12:12:31 ip-10-255-1-11 headscale[266502]: {
"level": "debug",
"caller": "/home/runner/work/headscale/headscale/hscontrol/routes/primary.go:49",
"time": 1763122351,
"message": "updatePrimaryLocked starting"
}
Nov 14 12:12:31 ip-10-255-1-11 headscale[266502]: {
"level": "debug",
"caller": "/home/runner/work/headscale/headscale/hscontrol/routes/primary.go:81",
"prefix": "10.252.18.0/24",
"availableNodes": [2],
"time": 1763122351,
"message": "Processing prefix for primary route selection"
}
Nov 14 12:12:31 ip-10-255-1-11 headscale[266502]: {
"level": "debug",
"caller": "/home/runner/work/headscale/headscale/hscontrol/routes/primary.go:140",
"changed": false,
"finalState": "Available routes:
Node 2: 10.254.6.0/24, 10.18.0.0/16,
Node 69: 10.32.0.0/16
Current primary routes:
Route 10.254.6.0/24: 2
Route 10.18.0.0/16: 2
"time": 1763122351,
"message": "updatePrimaryLocked completed"
}
Nov 14 12:12:31 ip-10-255-1-11 headscale[266502]: {
"level": "debug",
"caller": "/home/runner/work/headscale/headscale/hscontrol/routes/primary.go:175",
"node.id": 81,
"wasPresent": false,
"changed": false,
"newState": "Available routes:
Node 2: 10.254.6.0/24, 10.18.0.0/16,
Node 69: 10.32.0.0/16
Current primary routes:
Route 10.254.6.0/24: 2
Route 10.18.0.0/16: 2",
"time": 1763122351,
"message": "SetRoutes completed (remove)"
}
Nov 14 12:12:31 ip-10-255-1-11 headscale[266502]: {
"level": "info",
"caller": "/home/runner/work/headscale/headscale/hscontrol/poll.go:383",
"omitPeers": false,
"stream": true,
"node.id": 81,
"node.name": "uat-bastion",
"time": 1763122351,
"message": "node has connected, mapSession: 0xc0001f9c80, chan: 0xc00028afc0"
}
Nov 14 12:12:31 ip-10-255-1-11 headscale[266502]: {
"level": "debug",
"caller": "/home/runner/work/headscale/headscale/hscontrol/mapper/batcher_lockfree.go:499",
"node.id": 81,
"chan": "0xc00028afc0",
"conn.id": "b18a740e7149fe3d",
"time": 1763122351,
"message": "addConnection: waiting for mutex - POTENTIAL CONTENTION POINT"
}
Nov 14 12:12:31 ip-10-255-1-11 headscale[266502]: {
"level": "debug",
"caller": "/home/runner/work/headscale/headscale/hscontrol/mapper/batcher_lockfree.go:507",
"node.id": 81,
"chan": "0xc00028afc0",
"conn.id": "b18a740e7149fe3d",
"total_connections": 1,
"mutex_wait_time": 0.009107,
"time": 1763122351,
"message": "Successfully added connection after mutex wait"
}
Nov 14 12:12:31 ip-10-255-1-11 headscale[266502]: {
"level": "debug",
"caller": "/home/runner/work/headscale/headscale/hscontrol/mapper/batcher_lockfree.go:101",
"node.id": 81,
"total.duration": 8.37243,
"active.connections": 1,
"time": 1763122351,
"message": "Node connection established in batcher because AddNode completed successfully"
}
Nov 14 12:12:31 ip-10-255-1-11 headscale[266502]: {
"level": "debug",
"caller": "/home/runner/work/headscale/headscale/hscontrol/poll.go:232",
"node.id": 81,
"node.name": "uat-bastion",
"time": 1763122351,
"message": "AddNode succeeded in poll session because node added to batcher"
}
```
Did this work in 0.26.1? Just want to classify it, and if it did not, I will tag it for the next release, which is focusing on fixing tags to behave correctly.
Hi @kradalby
I can't say for sure. In version 0.27.1 I used a pre-auth key with a tag for the first time, so as not to declare tags separately for each host; before that I simply did not use this feature.
I'm not sure I'll have time to test version 0.26.1 for this behavior anytime soon.
Don't worry, I'll treat it as a tag problem and put it under 0.28.