[Bug] Adding a tag via CLI removes advertised tag
Is this a support request?
- [x] This is not a support request
Is there an existing issue for this?
- [x] I have searched the existing issues
Current Behavior
Since v0.26, adding tags via the CLI seems to invalidate advertised tags, removing them completely from the tag list. This only happens after a new node advertising the same tag is added.
This can be reproduced by adding a node advertising a tag to headscale, then adding a forced tag to that same node. If you then register another node advertising the same tag, the first node loses its advertised tag (for more details see Steps To Reproduce).
This effectively breaks all my ACLs, since several nodes have lost their advertised tags with the update. This might be due to another related issue, which, however, I cannot easily reproduce.
If I can do anything more to help debug this, let me know!
Expected Behavior
CLI-added tags and advertised tags should not influence each other.
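To make the expected semantics concrete, here is a minimal Go sketch (names are hypothetical, this is not headscale's actual implementation, and group expansion is ignored for brevity): advertised tags are validated against `tagOwners` per node, while forced (CLI-added) tags live in a separate list, so registering another node should never change the first node's result.

```go
package main

import "fmt"

// splitAdvertisedTags sketches the expected semantics from this report:
// an advertised tag is valid when the node's user owns it, invalid
// otherwise. Forced (CLI-added) tags are kept in a separate list and
// carried through untouched; they should never interact with the
// advertised ones.
func splitAdvertisedTags(advertised []string, tagOwners map[string][]string, user string) (valid, invalid []string) {
	for _, tag := range advertised {
		owned := false
		for _, owner := range tagOwners[tag] {
			if owner == user {
				owned = true
				break
			}
		}
		if owned {
			valid = append(valid, tag)
		} else {
			invalid = append(invalid, tag)
		}
	}
	return valid, invalid
}

func main() {
	tagOwners := map[string][]string{"tag:vm": {"vms@"}}
	// Registering a second node advertising tag:vm should only recompute
	// that node's lists; the first node's result stays the same.
	valid, invalid := splitAdvertisedTags([]string{"tag:vm"}, tagOwners, "vms@")
	fmt.Println(valid, invalid) // [tag:vm] []
}
```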
Steps To Reproduce
- Configure two tags, e.g., `tag:vm` and `tag:test`
- On machine 1 run:

  ```
  sudo tailscale login --login-server=https://<...> --authkey=<..> --advertise-tags=tag:vm
  ```

- Run `headscale nodes ls --tags` on the headscale server.

  Result, as expected:

  ```
  ID | Hostname | Name | MachineKey | NodeKey | User | IP addresses | Ephemeral | Last seen | Expiration | Connected | Expired | ForcedTags | InvalidTags | ValidTags
  4 | test-vm-1 | test-vm-1 | [A5yrv] | [XuMNd] | vms | 100.64.0.2, fd7a:115c:a1e0::2 | false | 2025-05-19 13:17:35 | N/A | online | no | | | tag:vm
  ```
- On the server run:

  ```
  docker compose exec headscale headscale nodes tag -i 4 -t "tag:test"
  ```

  Result, still as expected:

  ```
  ID | Hostname | Name | MachineKey | NodeKey | User | IP addresses | Ephemeral | Last seen | Expiration | Connected | Expired | ForcedTags | InvalidTags | ValidTags
  4 | test-vm-1 | test-vm-1 | [A5yrv] | [XuMNd] | vms | 100.64.0.2, fd7a:115c:a1e0::2 | false | 2025-05-19 13:17:35 | N/A | online | no | tag:test | | tag:vm
  ```
- On the second machine run:

  ```
  sudo tailscale login --login-server=https://<...> --authkey=<..> --advertise-tags=tag:vm
  ```

  Result:

  ```
  ID | Hostname | Name | MachineKey | NodeKey | User | IP addresses | Ephemeral | Last seen | Expiration | Connected | Expired | ForcedTags | InvalidTags | ValidTags
  4 | test-vm-1 | test-vm-1 | [A5yrv] | [XuMNd] | vms | 100.64.0.2, fd7a:115c:a1e0::2 | false | 2025-05-19 13:17:35 | N/A | online | no | tag:test | |
  5 | test-vm-2 | test-vm-2 | [fQQNE] | [Blqu5] | vms | 100.64.0.3, fd7a:115c:a1e0::3 | false | 2025-05-19 13:19:20 | N/A | online | no | | | tag:vm
  ```

Note that `tag:vm` is now missing from test-vm-1.
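For scripted checks, the table output above can be parsed directly; the following standalone Go sketch (not part of headscale) extracts the ValidTags column from the capture in the previous step and shows that test-vm-1's entry is empty:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// nodesTable is the `headscale nodes ls --tags` output captured above,
// after the second node registered (header row omitted).
const nodesTable = `4 | test-vm-1 | test-vm-1 | [A5yrv] | [XuMNd] | vms | 100.64.0.2, fd7a:115c:a1e0::2 | false | 2025-05-19 13:17:35 | N/A | online | no | tag:test | |
5 | test-vm-2 | test-vm-2 | [fQQNE] | [Blqu5] | vms | 100.64.0.3, fd7a:115c:a1e0::3 | false | 2025-05-19 13:19:20 | N/A | online | no | | | tag:vm`

// parseValidTags maps each hostname to its (trimmed) ValidTags column,
// assuming the 15-column layout shown in this issue.
func parseValidTags(table string) map[string]string {
	tags := map[string]string{}
	for _, line := range strings.Split(table, "\n") {
		cols := strings.Split(line, "|")
		if len(cols) < 15 {
			continue
		}
		host := strings.TrimSpace(cols[1])
		tags[host] = strings.TrimSpace(cols[len(cols)-1])
	}
	return tags
}

func main() {
	tags := parseValidTags(nodesTable)
	hosts := make([]string, 0, len(tags))
	for h := range tags {
		hosts = append(hosts, h)
	}
	sort.Strings(hosts)
	for _, h := range hosts {
		fmt.Printf("%s ValidTags=%q\n", h, tags[h])
	}
	// Prints:
	// test-vm-1 ValidTags=""
	// test-vm-2 ValidTags="tag:vm"
}
```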
Environment
- OS: Ubuntu 24.04
- Headscale version: 0.26
- Tailscale version: 1.82.5
Runtime environment
- [x] Headscale is behind a (reverse) proxy
- [x] Headscale runs in a container
Debug information
Policy:

```jsonc
{
  "groups": {
    "group:admins": ["fabian@"],
    "group:vms": ["vms@"]
  },
  "tagOwners": {
    "tag:vm": ["group:admins", "group:vms"],
    "tag:test": ["fabian@"],
    "tag:exit-node": ["group:admins"]
  },
  // Servers with tag exit node can advertise exit nodes without further approval
  "autoApprovers": {
    "exitNode": ["tag:exit-node"]
  },
  "acls": [
    // Allow admins full access to VMs
    {
      "action": "accept",
      "src": ["group:admins"],
      "dst": ["tag:vm:*"]
    }
  ]
}
```
I have the same issue with a non-Docker install: a fresh deployment on a dedicated Ubuntu 24.04 server, using the latest amd64 .deb file.
I looked into this and wrote a simple integration test to reproduce the error. It is currently just for visual inspection and doesn't yet assert the expected state, as I ran out of time.
```go
func TestNodesTags(t *testing.T) {
	zerolog.SetGlobalLevel(zerolog.TraceLevel)
	IntegrationSkip(t)
	t.Parallel()

	policy := &policyv2.Policy{
		TagOwners: policyv2.TagOwners{
			policyv2.Tag("tag:vm"): policyv2.Owners{usernameOwner("[email protected]")},
		},
	}

	scenario, err := NewScenario(ScenarioSpec{})
	assertNoErr(t, err)
	defer scenario.ShutdownAssertNoPanics(t)

	err = scenario.CreateHeadscaleEnv(
		[]tsic.Option{},
		hsic.WithACLPolicy(policy),
	)
	assertNoErr(t, err)

	headscale, err := scenario.Headscale()
	assertNoErr(t, err)

	// Create a user.
	u, err := scenario.CreateUser("user1")
	assertNoErr(t, err)

	// Create a pre-auth key for spinning up nodes.
	key, err := scenario.CreatePreAuthKey(u.GetId(), true, true)
	if err != nil {
		t.Fatalf("failed to create pre-auth key for user %s: %s", u.Name, err)
	}

	// Create a node with advertised tag `tag:vm`.
	err = scenario.CreateTailscaleNodesInUser(u.Name,
		"all",
		1,
		tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]),
		tsic.WithTags([]string{"tag:vm"}))
	assertNoErr(t, err)

	err = scenario.RunTailscaleUp(u.Name, headscale.GetEndpoint(), key.GetKey())
	if err != nil {
		t.Fatalf("failed to run tailscale up for user %s: %s", u.Name, err)
	}

	// Add a forced tag to the node.
	out, err := headscale.Execute(
		[]string{
			"headscale",
			"nodes",
			"tag",
			"-i", "1",
			"-t", "tag:lol",
			"--output", "json",
		},
	)
	assert.Nil(t, err)
	t.Logf("Output: %+v\n", out)

	// Dump the node list; forced and advertised tags should both be present.
	out, err = headscale.Execute(
		[]string{
			"headscale",
			"nodes",
			"list",
			"--tags",
		},
	)
	assert.Nil(t, err)
	fmt.Println(out)

	// Create a second node, also advertising `tag:vm`.
	err = scenario.CreateTailscaleNodesInUser(u.Name,
		"all",
		1,
		tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]),
		tsic.WithTags([]string{"tag:vm"}))
	assertNoErr(t, err)

	err = scenario.RunTailscaleUp(u.Name, headscale.GetEndpoint(), key.GetKey())
	if err != nil {
		t.Fatalf("failed to run tailscale up for user %s: %s", u.Name, err)
	}

	// Dump the node list again; the first node has now lost its advertised tag.
	out, err = headscale.Execute(
		[]string{
			"headscale",
			"nodes",
			"list",
			"--tags",
		},
	)
	assert.Nil(t, err)
	fmt.Println(out)

	// Fail deliberately so the output above is printed.
	assertNoErr(t, errors.New("invalid"))
}
```