
2nd server fails to join existing cluster - starting kubernetes: preparing server: bootstrap data already found and encrypted with different token

papapumpnz opened this issue 3 years ago · 22 comments

Environmental Info: K3s Version: k3s version v1.21.3+k3s1 (1d1f220f) go version go1.16.6

Node(s) CPU architecture, OS, and Version: Linux pchost0 5.4.0-81-generic #91-Ubuntu SMP Thu Jul 15 19:09:17 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

Cluster Configuration: 1 master, 5 agents

Describe the bug:

When adding a second server to an existing cluster, the k3s service fails to start with the following error:

level=fatal msg="starting kubernetes: preparing server: bootstrap data already found and encrypted with different token"

Steps To Reproduce:

  • Installed K3s:
  • curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='--write-kubeconfig-mode=644' sh -s - server --datastore-endpoint="mysql://user:password@tcp(xxx.home:3312)/kubernetes" --node-taint CriticalAddonsOnly=true:NoExecute --tls-san cluster.home

Expected behavior:

The server should join the cluster and the k3s service should start.

Actual behavior:

The k3s service fails to start.

Additional context / logs:

This is a fresh install of Ubuntu 20; no prior installation was attempted before running into this error. The existing cluster is working fine and all nodes have joined. I upgraded a node after this error without issue, and likewise upgraded the existing server afterwards without issue. The load balancer behind the DNS name cluster.home is working fine with the existing server as its only member.

Aug 19 08:05:12 pchost0 systemd[1]: k3s.service: Scheduled restart job, restart counter is at 48.
-- Subject: Automatic restarting of a unit has been scheduled
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Automatic restarting of the unit k3s.service has been scheduled, as the result for
-- the configured Restart= setting for the unit.
Aug 19 08:05:12 pchost0 systemd[1]: Stopped Lightweight Kubernetes.
-- Subject: A stop job for unit k3s.service has finished
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A stop job for unit k3s.service has finished.
--
-- The job identifier is 70877 and the job result is done.
Aug 19 08:05:12 pchost0 systemd[1]: Starting Lightweight Kubernetes...
-- Subject: A start job for unit k3s.service has begun execution
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit k3s.service has begun execution.
--
-- The job identifier is 70877.
Aug 19 08:05:12 pchost0 sh[35423]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
Aug 19 08:05:12 pchost0 sh[35429]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory
Aug 19 08:05:12 pchost0 k3s[35439]: time="2021-08-19T08:05:12.449738243Z" level=info msg="Starting k3s v1.21.3+k3s1 (1d1f220f)"
Aug 19 08:05:12 pchost0 k3s[35439]: time="2021-08-19T08:05:12.457797198Z" level=info msg="Configuring mysql database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
Aug 19 08:05:12 pchost0 k3s[35439]: time="2021-08-19T08:05:12.457861474Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
Aug 19 08:05:12 pchost0 k3s[35439]: time="2021-08-19T08:05:12.461610412Z" level=info msg="Database tables and indexes are up to date"
Aug 19 08:05:12 pchost0 k3s[35439]: time="2021-08-19T08:05:12.467076907Z" level=info msg="Kine listening on unix://kine.sock"
Aug 19 08:05:12 pchost0 k3s[35439]: time="2021-08-19T08:05:12.492501196Z" level=fatal msg="starting kubernetes: preparing server: bootstrap data already found and encrypted with different token"

Backporting

  • [ ] Needs backporting to older releases

papapumpnz · Aug 19 '21

You need to provide the same --token to both servers when joining the cluster. If you didn't specify the token when starting the first server, you can get it off the disk on that node, and provide it to the second node.
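
For reference, a minimal sketch of pulling that token off the first server's disk (assuming the default data directory; the path comes up again later in this thread):

# run on the existing server
sudo cat /var/lib/rancher/k3s/server/token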

brandond · Aug 19 '21

Yes, thank you. I did try that, using

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='--write-kubeconfig-mode=644' sh -s - server --token K10d54038c7b1c717cf83e24c664cce7f0c0f77f855639e1de62cc1d6c899e8cbdf::server:8c123b4e1c9559f2b7e13ed21a1aada7 --datastore-endpoint="mysql://user:pass@tcp(pchost1.home:3312)/kubernetes" --node-taint CriticalAddonsOnly=true:NoExecute --tls-san cluster.home

But same issue. I did not use --token when installing the first server though, and obtained the token from that server to install the second.

papapumpnz · Aug 19 '21

Can you try with 1.21.4?
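
For a pinned upgrade, a sketch using the install script's INSTALL_K3S_VERSION variable (same server flags as before; the token and DB credentials are placeholders):

curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.21.4+k3s1 INSTALL_K3S_EXEC='--write-kubeconfig-mode=644' sh -s - server --token <TOKEN> --datastore-endpoint="mysql://user:pass@tcp(pchost1.home:3312)/kubernetes" --node-taint CriticalAddonsOnly=true:NoExecute --tls-san cluster.home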

brandond · Aug 20 '21

OK, tried that; same issue. I did not update the existing joined server to 1.21.4.

[INFO]  Finding release for channel latest
[INFO]  Using v1.21.4+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.21.4+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.21.4+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
Job for k3s.service failed because the control process exited with error code.
See "systemctl status k3s.service" and "journalctl -xe" for details.
pchost@pchost0:~$ journalctl -xe
-- The job identifier is 1586415.
Aug 20 09:19:42 pchost0 sh[539918]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
Aug 20 09:19:42 pchost0 sh[539924]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory
Aug 20 09:19:42 pchost0 k3s[539935]: time="2021-08-20T09:19:42.452135577Z" level=info msg="Starting k3s v1.21.4+k3s1 (3e250fdb)"
Aug 20 09:19:42 pchost0 k3s[539935]: time="2021-08-20T09:19:42.458345643Z" level=info msg="Configuring mysql database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
Aug 20 09:19:42 pchost0 k3s[539935]: time="2021-08-20T09:19:42.458388162Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
Aug 20 09:19:42 pchost0 k3s[539935]: time="2021-08-20T09:19:42.463098096Z" level=info msg="Database tables and indexes are up to date"
Aug 20 09:19:42 pchost0 k3s[539935]: time="2021-08-20T09:19:42.474353568Z" level=info msg="Kine listening on unix://kine.sock"
Aug 20 09:19:42 pchost0 k3s[539935]: time="2021-08-20T09:19:42.482050133Z" level=fatal msg="starting kubernetes: preparing server: bootstrap data already found and encrypted with different token"
Aug 20 09:19:42 pchost0 systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
-- Subject: Unit process exited
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- An ExecStart= process belonging to unit k3s.service has exited.
--
-- The process' exit code is 'exited' and its exit status is 1.
Aug 20 09:19:42 pchost0 systemd[1]: k3s.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit k3s.service has entered the 'failed' state with result 'exit-code'.
Aug 20 09:19:42 pchost0 systemd[1]: Failed to start Lightweight Kubernetes.
-- Subject: A start job for unit k3s.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit k3s.service has finished with a failure.
--
-- The job identifier is 1586415 and the job result is failed.

financefeast · Aug 20 '21

Update the existing server first, then the one you're attempting to join. Can you confirm that you're using the token from disk off the first server?

brandond · Aug 21 '21

OK, current master upgraded to 1.21.4:

pchost@pchost1:~$ kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
pchost1.home   Ready    control-plane,master   53d   v1.21.4+k3s1

Installing the second master with the token obtained from the first. That's confirmed. Still the same issue.

Aug 22 02:25:03 pchost7 sh[5125]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
Aug 22 02:25:03 pchost7 sh[5126]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory
Aug 22 02:25:03 pchost7 k3s[5129]: time="2021-08-22T02:25:03.627737816Z" level=info msg="Starting k3s v1.21.4+k3s1 (3e250fdb)"
Aug 22 02:25:03 pchost7 k3s[5129]: time="2021-08-22T02:25:03.634175363Z" level=info msg="Configuring mysql database connection pooling: maxIdleConns=2, maxOpenConns=0, co>
Aug 22 02:25:03 pchost7 k3s[5129]: time="2021-08-22T02:25:03.634209704Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
Aug 22 02:25:03 pchost7 k3s[5129]: time="2021-08-22T02:25:03.638262194Z" level=info msg="Database tables and indexes are up to date"
Aug 22 02:25:03 pchost7 k3s[5129]: time="2021-08-22T02:25:03.648895469Z" level=info msg="Kine listening on unix://kine.sock"
Aug 22 02:25:03 pchost7 k3s[5129]: time="2021-08-22T02:25:03.656249538Z" level=fatal msg="starting kubernetes: preparing server: bootstrap data already found and encrypte>
Aug 22 02:25:03 pchost7 systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE

Out of desperation I have reimaged the second machine, using a different hostname, and tried installing again. Exactly the same issue with 1.21.4.

financefeast · Aug 22 '21

I'm unable to reproduce this. Can you provide the exact commands you're using to install the second node?

brandond · Aug 26 '21

DB user and pass substituted, but the exact command is below:

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='--write-kubeconfig-mode=644' sh -s - server --token K10d54038c7b1c717cf83e24c664cce7f0c0f77f855639e1de62cc1d6c899e8cbdf::server:8c123b4e1c9559f2b7e13ed21a1aada7 --datastore-endpoint="mysql://user:pass@tcp(pchost1.home:3312)/kubernetes" --node-taint CriticalAddonsOnly=true:NoExecute --tls-san cluster.home

financefeast · Aug 26 '21

You need to provide the same --token to both servers when joining the cluster. If you didn't specify the token when starting the first server, you can get it off the disk on that node, and provide it to the second node.

This helped me in v1.21.3+k3s1 with PostgreSQL as External DB. Many Thanks!

armourshield · Aug 26 '21

So I'm getting the token via this command on the first server:

sudo cat /var/lib/rancher/k3s/server/node-token

This is the token I'm supplying after running the above command:

K10d54038c7b1c717cf83e24c664cce7f0c0f77f865639e1de62cc1d6c899e8cbdf::server:8c123b4e1c9559f2b7e13ed21a1aada7

It's the same token I've successfully bootstrapped the nodes with. I tried supplying that token when upgrading the first server as well, and also received the same error: "bootstrap data already found and encrypted with different token".

financefeast · Aug 28 '21

And this is the exact command used to install the second server (minus creds):

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='--write-kubeconfig-mode=644' sh -s - server --token K10d54038c7b1c717cf83e24c664cce7f0c0f77f855639e1de62cc1d6c899e8cbdf::server:8c123b4e1c9559f2b7e13ed21a1aada7 --datastore-endpoint="mysql://user:pass@tcp(pchost1.home:3312)/kubernetes" --node-taint CriticalAddonsOnly=true:NoExecute --tls-san cluster.home

I also tried passing the token via the K3S_TOKEN environment variable rather than the --token argument:

curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=latest K3S_TOKEN="K10d54038c7b1c717cf83e24c664cce7f0c0f77f865639e1de62cc1d6c899e8cbdf::server:8c123b4e1c9559f2b7e13ed21a1a" INSTALL_K3S_EXEC='--write-kubeconfig-mode=644' sh -s  - server --datastore-endpoint="mysql://user:password@tcp(dbserver:3312)/kubernetes" --node-taint CriticalAddonsOnly=true:NoExecute --tls-san cluster.home

Both gave the same result.

financefeast · Aug 28 '21

I think the token being used is the wrong one. I used the token from /var/lib/rancher/k3s/server/token.
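
To see whether the two files actually differ on a given install (a sketch, assuming the default data directory; on many releases node-token is just a symlink to token, so both print the same value):

sudo ls -l /var/lib/rancher/k3s/server/token /var/lib/rancher/k3s/server/node-token
sudo diff /var/lib/rancher/k3s/server/token /var/lib/rancher/k3s/server/node-token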

armourshield · Aug 30 '21

Thanks for everyone's help here. In utter desperation and despair I completely destroyed the existing cluster and set it up again from scratch. Adding the second server worked fine this time around, and the cluster is happy again.

This was the command I used to install the second cluster server:

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='--write-kubeconfig-mode=644' sh -s - server --token K10d54038c7b1c717cf83e24c664cce7f0c0f77f855639e1de62cc1d6c899e8cbdf::server:8c123b4e1c9559f2b7e13ed21a1aada7 --datastore-endpoint="mysql://user:pass@tcp(pchost1.home:3312)/kubernetes" --node-taint CriticalAddonsOnly=true:NoExecute --tls-san cluster.home

financefeast · Aug 30 '21

Is there any way to recover from this that doesn't involve rerolling the cluster?

We've been using the following for around a year now with no issues until this week.

curl -sfL https://get.k3s.io | \
            INSTALL_K3S_CHANNEL="v1.19" \
            K3S_DATASTORE_CAFILE="/srv/rds-combined-ca-bundle.pem" \
            K3S_DATASTORE_ENDPOINT="postgres://{{ sceptre_user_data.db_user }}:{{ sceptre_user_data.db_pass }}@redacted.cu0h45m70q9e.us-west-2.rds.amazonaws.com:5432/{{ sceptre_user_data.db_name }}" \
            INSTALL_K3S_EXEC="--tls-san k3s-server.internal.{{ sceptre_user_data.domain_name }} --disable traefik --node-taint k3s-controlplane=true:NoExecute" \
            sh -

I've tried upgrading to v1.21 and tried using K3S_TOKEN with the same token the agents use to connect; same issue.

I'm not sure if something changed in v1.19 that regressed this, or if all my masters just happened to go down at the same time and that caused it.

Rolling back to the v1.18 channel is the closest I've gotten to this working; most data seems to be there, including pods, but the cluster's certificate-authority-data and users got reset.

When rolling back to 1.18 I was using K3S_TOKEN="K10redacted::server:redacted", which is the same token the agents were originally using. The redacted part after K10 differs from what sudo cat /var/lib/rancher/k3s/server/node-token shows on the server, but the last redacted part is the same as what K3S_TOKEN was set to.

rlabrecque · Sep 01 '21

@rlabrecque grab the token off the first server you upgraded, and add it as --token=<TOKEN> to the INSTALL_K3S_EXEC string.

The logic behind this change is explained in the advisory: https://github.com/k3s-io/k3s/security/advisories/GHSA-cxm9-4m6p-24mc - essentially, all your servers were previously using an empty string as the datastore encryption token; now they properly use the first server's token - which means you need to provide the token when adding new servers.
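
In install-script terms that looks roughly like the following (a sketch only; <TOKEN> is whatever /var/lib/rancher/k3s/server/token contains on the upgraded first server, and the other values are unchanged from the command above):

curl -sfL https://get.k3s.io | \
            INSTALL_K3S_CHANNEL="v1.21" \
            K3S_DATASTORE_CAFILE="/srv/rds-combined-ca-bundle.pem" \
            K3S_DATASTORE_ENDPOINT="postgres://{{ sceptre_user_data.db_user }}:{{ sceptre_user_data.db_pass }}@redacted.cu0h45m70q9e.us-west-2.rds.amazonaws.com:5432/{{ sceptre_user_data.db_name }}" \
            INSTALL_K3S_EXEC="--token=<TOKEN> --tls-san k3s-server.internal.{{ sceptre_user_data.domain_name }} --disable traefik --node-taint k3s-controlplane=true:NoExecute" \
            sh -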

brandond · Sep 01 '21

These servers were running on EC2 spot instances with autoscaling and no persistence; they have been restarting every month or two, so I don't really have that option. I believe the token I originally grabbed was likely not from the first master, and I may have also grabbed the one from "/var/lib/rancher/k3s/server/token" instead of node-token, like someone above.

But I think I'm in a good spot now: rolling back to 1.18 effectively let me regenerate my kubeconfig + tokens; then I'll roll forward with this token to 1.21 and re-set up my users. 🙏

rlabrecque · Sep 01 '21

We put a warning about this in the SA, but I'm concerned folks didn't see it :/

If servers are in an auto-scaling group, ensure that the server image is updated to include the token value before upgrading. If existing nodes are upgraded and then subsequently deleted before an administrator retrieves the randomly-generated token, there will be no nodes left from which to recover the token.

If you'd set the token from the get-go you would have been fine; this only affects folks who let the first server auto-generate a token. We should have been enforcing use of a token from the start.
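
Setting the token explicitly on the very first server looks roughly like this (a sketch; K3S_TOKEN is the same variable used elsewhere in this thread, and the host and credentials are placeholders):

curl -sfL https://get.k3s.io | K3S_TOKEN="<your-chosen-secret>" sh -s - server --datastore-endpoint="mysql://user:pass@tcp(dbhost:3312)/kubernetes"

Every server and agent added later then joins with that same token.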

brandond · Sep 01 '21

Eh, I'm pretty sure my launch options came straight from the documentation ~2 years ago. Token handling is still very unclear (and contradictory) in k3s today; for example, this section of the live documentation still doesn't show tokens being set on the server: https://github.com/rancher/docs/blob/master/content/k3s/latest/en/installation/ha/_index.md#2-launch-server-nodes And this thread has a lot of shared confusion: https://github.com/k3s-io/k3s/discussions/3443

I definitely didn't see the SA, but ultimately I would have preferred a clear fatal error telling me that the token isn't being explicitly set, with instructions on how to fix that. I might have been able to recover easily if I had seen that instead of the rather cryptic error message in the OP. (I probably flailed around a bit trying to fix things, which definitely caused me to lose the new token.)

With how channels work there's also not really a clear "before upgrading" either; we hadn't really touched this cluster for the better part of a year, so this was a little surprising!

That being said, I am all back up and running now. Downgrading to 1.18, grabbing the new token from there, recreating users, service accounts, and cluster bindings, and re-downloading the kubeconfig was all I really needed to do in the end. Hopefully this discussion gets enough info out for the next person; even having the advisory posted here sooner would have helped me out 👍

rlabrecque · Sep 02 '21

I ran into the same issue; thankfully I had an hourly backup of my k3s MySQL database. What I did was remove the newly created row whose name looks like /bootstrap/xxxxxx and import the corresponding /bootstrap/xxxxxx row from the backup.

Then I executed the k3s install command again on my second master with a pre-defined token (--token), and I was good to go!
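
For anyone attempting the same recovery, the general shape is sketched below. It assumes kine's default table name (kine) and a MySQL datastore like the one in this issue; stop k3s on all servers and take a fresh dump before touching anything.

# locate the bootstrap row that was rewritten with the new (mismatched) token
mysql -h dbhost -u user -p kubernetes -e "SELECT id, name FROM kine WHERE name LIKE '/bootstrap/%';"
# delete that row, then re-import the matching /bootstrap/... row from the backup dump
mysql -h dbhost -u user -p kubernetes -e "DELETE FROM kine WHERE id = <new_row_id>;"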

unixfox · Nov 24 '21

This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 180 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.

stale[bot] · May 24 '22

Still relevant

unixfox · May 24 '22

I'm facing this issue on a Rancher install using Docker. After trying to update from 2.6.2 to 2.6.8 I'm getting this:

INFO: Running k3s server --cluster-init --cluster-reset
ERROR:
time="2022-09-05T23:36:37Z" level=warning msg="remove /var/lib/rancher/k3s/agent/etc/k3s-agent-load-balancer.json: no such file or directory"
time="2022-09-05T23:36:37Z" level=warning msg="remove /var/lib/rancher/k3s/agent/etc/k3s-api-server-agent-load-balancer.json: no such file or directory"
time="2022-09-05T23:36:37Z" level=info msg="Starting k3s v1.24.1+k3s1 (0581808f)"
time="2022-09-05T23:36:37Z" level=info msg="Managed etcd cluster bootstrap already complete and initialized"
time="2022-09-05T23:36:37Z" level=info msg="Starting temporary etcd to reconcile with datastore"
{"level":"info","ts":"2022-09-05T23:36:37.057Z","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["http://127.0.0.1:2400"]}
{"level":"info","ts":"2022-09-05T23:36:37.057Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["http://127.0.0.1:2399"]}
{"level":"info","ts":"2022-09-05T23:36:37.057Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.3","git-sha":"Not provided (use ./build instead of go build)","go-version":"go1.18.1","go-os":"linux","go-arch":"amd64","max-cpu-set":4,"max-cpu-available":4,"member-initialized":true,"name":"9b5cfba6ec57-84caf3d7","data-dir":"/var/lib/rancher/k3s/server/db/etcd-tmp","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/rancher/k3s/server/db/etcd-tmp/member","force-new-cluster":true,"heartbeat-interval":"500ms","election-timeout":"5s","initial-election-tick-advance":true,"snapshot-count":100000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["http://127.0.0.1:2400"],"listen-peer-urls":["http://127.0.0.1:2400"],"advertise-client-urls":["http://127.0.0.1:2399"],"listen-client-urls":["http://127.0.0.1:2399"],"listen-metrics-urls":[],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-size-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","auto-compaction-mode":"","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
{"level":"info","ts":"2022-09-05T23:36:37.075Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/rancher/k3s/server/db/etcd-tmp/member/snap/db","took":"18.267989ms"}
{"level":"info","ts":"2022-09-05T23:36:37.561Z","caller":"etcdserver/server.go:508","msg":"recovered v2 store from snapshot","snapshot-index":379204011,"snapshot-size":"77 kB"}
{"level":"info","ts":"2022-09-05T23:36:37.561Z","caller":"etcdserver/server.go:521","msg":"recovered v3 backend from snapshot","backend-size-bytes":38076416,"backend-size":"38 MB","backend-size-in-use-bytes":38060032,"backend-size-in-use":"38 MB"}
{"level":"info","ts":"2022-09-05T23:36:37.740Z","caller":"etcdserver/raft.go:556","msg":"forcing restart member","cluster-id":"cdf818194e3a8c32","local-member-id":"8e9e05c52164694d","commit-index":379226573}
{"level":"info","ts":"2022-09-05T23:36:37.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d switched to configuration voters=(10276657743932975437)"}
{"level":"info","ts":"2022-09-05T23:36:37.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became follower at term 585"}
{"level":"info","ts":"2022-09-05T23:36:37.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 8e9e05c52164694d [peers: [8e9e05c52164694d], term: 585, commit: 379226573, applied: 379204011, lastindex: 379226573, lastterm: 585]"}
{"level":"info","ts":"2022-09-05T23:36:37.743Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.4"}
{"level":"info","ts":"2022-09-05T23:36:37.743Z","caller":"membership/cluster.go:278","msg":"recovered/added member from store","cluster-id":"cdf818194e3a8c32","local-member-id":"8e9e05c52164694d","recovered-remote-peer-id":"8e9e05c52164694d","recovered-remote-peer-urls":["http://localhost:2380"]}
{"level":"info","ts":"2022-09-05T23:36:37.743Z","caller":"membership/cluster.go:287","msg":"set cluster version from store","cluster-version":"3.4"}
{"level":"warn","ts":"2022-09-05T23:36:37.744Z","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2022-09-05T23:36:37.745Z","caller":"mvcc/kvstore.go:345","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":358413191}
{"level":"info","ts":"2022-09-05T23:36:37.774Z","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":358414811}
{"level":"info","ts":"2022-09-05T23:36:37.775Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2022-09-05T23:36:37.776Z","caller":"etcdserver/corrupt.go:46","msg":"starting initial corruption check","local-member-id":"8e9e05c52164694d","timeout":"15s"}
{"level":"info","ts":"2022-09-05T23:36:37.786Z","caller":"etcdserver/corrupt.go:116","msg":"initial corruption checking passed; no corruption","local-member-id":"8e9e05c52164694d"}
{"level":"info","ts":"2022-09-05T23:36:37.786Z","caller":"etcdserver/server.go:842","msg":"starting etcd server","local-member-id":"8e9e05c52164694d","local-server-version":"3.5.3","cluster-id":"cdf818194e3a8c32","cluster-version":"3.4"}
{"level":"info","ts":"2022-09-05T23:36:37.786Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"8e9e05c52164694d","forward-ticks":9,"forward-duration":"4.5s","election-ticks":10,"election-timeout":"5s"}
{"level":"info","ts":"2022-09-05T23:36:37.790Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8e9e05c52164694d","initial-advertise-peer-urls":["http://127.0.0.1:2400"],"listen-peer-urls":["http://127.0.0.1:2400"],"advertise-client-urls":["http://127.0.0.1:2399"],"listen-client-urls":["http://127.0.0.1:2399"],"listen-metrics-urls":[]}
{"level":"info","ts":"2022-09-05T23:36:37.790Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"127.0.0.1:2400"}
{"level":"info","ts":"2022-09-05T23:36:37.790Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"127.0.0.1:2400"}
{"level":"info","ts":"2022-09-05T23:36:37.846Z","caller":"membership/cluster.go:576","msg":"updated cluster version","cluster-id":"cdf818194e3a8c32","local-member-id":"8e9e05c52164694d","from":"3.4","to":"3.5"}
{"level":"info","ts":"2022-09-05T23:36:37.846Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-09-05T23:36:37.848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d switched to configuration voters=(10276657743932975437)"}
{"level":"info","ts":"2022-09-05T23:36:37.848Z","caller":"membership/cluster.go:554","msg":"updated member","cluster-id":"cdf818194e3a8c32","local-member-id":"8e9e05c52164694d","updated-remote-peer-id":"8e9e05c52164694d","updated-remote-peer-urls":["https://172.17.0.2:2380"]}
{"level":"info","ts":"2022-09-05T23:36:41.744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d is starting a new election at term 585"}
{"level":"info","ts":"2022-09-05T23:36:41.744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became pre-candidate at term 585"}
{"level":"info","ts":"2022-09-05T23:36:41.744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d received MsgPreVoteResp from 8e9e05c52164694d at term 585"}
{"level":"info","ts":"2022-09-05T23:36:41.744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became candidate at term 586"}
{"level":"info","ts":"2022-09-05T23:36:41.744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 586"}
{"level":"info","ts":"2022-09-05T23:36:41.744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became leader at term 586"}
{"level":"info","ts":"2022-09-05T23:36:41.744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 586"}
{"level":"info","ts":"2022-09-05T23:36:41.745Z","caller":"etcdserver/server.go:2044","msg":"published local member to cluster through raft","local-member-id":"8e9e05c52164694d","local-member-attributes":"{Name:9b5cfba6ec57-84caf3d7 ClientURLs:[http://127.0.0.1:2399]}","request-path":"/0/members/8e9e05c52164694d/attributes","cluster-id":"cdf818194e3a8c32","publish-timeout":"15s"}
{"level":"info","ts":"2022-09-05T23:36:41.745Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-09-05T23:36:41.745Z","caller":"embed/serve.go:140","msg":"serving client traffic insecurely; this is strongly discouraged!","address":"127.0.0.1:2399"}
time="2022-09-05T23:36:41Z" level=info msg="Defragmenting etcd database"
{"level":"info","ts":"2022-09-05T23:36:41.747Z","caller":"v3rpc/maintenance.go:89","msg":"starting defragment"}
{"level":"info","ts":"2022-09-05T23:36:41.750Z","caller":"backend/backend.go:497","msg":"defragmenting","path":"/var/lib/rancher/k3s/server/db/etcd-tmp/member/snap/db","current-db-size-bytes":38076416,"current-db-size":"38 MB","current-db-size-in-use-bytes":38064128,"current-db-size-in-use":"38 MB"}
{"level":"info","ts":"2022-09-05T23:36:42.111Z","caller":"backend/backend.go:549","msg":"finished defragmenting directory","path":"/var/lib/rancher/k3s/server/db/etcd-tmp/member/snap/db","current-db-size-bytes-diff":0,"current-db-size-bytes":38076416,"current-db-size":"38 MB","current-db-size-in-use-bytes-diff":-4096,"current-db-size-in-use-bytes":38060032,"current-db-size-in-use":"38 MB","took":"363.40597ms"}
{"level":"info","ts":"2022-09-05T23:36:42.111Z","caller":"v3rpc/maintenance.go:95","msg":"finished defragment"}
{"level":"warn","ts":"2022-09-05T23:36:42.111Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-09-05T23:36:41.747Z","time spent":"363.558761ms","remote":"127.0.0.1:58448","response type":"/etcdserverpb.Maintenance/Defragment","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
time="2022-09-05T23:36:42Z" level=info msg="etcd temporary data store connection OK"
time="2022-09-05T23:36:42Z" level=info msg="Reconciling bootstrap data between datastore and disk"
time="2022-09-05T23:36:42Z" level=fatal msg="Failed to reconcile with temporary etcd: bootstrap data already found and encrypted with different token"

The cluster was set up a long time ago and was provisioned from the image provided by Rancher itself, so the only token I have is the one in my backups.
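
If that original token is still known, it may be worth supplying it explicitly during the reset (a sketch only; --cluster-init and --cluster-reset come from the log above and --token is the flag discussed throughout this thread, but whether the Rancher docker entrypoint passes it through is an assumption):

k3s server --cluster-init --cluster-reset --token "<original-token>"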

ghost · Sep 05 '22

This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 180 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.

stale[bot] · Mar 05 '23