got no controller addresses from controller

Open BrandonIngalls opened this issue 4 years ago • 5 comments

Describe the bug

Boundary spams an error message to the logs every second or so when I combine worker/controller config into a single .hcl file.

Larger log output

{
  "id": "LDFrvD3obF",
  "source": "https://hashicorp.com/boundary/m01/m01",
  "specversion": "1.0",
  "type": "error",
  "data": {
    "error": "got no controller addresses from controller; possibly prior to first status save, not persisting",
    "error_fields": {},
    "id": "e_bJq2IPRwyG",
    "version": "v0.1",
    "op": "worker.(Worker).sendWorkerStatus"
  },
  "datacontentype": "application/cloudevents",
  "time": "2021-10-15T03:44:19.601038926Z"
}

Boundary functions just fine though; it's just annoying to have it constantly spamming this error message.

To Reproduce

Steps to reproduce the behavior: TBD

Expected behavior

Boundary to not continue spamming the same error message.

Additional context

  • I'm using publicly available IPs/domains in my actual config, but I have redacted the full domains here, e.g. https://vault.example.com:8200 => https://vault:8200
  • Example config main.hcl

BrandonIngalls avatar Oct 15 '21 03:10 BrandonIngalls

Thanks for opening this @BrandonIngalls - I'm going to work with the engineering team to repro.

malnick avatar Oct 21 '21 20:10 malnick

@malnick

I was able to reproduce the issue from scratch inside of a docker-compose example.

# make a working directory to test
[~]$ mkdir /tmp/issue

# Navigate to this folder
[~]$ cd /tmp/issue

# Create these two files
[~]$ cat << 'EOF' | base64 -d > boundary.hcl
ZGlzYWJsZV9tbG9jayA9IHRydWUKCmNvbnRyb2xsZXIgewogIG5hbWUgPSAibTAxIgogIGRlc2Ny
aXB0aW9uID0gInByaW1hcnkgY29udHJvbGxlciIKICBkYXRhYmFzZSB7CiAgICB1cmwgPSAicG9z
dGdyZXNxbDovL2JvdW5kYXJ5OnBhc3N3b3JkQGRiOjU0MzIvYm91bmRhcnk/c3NsbW9kZT1kaXNh
YmxlIgogIH0KfQoKbGlzdGVuZXIgInRjcCIgewogIHB1cnBvc2UgPSAiYXBpIgogIHRsc19kaXNh
YmxlID0gdHJ1ZQp9CgpsaXN0ZW5lciAidGNwIiB7CiAgcHVycG9zZSA9ICJjbHVzdGVyIgp9Cgp3
b3JrZXIgewogIG5hbWUgPSAibTAxIgogIGRlc2NyaXB0aW9uID0gIm1haW4gd29ya2VyIgp9Cgps
aXN0ZW5lciAidGNwIiB7CiAgcHVycG9zZSA9ICJwcm94eSIKfQoKa21zICJhZWFkIiB7CiAgICBw
dXJwb3NlID0gIndvcmtlci1hdXRoIgogICAgYWVhZF90eXBlID0gImFlcy1nY20iCiAgICBrZXlf
aWQgPSAid29ya2VyLWF1dGgiCiAgICBrZXkgPSAiQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFB
QUFBQUFBQUFBQUFBQUFBQT0iCn0KCmttcyAiYWVhZCIgewogICAgcHVycG9zZSA9ICJyb290Igog
ICAgYWVhZF90eXBlID0gImFlcy1nY20iCiAgICBrZXlfaWQgPSAicm9vdCIKICAgIGtleSA9ICJB
QUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBPSIKfQoKa21zICJhZWFk
IiB7CiAgICBwdXJwb3NlID0gInJlY292ZXJ5IgogICAgYWVhZF90eXBlID0gImFlcy1nY20iCiAg
ICBrZXlfaWQgPSAicmVjb3ZlcnkiCiAgICBrZXkgPSAiQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFB
QUFBQUFBQUFBQUFBQUFBQUFBQT0iCn0KCmttcyAiYWVhZCIgewogICAgcHVycG9zZSA9ICJjb25m
aWciCiAgICBhZWFkX3R5cGUgPSAiYWVzLWdjbSIKICAgIGtleV9pZCA9ICJjb25maWciCiAgICBr
ZXkgPSAiQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQT0iCn0K
EOF
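
For reference, the boundary.hcl blob above decodes to:

disable_mlock = true

controller {
  name = "m01"
  description = "primary controller"
  database {
    url = "postgresql://boundary:password@db:5432/boundary?sslmode=disable"
  }
}

listener "tcp" {
  purpose = "api"
  tls_disable = true
}

listener "tcp" {
  purpose = "cluster"
}

worker {
  name = "m01"
  description = "main worker"
}

listener "tcp" {
  purpose = "proxy"
}

kms "aead" {
    purpose = "worker-auth"
    aead_type = "aes-gcm"
    key_id = "worker-auth"
    key = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="
}

kms "aead" {
    purpose = "root"
    aead_type = "aes-gcm"
    key_id = "root"
    key = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="
}

kms "aead" {
    purpose = "recovery"
    aead_type = "aes-gcm"
    key_id = "recovery"
    key = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="
}

kms "aead" {
    purpose = "config"
    aead_type = "aes-gcm"
    key_id = "config"
    key = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="
}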

[~]$ cat << 'EOF' | base64 -d > docker-compose.yml
LS0tCnZlcnNpb246ICIzLjgiCgpzZXJ2aWNlczoKICBkYjoKICAgIGltYWdlOiBwb3N0Z3Jlczox
MQogICAgZW52aXJvbm1lbnQ6CiAgICAtIFBPU1RHUkVTX1VTRVI9Ym91bmRhcnkKICAgIC0gUE9T
VEdSRVNfUEFTU1dPUkQ9cGFzc3dvcmQKICAgIC0gUE9TVEdSRVNfREI9Ym91bmRhcnkKICAgIHZv
bHVtZXM6CiAgICAtIGRiOi92YXIvbGliL3Bvc3RncmVzcWwvZGF0YTpydwoKICBib3VuZGFyeToK
ICAgIGltYWdlOiBoYXNoaWNvcnAvYm91bmRhcnk6MC42LjIKICAgIGNvbW1hbmQ6IGJvdW5kYXJ5
IHNlcnZlciAtY29uZmlnIC9ib3VuZGFyeS5oY2wKICAgIHZvbHVtZXM6CiAgICAtIC4vYm91bmRh
cnkuaGNsOi9ib3VuZGFyeS5oY2w6cm8KCnZvbHVtZXM6CiAgZGI6IH4K
EOF
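
Likewise, the docker-compose.yml blob decodes to:

---
version: "3.8"

services:
  db:
    image: postgres:11
    environment:
    - POSTGRES_USER=boundary
    - POSTGRES_PASSWORD=password
    - POSTGRES_DB=boundary
    volumes:
    - db:/var/lib/postgresql/data:rw

  boundary:
    image: hashicorp/boundary:0.6.2
    command: boundary server -config /boundary.hcl
    volumes:
    - ./boundary.hcl:/boundary.hcl:ro

volumes:
  db: ~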

# Validate the contents of the two newly created files
[~]$ cat *

# Start the database
[~]$ docker-compose up -d db
Creating network "issue_default" with the default driver
Creating volume "issue_db" with default driver
Creating issue_db_1 ... done

# Init boundary's database
[~]$ docker-compose run --rm boundary database init -config /boundary.hcl
Creating issue_boundary_run ... done
Migrations successfully run.
Global-scope KMS keys successfully created.
...

# Start boundary and watch the errors flow
[~]$ docker-compose up boundary
Creating issue_boundary_1 ... done
Attaching to issue_boundary_1
boundary_1  | ==> Boundary server configuration:
boundary_1  | 
boundary_1  |               [Config] AEAD Type: aes-gcm
boundary_1  |             [Recovery] AEAD Type: aes-gcm
boundary_1  |                 [Root] AEAD Type: aes-gcm
boundary_1  |          [Worker-Auth] AEAD Type: aes-gcm
boundary_1  |                              Cgo: disabled
boundary_1  |   Controller Public Cluster Addr: 127.0.0.1:9201
boundary_1  |                       Listener 1: tcp (addr: "127.0.0.1:9200", cors_allowed_headers: "[]", cors_allowed_origins: "[*]", cors_enabled: "true", max_request_duration: "1m30s", purpose: "api")
boundary_1  |                       Listener 2: tcp (addr: "127.0.0.1:9201", max_request_duration: "1m30s", purpose: "cluster")
boundary_1  |                       Listener 3: tcp (addr: "127.0.0.1:9202", max_request_duration: "1m30s", purpose: "proxy")
boundary_1  |                        Log Level: info
boundary_1  |                            Mlock: supported: true, enabled: false
boundary_1  |                          Version: Boundary v0.6.2
boundary_1  |                      Version Sha: 07c5c00f557ccc6d58ac065fa6c267f576860ac2
boundary_1  |         Worker Public Proxy Addr: 127.0.0.1:9202
boundary_1  | 
boundary_1  | ==> Boundary server started! Log data will stream in below:
boundary_1  | 
boundary_1  | {"id":"S9LjiNiDVw","source":"https://hashicorp.com/boundary/m01/m01","specversion":"1.0","type":"system","data":{"version":"v0.1","op":"worker.(Worker).createClientConn","data":{"address":"127.0.0.1:9201","msg":"connected to controller"}},"datacontentype":"application/cloudevents","time":"2021-11-05T01:32:39.546680966Z"}
boundary_1  | {"id":"PO2XuqaEaE","source":"https://hashicorp.com/boundary/m01/m01","specversion":"1.0","type":"system","data":{"version":"v0.1","op":"controller.(interceptingListener).Accept","data":{"msg":"worker successfully authed","name":"m01"}},"datacontentype":"application/cloudevents","time":"2021-11-05T01:32:39.551215141Z"}

BrandonIngalls avatar Nov 05 '21 01:11 BrandonIngalls

Sorry for the tardy response here @BrandonIngalls - since we've made several releases of Boundary since I last checked on this, I'm wondering what the most recent state is here?

malnick avatar Apr 25 '22 16:04 malnick

I followed the steps I detailed in https://github.com/hashicorp/boundary/issues/1603#issuecomment-961560891 but updated the docker image for boundary to 0.7.6.
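
For clarity, that is a one-line change to the docker-compose.yml from my earlier comment:

   boundary:
-    image: hashicorp/boundary:0.6.2
+    image: hashicorp/boundary:0.7.6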

[/tmp/issue]$ docker-compose up boundary
Creating issue_boundary_1 ... done
Attaching to issue_boundary_1
boundary_1  | Couldn't start Boundary with IPC_LOCK. Disabling IPC_LOCK, please use --privileged or --cap-add IPC_LOCK
boundary_1  | ==> Boundary server configuration:
boundary_1  | 
boundary_1  |               [Config] AEAD Type: aes-gcm
boundary_1  |             [Recovery] AEAD Type: aes-gcm
boundary_1  |                 [Root] AEAD Type: aes-gcm
boundary_1  |          [Worker-Auth] AEAD Type: aes-gcm
boundary_1  |                              Cgo: disabled
boundary_1  |   Controller Public Cluster Addr: 127.0.0.1:9201
boundary_1  |                       Listener 1: tcp (addr: "127.0.0.1:9200", cors_allowed_headers: "[]", cors_allowed_origins: "[*]", cors_enabled: "true", max_request_duration: "1m30s", purpose: "api")
boundary_1  |                       Listener 2: tcp (addr: "127.0.0.1:9201", max_request_duration: "1m30s", purpose: "cluster")
boundary_1  |                       Listener 3: tcp (addr: "127.0.0.1:9202", max_request_duration: "1m30s", purpose: "proxy")
boundary_1  |                        Log Level: info
boundary_1  |                            Mlock: supported: true, enabled: false
boundary_1  |                          Version: Boundary v0.7.6
boundary_1  |                      Version Sha: 0ffa45c5c987b65d01f9f644790ecc761867c2b6
boundary_1  |         Worker Public Proxy Addr: 127.0.0.1:9202
boundary_1  | 
boundary_1  | ==> Boundary server started! Log data will stream in below:
...
boundary_1  | {"id":"XBWkXc4pkv","source":"https://hashicorp.com/boundary/m01/m01","specversion":"1.0","type":"error","data":{"error":"got no controller addresses from controller; possibly prior to first status save, not persisting","error_fields":{},"id":"e_EbVmL9tr6e","version":"v0.1","op":"worker.(Worker).sendWorkerStatus"},"datacontentype":"application/cloudevents","time":"2022-04-26T00:24:34.292481187Z"}
boundary_1  | {"id":"PD00VnGN5c","source":"https://hashicorp.com/boundary/m01/m01","specversion":"1.0","type":"error","data":{"error":"got no controller addresses from controller; possibly prior to first status save, not persisting","error_fields":{},"id":"e_kI0lIoddZN","version":"v0.1","op":"worker.(Worker).sendWorkerStatus"},"datacontentype":"application/cloudevents","time":"2022-04-26T00:24:36.602393536Z"}
boundary_1  | {"id":"nqCDQH4Hot","source":"https://hashicorp.com/boundary/m01/m01","specversion":"1.0","type":"error","data":{"error":"got no controller addresses from controller; possibly prior to first status save, not persisting","error_fields":{},"id":"e_rQvGasg9vQ","version":"v0.1","op":"worker.(Worker).sendWorkerStatus"},"datacontentype":"application/cloudevents","time":"2022-04-26T00:24:38.891275625Z"}
...

So it looks like I'm still having the same issue.

BrandonIngalls avatar Apr 26 '22 00:04 BrandonIngalls

I believe the issue is due to the worker and controller being configured with the same name. Even though they run in the same process, I think they need unique names. The errors go away if I change the config as follows:

diff --git a/boundary.hcl b/boundary.hcl
index 78aadec..5cc3f5e 100644
--- a/boundary.hcl
+++ b/boundary.hcl
@@ -22,7 +22,7 @@ listener "tcp" {
 }

 worker {
-  name        = "m01"
+  name        = "w01"
   description = "main worker"
 }

tmessi avatar May 25 '22 18:05 tmessi

@BrandonIngalls did this fix your issue? If so we will close this ticket.

covetocove avatar Dec 01 '22 21:12 covetocove

> @BrandonIngalls did this fix your issue? If so we will close this ticket.

It did, but instead of just closing the ticket, would it make sense to add some sort of sanity check when Boundary loads the config that gives the user a clear warning when the two names match?
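
For illustration, here is a minimal sketch of what such a load-time check could look like; the function and parameter names are hypothetical, not Boundary's actual config API:

package config

import "fmt"

// checkUniqueNames is a hypothetical sketch of a load-time sanity check:
// when a single config file defines both a controller and a worker, fail
// loudly if the two share a name. The names here are illustrative and do
// not correspond to Boundary's real config structs.
func checkUniqueNames(controllerName, workerName string) error {
	if controllerName != "" && controllerName == workerName {
		return fmt.Errorf(
			"controller and worker are both named %q; give them unique names even when they run in the same process",
			controllerName)
	}
	return nil
}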

BrandonIngalls avatar Dec 02 '22 01:12 BrandonIngalls