infro-core
ArgoCD Connection Failure
I'm trying to troubleshoot the following error:

"error":"failed to connect to argocd: read tcp 10.42.5.46:37408->10.43.17.253:443: read: connection reset by peer"

Infro-core detects the PR creation but seems to have a problem with the ArgoCD connection. From the logs:
{"level":"error","ts":1719065287.3195426,"caller":"infro/executor.go:79","msg":"failed to execute dry runs","owner":"reefland","repo":"k3s-argocd","revision":"e4a071639bae6117bbd304bfac375194cccc0fe2","pullNumber":1598,"error":"failed to connect to argocd: read tcp 10.42.5.46:37408->10.43.17.253:443: read: connection reset by peer","stacktrace":"github.com/infro-io/infro-core/pkg/infro.(*Executor).Comment\n\t/go/pkg/infro/executor.go:79\ngithub.com/infro-io/infro-core/pkg/infro.(*Executor).CommentOnPullRequests\n\t/go/pkg/infro/executor.go:129\ngithub.com/infro-io/infro-core/pkg/infro.(*Executor).Poll\n\t/go/pkg/infro/executor.go:112\ngithub.com/infro-io/infro-core/cmd/poll.NewCommand.func1\n\t/go/cmd/poll/poll.go:33\ngithub.com/spf13/cobra.(*Command).execute\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:983\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:1115\ngithub.com/spf13/cobra.(*Command).Execute\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:1039\nmain.main\n\t/go/cmd/main.go:6\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:271"}
{"level":"info","ts":1719065287.320251,"caller":"infro/executor.go:92","msg":"no diffs, skipping comment","owner":"reefland","repo":"k3s-argocd","revision":"e4a071639bae6117bbd304bfac375194cccc0fe2","pullNumber":1598}
{"level":"info","ts":1719065297.327372,"caller":"github/client.go:79","msg":"finding pull requests","owner":"reefland","updatedSince":"2024-06-22T14:08:07"}
{"level":"error","ts":1719065297.512019,"caller":"infro/executor.go:79","msg":"failed to execute dry runs","owner":"reefland","repo":"k3s-argocd","revision":"e4a071639bae6117bbd304bfac375194cccc0fe2","pullNumber":1598,"error":"failed to connect to argocd: read tcp 10.42.5.46:52140->10.43.17.253:443: read: connection reset by peer","stacktrace":"github.com/infro-io/infro-core/pkg/infro.(*Executor).Comment\n\t/go/pkg/infro/executor.go:79\ngithub.com/infro-io/infro-core/pkg/infro.(*Executor).CommentOnPullRequests\n\t/go/pkg/infro/executor.go:129\ngithub.com/infro-io/infro-core/pkg/infro.(*Executor).Poll\n\t/go/pkg/infro/executor.go:112\ngithub.com/infro-io/infro-core/cmd/poll.NewCommand.func1\n\t/go/cmd/poll/poll.go:33\ngithub.com/spf13/cobra.(*Command).execute\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:983\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:1115\ngithub.com/spf13/cobra.(*Command).Execute\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:1039\nmain.main\n\t/go/cmd/main.go:6\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:271"}
{"level":"info","ts":1719065297.5121255,"caller":"infro/executor.go:92","msg":"no diffs, skipping comment","owner":"reefland","repo":"k3s-argocd","revision":"e4a071639bae6117bbd304bfac375194cccc0fe2","pullNumber":1598}
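Side note on scanning the poller output: the log lines are structured JSON, so the relevant fields can be pulled out directly rather than eyeballing the stacktraces. A small sketch using one of the lines above (stacktrace omitted):

```python
import json

# One of the error log lines from above, with the stacktrace field omitted.
line = ('{"level":"error","ts":1719065287.3195426,"caller":"infro/executor.go:79",'
        '"msg":"failed to execute dry runs","owner":"reefland","repo":"k3s-argocd",'
        '"pullNumber":1598,"error":"failed to connect to argocd: read tcp '
        '10.42.5.46:37408->10.43.17.253:443: read: connection reset by peer"}')

rec = json.loads(line)
print(rec["caller"], "->", rec["error"])
```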
The argocd-cm ConfigMap defines the account used to connect (named github):

accounts.github: apiKey
The argocd-rbac-cm ConfigMap defines the policy for the account:
data:
  policy.csv: |
    p, role:readonly, applications, get, *, allow
    p, role:readonly, projects, get, *, allow
    g, github, role:readonly
  policy.default: ""
  policy.matchMode: glob
  scopes: '[groups]'
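As far as I can tell this RBAC appears sufficient for read-only diffing. A toy re-implementation of the glob-mode lookup (a simplified sketch, not ArgoCD's actual Casbin engine) confirms that github, via role:readonly, gets applications/get and nothing else, with the empty policy.default denying anything unmatched:

```python
from fnmatch import fnmatch

# The policy.csv from argocd-rbac-cm above.
POLICY_CSV = """\
p, role:readonly, applications, get, *, allow
p, role:readonly, projects, get, *, allow
g, github, role:readonly
"""

def parse_policy(csv_text):
    perms, groups = [], {}
    for line in csv_text.strip().splitlines():
        parts = [p.strip() for p in line.split(",")]
        if parts[0] == "p":
            # p, subject, resource, action, object, effect
            perms.append(tuple(parts[1:6]))
        elif parts[0] == "g":
            # g, account, role
            groups.setdefault(parts[1], []).append(parts[2])
    return perms, groups

def is_allowed(account, resource, action, obj, perms, groups, default=""):
    subjects = [account] + groups.get(account, [])
    for subj, res, act, o, effect in perms:
        if (subj in subjects and fnmatch(resource, res)
                and fnmatch(action, act) and fnmatch(obj, o)):
            return effect == "allow"
    return default == "allow"  # policy.default "" denies anything unmatched

perms, groups = parse_policy(POLICY_CSV)
print(is_allowed("github", "applications", "get", "default/my-app", perms, groups))   # True
print(is_allowed("github", "applications", "sync", "default/my-app", perms, groups))  # False
```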
I used Generate Key in the ArgoCD GUI and put the token into the secret. My infro-secret has the following:
owner: reefland
deployers:
- type: argocd
  name: my-cluster
  endpoint: "argocd-server.argocd.svc.cluster.local"
  authtoken: "<redacted>"
vcs:
  type: github
  authtoken: "<redacted>"
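One thing I could rule out: whether the generated key actually belongs to the github account. ArgoCD API keys are JWTs, so the payload can be decoded and the sub claim inspected (the exact claim format varies by ArgoCD version). A sketch using a fabricated token; the payload here is hypothetical, paste the real token instead:

```python
import base64, json

def jwt_payload(token):
    # JWT = header.payload.signature; the payload is base64url without padding.
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Hypothetical token assembled here purely for illustration.
fake_claims = {"iss": "argocd", "sub": "github", "iat": 1719065000}
fake = ("e30."
        + base64.urlsafe_b64encode(json.dumps(fake_claims).encode()).decode().rstrip("=")
        + ".sig")
print(jwt_payload(fake))
```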
The endpoint looks correct; I'm not sure what else to look for.
Environment:
$ k get svc -n argocd
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
argocd-application-controller-metrics ClusterIP 10.43.186.245 <none> 8082/TCP 2y34d
argocd-applicationset-controller ClusterIP 10.43.239.132 <none> 7000/TCP 2y35d
argocd-applicationset-controller-metrics ClusterIP 10.43.182.107 <none> 8080/TCP 2y27d
argocd-notifications-controller-metrics ClusterIP 10.43.249.163 <none> 9001/TCP 2y27d
argocd-repo-server ClusterIP 10.43.160.35 <none> 8081/TCP 2y35d
argocd-repo-server-metrics ClusterIP 10.43.50.19 <none> 8084/TCP 2y34d
argocd-server ClusterIP 10.43.17.253 <none> 80/TCP,443/TCP 2y35d
argocd-server-metrics ClusterIP 10.43.140.248 <none> 8083/TCP 2y34d
$ k get pods -n argocd -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
argocd-application-controller-0 1/1 Running 0 4h42m 10.42.6.246 dldsk01 <none> <none>
argocd-application-controller-1 1/1 Running 0 4h42m 10.42.5.80 k3s06 <none> <none>
argocd-applicationset-controller-7c8b796785-642s9 1/1 Running 0 4h42m 10.42.0.91 k3s01 <none> <none>
argocd-notifications-controller-846f589764-n24s4 1/1 Running 0 4h42m 10.42.0.92 k3s01 <none> <none>
argocd-redis-secret-init-f9tpq 0/1 Completed 0 4h42m 10.42.0.90 k3s01 <none> <none>
argocd-repo-server-6566b54f88-42tdt 1/1 Running 0 4h42m 10.42.0.93 k3s01 <none> <none>
argocd-repo-server-6566b54f88-ls84w 1/1 Running 0 4h42m 10.42.5.79 k3s06 <none> <none>
argocd-server-86d456cd87-lcdbs 1/1 Running 0 4h42m 10.42.6.245 dldsk01 <none> <none>
argocd-server-86d456cd87-znlk6 1/1 Running 0 4h42m 10.42.5.81 k3s06 <none> <none>
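The endpoint in infro-secret has no scheme or port, and the error shows the client dialing port 443. One possible cause of a reset like this is a TLS mismatch: if argocd-server runs with --insecure (plaintext behind the service port), a TLS handshake against it fails with a reset or EOF. A minimal local sketch of that failure mode (just a plain TCP listener that drops connections, nothing ArgoCD-specific):

```python
import socket, ssl, threading

def plaintext_server(port_holder, ready):
    # A listener that speaks no TLS and drops connections immediately,
    # mimicking a peer that resets during the handshake.
    srv = socket.create_server(("127.0.0.1", 0))
    port_holder.append(srv.getsockname()[1])
    ready.set()
    conn, _ = srv.accept()
    conn.close()
    srv.close()

port_holder, ready = [], threading.Event()
threading.Thread(target=plaintext_server, args=(port_holder, ready), daemon=True).start()
ready.wait()

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
try:
    with socket.create_connection(("127.0.0.1", port_holder[0])) as raw:
        with ctx.wrap_socket(raw):
            pass
    result = "handshake succeeded"
except OSError as exc:  # covers ConnectionResetError and ssl.SSLError
    result = f"handshake failed: {exc.__class__.__name__}"
print(result)
```

If the real server behaves like this, checking the argocd-server Deployment args for --insecure (or pointing the endpoint at port 80 / adjusting the client's TLS settings) would be the next step.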