cloud-provider-alibaba-cloud
apply model error: get load balancer attribute from cloud
What happened:
After deploying aliyun-cloud-provider as a DaemonSet and starting the gateway, the controller began reporting the following errors:
E0523 04:20:05.426765 1 service_controller.go:229] service-controller "msg"="reconcile loadbalancer failed" "error"="apply model error: get load balancer attribute from cloud, error: Post "http://slb-vpc.cn-shanghai.aliyuncs.com/?AccessKeyId=xxx&Action=DescribeLoadBalancerAttribute&Format=JSON&LoadBalancerId=lb-uf6ouqwyqqcp9mlfznhdh&RegionId=cn-shanghai&SecurityToken=&Signature=6OQUclFmrYATKmmMkVNukMxZWsA%3D&SignatureMethod=HMAC-SHA1&SignatureNonce=e5bd1d352eb62b8c8de2f202cd7b4b04&SignatureType=&SignatureVersion=1.0&Timestamp=2022-05-23T04%3A20%3A00Z&Version=2014-05-15": dial tcp: i/o timeout" "service"="kubesphere-controls-system/kubesphere-router-kubesphere-system"
E0523 04:20:05.426841 1 controller.go:317] controller/service-controller "msg"="Reconciler error" "error"="apply model error: get load balancer attribute from cloud, error: Post "http://slb-vpc.cn-shanghai.aliyuncs.com/?AccessKeyId=xxx&Action=DescribeLoadBalancerAttribute&Format=JSON&LoadBalancerId=lb-uf6ouqwyqqcp9mlfznhdh&RegionId=cn-shanghai&SecurityToken=&Signature=6OQUclFmrYATKmmMkVNukMxZWsA%3D&SignatureMethod=HMAC-SHA1&SignatureNonce=e5bd1d352eb62b8c8de2f202cd7b4b04&SignatureType=&SignatureVersion=1.0&Timestamp=2022-05-23T04%3A20%3A00Z&Version=2014-05-15": dial tcp: i/o timeout" "name"="kubesphere-router-kubesphere-system" "namespace"="kubesphere-controls-system"
I then copied the request URL and issued it directly from an ECS instance, and it completed without any problem.
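Since the same request succeeds from the ECS host but times out inside the controller, the failure looks like a pod-network or DNS issue rather than a credentials issue. The minimal Go sketch below (the probe, its 10-second timeout, and the output messages are my own illustration, not part of the cloud provider) can be run from within the controller pod's network namespace to tell the two apart: if the endpoint is reachable, even an unsigned request gets an HTTP response back from the API, while a repeat of "dial tcp: i/o timeout" points at name resolution or routing from the pod.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Same order of magnitude as the timeout the controller hits.
	client := &http.Client{Timeout: 10 * time.Second}

	// Unsigned probe: a 4xx/5xx HTTP response still proves that DNS and the
	// TCP path work; only a transport error reproduces the reported failure.
	resp, err := client.Get("http://slb-vpc.cn-shanghai.aliyuncs.com/")
	if err != nil {
		fmt.Println("endpoint unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("endpoint reachable, HTTP status:", resp.Status)
}
```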
What you expected to happen:
The gateway should be able to use Alibaba Cloud SLB normally.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to the /close not-planned comment above.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.