argo-rollouts
kubectl argo rollouts dashboard OOM kill
Summary
What happened/what you expected to happen?
Running kubectl argo rollouts dashboard leads to high memory usage. Before long, the pod is OOM-killed.
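A minimal way to observe the growth (a sketch; the dashboard pod name is taken from the transcript further below, and the namespace is an assumption since the report does not state it):

# Watch the dashboard pod's memory usage every 10s until it is OOM-killed.
# NOTE: "argo-rollouts" is an assumed namespace; substitute the one the dashboard pod actually runs in.
watch -n 10 kubectl top pod argo-rollouts-dashboard-5cf947cdc9-ltkmn -n argo-rollouts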
Diagnostics
kubectl-argo-rollouts: v1.2.1+51c874c
BuildDate: 2022-05-13T20:40:45Z
GitCommit: 51c874cb18e6adccf677766ac561c3dbf69a8ec1
GitTreeState: clean
GoVersion: go1.17.6
Compiler: gc
Platform: linux/amd64
Message from the maintainers:
Impacted by this bug? Give it a 👍. We prioritize the issues with the most 👍.
@kzcPo Could you provide steps to reproduce? How many rollouts do you have, and can you share the manifest for the rollout?
- I run the command kubectl argo rollouts dashboard.
- My program calls the API /api/v1/rollouts/online/info every 10s (a minimal polling sketch follows the transcript below).
- Through the dashboard you can see that the response time of this endpoint is about 3s.
- At this point, the pod's CPU and memory usage keep growing until the pod is OOM-killed.

Running the equivalent query from the CLI also takes about 3s:
[root@argo-rollouts-dashboard-5cf947cdc9-ltkmn /]# time kubectl argo rollouts get rollout eff-noahbe -n online
Name: eff-noahbe
Namespace: online
Status: ✔ Healthy
Strategy: BlueGreen
Images: xxxx.com/taqu/eff-noahbe:online_202206141748_release_v2 (stable, active)
Replicas:
  Desired:    1
  Current:    1
  Updated:    1
  Ready:      1
  Available:  1
NAME KIND STATUS AGE INFO
⟳ eff-noahbe Rollout ✔ Healthy 14d
├──# revision:7
│ └──⧉ eff-noahbe-79dc855c76 ReplicaSet ✔ Healthy 15h stable,active
│ └──□ eff-noahbe-79dc855c76-rbqs5 Pod ✔ Running 15h ready:1/1
├──# revision:6
│ └──⧉ eff-noahbe-7b7fbc6cf6 ReplicaSet • ScaledDown 4d18h
├──# revision:5
│ └──⧉ eff-noahbe-74675b9cff ReplicaSet • ScaledDown 4d22h
├──# revision:4
│ └──⧉ eff-noahbe-5ccfdd6d88 ReplicaSet • ScaledDown 5d18h
├──# revision:3
│ └──⧉ eff-noahbe-7fb8d7ffbd ReplicaSet • ScaledDown 7d22h
├──# revision:2
│ └──⧉ eff-noahbe-bbdd56bf ReplicaSet • ScaledDown 12d
└──# revision:1
└──⧉ eff-noahbe-59b98696c7 ReplicaSet • ScaledDown 14d
real 0m2.956s
user 0m3.185s
sys 0m0.743s
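For reference, the 10-second polling described above can be reproduced with a loop like the one below. This is only a sketch, not the reporter's actual client; it assumes the dashboard is reachable on its default local port 3100 after running kubectl argo rollouts dashboard, and it reuses the namespace and endpoint from this report.

# Poll the dashboard API every 10s and print how long each request takes.
# Assumes the dashboard is listening on its default local port 3100.
while true; do
  curl -s -o /dev/null -w "%{time_total}s\n" http://localhost:3100/api/v1/rollouts/online/info
  sleep 10
done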
Is there an API server my program could call directly? And can this query be optimized to reduce its latency?
This issue is stale because it has been open 60 days with no activity.
Hello, guys! Are there any updates?
Hello! Are there any updates? I have also observed similar behavior, with the Argo Rollouts dashboard being OOM-killed due to steadily rising memory usage. We suspect there is a memory leak.