Old pods are not terminated on a new revision when the autoscaling class is hpa
What version of Knative?
v1.17
Expected Behavior
I have a ksvc that uses hpa as its autoscaling class. After changes are made to the ksvc, a new revision is generated; I'd expect the pods of the old revision to be terminated once the new revision has started.
Originally, I thought it was a problem only when switching from hpa to kpa as the autoscaling class, but now I realize it is a problem whenever hpa is the autoscaling class.
Here is my hpa-to-kpa switching case. The original ksvc template has the annotation
autoscaling.knative.dev/class: "hpa.autoscaling.knative.dev"
I updated the ksvc to change the annotation to
autoscaling.knative.dev/class: "kpa.autoscaling.knative.dev"
A new revision (00002) is created with kpa; that's good. I'd expect the pods created by revision 00001 to be terminated after the 00002 pods are fully started.
Actual Behavior
But those 00001 pods stay there.
This actually happens to every revision when hpa is the autoscaling class. :(
It works fine with kpa, though: if I switch from kpa to hpa, or make changes while kpa is the autoscaling class, the pods from the old revision are terminated right after the pods from the new revision are running.
Steps to Reproduce the Problem
Apply the following ksvc. After the pods are running, edit the ksvc and either change hpa to kpa or add an annotation (e.g. "test": "trigger") to spec.template.metadata.annotations, as shown in the sketch after the manifest.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: scaler
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/class: "hpa.autoscaling.knative.dev"
    spec:
      containers:
        - image: ghcr.io/knative/helloworld-go:latest
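To trigger a new revision without switching classes, a sketch of the same manifest with an arbitrary annotation added (the test: "trigger" value is just the example from the step above):

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: scaler
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/class: "hpa.autoscaling.knative.dev"
        # Any new template annotation forces a new revision; the 00001 pods
        # should then be terminated, but with hpa they linger
        test: "trigger"
    spec:
      containers:
        - image: ghcr.io/knative/helloworld-go:latest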
It works fine if kpa is the autoscaling class.