Error when creating Knative Service: Admission webhook denied request due to metadata.name change validation
Description: I encountered an issue when trying to create a new Knative Service. Even though the service is newly created, I receive an error related to the validation of the metadata.name in spec.template. The error suggests that the service is attempting a change without a corresponding name update, which is unexpected for a newly created resource.
Error Message:
Failed to create Knative Service: admission webhook "validation.webhook.serving.knative.dev" denied the request: validation failed: Saw the following changes without a name change (-old +new): spec.template.metadata.name...
Expected Behavior: As this is a new service creation, there should be no conflict or validation errors on metadata.name in spec.template.
Steps to Reproduce:
The Service is created via `controllerutil.CreateOrUpdate` with a mutate function (`ksb.MutateKnServiceFn`), which may touch `spec.template` metadata:

```go
import (
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

if _, err := controllerutil.CreateOrUpdate(ctx, ksb.KnativeClient, knservice, ksb.MutateKnServiceFn(knservice, sapp)); err != nil {
	// If this is not a resource-version conflict error, record an event and log it
	if !apierrors.IsConflict(err) {
		ksb.EventRecorder.RecordEventf(sapp, corev1.EventTypeWarning, util.EventReasonFailedCreateKsvc, util.FailedCreateKsvcMsg, err.Error())
		klog.Errorf("Failed to create or update knservice for app: %s, error: %v", sapp.Name, err)
	}
	// Return the error to trigger the retry mechanism
	return err
}
```

Environment:
Knative version: v1.11.4
Kubernetes version: v1.25.14
Client tools or libraries:
Any insight into this validation behavior or recommended configurations to avoid this error would be helpful. Thanks!
Hi @helloxjade, the Knative version referenced is not supported by the community any more. Could you try with some later version? I am not sure I understand the steps to reproduce this, could you clarify this?
Btw, the error above usually happens when you update a Knative Service that has a user-defined revision name, as follows, but forget to update the spec.template.metadata.name field as well:
```yaml
kind: Service
metadata:
  name: helloworld-go
spec:
  template:
    metadata:
      name: helloworld-go-1
      labels:
        app: helloworld-go
```
If I update the above, e.g. add a new label `test`, and leave the name as `helloworld-go-1`, this will fail:
```
admission webhook "validation.webhook.serving.knative.dev" denied the request: validation failed: Saw the following changes without a name change (-old +new): spec.template.metadata.name
{*v1.RevisionTemplateSpec}.ObjectMeta.Labels["test"]:
	+: "test"
```
The reason is that in BYO (bring-your-own) name mode, revision names are left for the user to manage (no auto-generation). Could you verify that the resource is actually new and there are no relics around?
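To update a service in that BYO-name mode without tripping the webhook, the revision name has to change together with the template. A sketch of the corrected update (names are illustrative):

```yaml
kind: Service
metadata:
  name: helloworld-go
spec:
  template:
    metadata:
      # Bump the revision name whenever spec.template changes (BYO mode)
      name: helloworld-go-2
      labels:
        app: helloworld-go
        test: "test"
```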
Here is the error log I'm encountering:
```
1.7327580730319684e+09	DEBUG	events	Knative service Failed to create: admission webhook "validation.webhook.serving.knative.dev" denied the request: validation failed: Saw the following changes without a name change (-old +new): spec.template.metadata.name
{*v1.RevisionTemplateSpec}.Spec.PodSpec.Containers[0].ReadinessProbe:
	-: "&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 0 },Host:,},GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:0,PeriodSeconds:0,SuccessThreshold:1,FailureThreshold:0,TerminationGracePeriodSeconds:nil,}"
	+: "nil"
{*v1.RevisionTemplateSpec}.Spec.PodSpec.EnableServiceLinks:
	-: "0xc000c648ea"
	+: "<nil>"
```
The issue seems to be related to the logic in the applyDefault function in the revision_defaults.go file. Here’s what happens in the code:
```go
func (rs *RevisionSpec) applyDefault(ctx context.Context, container *corev1.Container, cfg *config.Config) {
	if container.Resources.Requests == nil {
		container.Resources.Requests = corev1.ResourceList{}
	}
	if container.Resources.Limits == nil {
		container.Resources.Limits = corev1.ResourceList{}
	}

	for _, r := range []struct {
		Name    corev1.ResourceName
		Request *resource.Quantity
		Limit   *resource.Quantity
	}{{
		Name:    corev1.ResourceCPU,
		Request: cfg.Defaults.RevisionCPURequest,
		Limit:   cfg.Defaults.RevisionCPULimit,
	}, {
		Name:    corev1.ResourceMemory,
		Request: cfg.Defaults.RevisionMemoryRequest,
		Limit:   cfg.Defaults.RevisionMemoryLimit,
	}, {
		Name:    corev1.ResourceEphemeralStorage,
		Request: cfg.Defaults.RevisionEphemeralStorageRequest,
		Limit:   cfg.Defaults.RevisionEphemeralStorageLimit,
	}} {
		if _, ok := container.Resources.Requests[r.Name]; !ok && r.Request != nil {
			container.Resources.Requests[r.Name] = *r.Request
		}
		if _, ok := container.Resources.Limits[r.Name]; !ok && r.Limit != nil {
			container.Resources.Limits[r.Name] = *r.Limit
		}
	}

	// If there are multiple containers then default probes will be applied to the container where user specified PORT
	// default probes will not be applied for non serving containers
	if len(rs.PodSpec.Containers) == 1 || len(container.Ports) != 0 {
		rs.applyProbes(container)
	}

	if rs.PodSpec.EnableServiceLinks == nil && apis.IsInCreate(ctx) {
		rs.PodSpec.EnableServiceLinks = cfg.Defaults.EnableServiceLinks
	}

	vNames := make(sets.String)
	for _, v := range rs.PodSpec.Volumes {
		if v.EmptyDir != nil || v.PersistentVolumeClaim != nil {
			vNames.Insert(v.Name)
		}
	}
	vms := container.VolumeMounts
	for i := range vms {
		if !vNames.Has(vms[i].Name) {
			vms[i].ReadOnly = true
		}
	}
}

func (*RevisionSpec) applyProbes(container *corev1.Container) {
	if container.ReadinessProbe == nil {
		container.ReadinessProbe = &corev1.Probe{}
	}
	if container.ReadinessProbe.TCPSocket == nil &&
		container.ReadinessProbe.HTTPGet == nil &&
		container.ReadinessProbe.Exec == nil {
		container.ReadinessProbe.TCPSocket = &corev1.TCPSocketAction{}
	}
	if container.ReadinessProbe.SuccessThreshold == 0 {
		container.ReadinessProbe.SuccessThreshold = 1
	}
	// Apply k8s defaults when ReadinessProbe.PeriodSeconds property is set
	if container.ReadinessProbe.PeriodSeconds != 0 {
		if container.ReadinessProbe.FailureThreshold == 0 {
			container.ReadinessProbe.FailureThreshold = 3
		}
		if container.ReadinessProbe.TimeoutSeconds == 0 {
			container.ReadinessProbe.TimeoutSeconds = 1
		}
	}
}
```
I don't set ReadinessProbe or EnableServiceLinks when creating the Knative Service.
This issue is stale because it has been open for 90 days with no
activity. It will automatically close after 30 more days of
inactivity. Reopen the issue with /reopen. Mark the issue as
fresh by adding the comment /remove-lifecycle stale.
I don't set ReadinessProbe or EnableServiceLinks when creating the Knative Service.
We default enableServiceLinks to false because it's terrible lol.
I don't set ReadinessProbe or EnableServiceLinks when creating the Knative Service.
Can you try testing with a newer version? And what is the yaml you are trying to apply?