Enable `apply` to replace `create` in manifest deployment
🚀 Feature Description and Motivation
`create` is tricky and poorly suited to version upgrades: re-running it against existing resources fails instead of updating them, and deleting and recreating resources risks losing the fixed IP. Let's see what options there are.
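For context, a minimal sketch of the failure mode (the manifest file names are placeholders, not the project's actual release artifacts):

```sh
# First install works fine with create:
kubectl create -f aibrix-core-v0.1.0.yaml

# Re-running create for an upgrade fails instead of updating in place:
kubectl create -f aibrix-core-v0.2.0.yaml
# Error from server (AlreadyExists): ... already exists

# apply diffs against the live objects and updates them in place, so
# existing resources (and e.g. an already-allocated Service IP) survive:
kubectl apply -f aibrix-core-v0.2.0.yaml
```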
Use Case
No response
Proposed Solution
No response
dependencies
The CustomResourceDefinition "envoyproxies.gateway.envoyproxy.io" is invalid: metadata.annotations: Too long: must have at most 262144 bytes
config
Error from server (Invalid): error when creating "config/default": CustomResourceDefinition.apiextensions.k8s.io "rayjobs.ray.io" is invalid: metadata.annotations: Too long: must have at most 262144 bytes
Error from server (Invalid): error when creating "config/default": CustomResourceDefinition.apiextensions.k8s.io "rayservices.ray.io" is invalid: metadata.annotations: Too long: must have at most 262144 bytes
It seems just these three CRDs block the use of `apply`: client-side apply stores the whole object in the `kubectl.kubernetes.io/last-applied-configuration` annotation, and these large CRDs blow past the 262144-byte annotation limit.
- Option 1: Update the CRD generation to use `crd:maxDescLen=0`
- Option 2: Use `--server-side` apply

Both options are sketched below.
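A minimal sketch of both options; the controller-gen invocation is an assumption based on a typical kubebuilder Makefile, not the project's actual build setup:

```sh
# Option 1 (assumed kubebuilder-style target): regenerate the CRDs with
# field descriptions stripped, so the manifests stay well under the
# 262144-byte annotation limit that client-side apply runs into.
controller-gen crd:maxDescLen=0 rbac:roleName=manager-role webhook \
    paths="./..." output:crd:artifacts:config=config/crd/bases

# Option 2: server-side apply never writes the last-applied-configuration
# annotation, so the size limit does not apply at all.
kubectl apply --server-side -k config/default
```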
dependency
error: Apply failed with 1 conflict: conflict with "kubectl-client-side-apply" using apps/v1: .spec.template.spec.containers[name="envoy-gateway"].resources.limits.memory
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
manifest to remove references to the fields that should keep their
current managers.
* You may co-own fields by updating your manifest to match the existing
value; in this case, you'll become the manager if the other manager(s)
stop managing the field (remove it from their configuration).
See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts
config
Apply failed with 2 conflicts: conflicts with "kubectl-client-side-apply" using apps/v1:
- .spec.template.spec.containers[name="gateway-plugin"].env[name="POD_NAME"].valueFrom.fieldRef
- .spec.template.spec.containers[name="gateway-plugin"].env[name="POD_NAMESPACE"].valueFrom.fieldRef
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
manifest to remove references to the fields that should keep their
current managers.
* You may co-own fields by updating your manifest to match the existing
value; in this case, you'll become the manager if the other manager(s)
stop managing the field (remove it from their configuration).
See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts
Apply failed with 2 conflicts: conflicts with "kubectl-client-side-apply" using apps/v1:
- .spec.template.spec.containers[name="gateway-users"].env[name="POD_NAME"].valueFrom.fieldRef
- .spec.template.spec.containers[name="gateway-users"].env[name="POD_NAMESPACE"].valueFrom.fieldRef
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
manifest to remove references to the fields that should keep their
current managers.
* You may co-own fields by updating your manifest to match the existing
value; in this case, you'll become the manager if the other manager(s)
stop managing the field (remove it from their configuration).
See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts
Apply failed with 1 conflict: conflict with "kubectl-client-side-apply" using gateway.envoyproxy.io/v1alpha1: .spec.extProc
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
manifest to remove references to the fields that should keep their
current managers.
* You may co-own fields by updating your manifest to match the existing
value; in this case, you'll become the manager if the other manager(s)
stop managing the field (remove it from their configuration).
See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts
Apply failed with 2 conflicts: conflicts with "kubectl-client-side-apply" using gateway.networking.k8s.io/v1:
- .spec.parentRefs
- .spec.rules
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
manifest to remove references to the fields that should keep their
current managers.
* You may co-own fields by updating your manifest to match the existing
value; in this case, you'll become the manager if the other manager(s)
stop managing the field (remove it from their configuration).
See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts
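These conflicts are server-side apply's field-ownership checks kicking in: the listed fields were last written by the `kubectl-client-side-apply` manager. A short sketch of how to inspect ownership and then take it over; the resource and namespace names are taken from the install logs in this thread for illustration:

```sh
# See which field manager owns which fields on the conflicting object:
kubectl get deployment aibrix-gateway-plugins -n aibrix-system \
    -o yaml --show-managed-fields | less

# Take ownership of the conflicting fields by forcing the apply:
kubectl apply --server-side --force-conflicts \
    -f https://github.com/vllm-project/aibrix/releases/download/v0.2.0/aibrix-core-v0.2.0.yaml
```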
@Jeffwan, I got the same issue. How can I fix it?
aluo@tw020:~/aibrix$ kubectl apply -f https://github.com/vllm-project/aibrix/releases/download/v0.2.0/aibrix-core-v0.2.0.yaml
namespace/aibrix-system created
customresourcedefinition.apiextensions.k8s.io/kvcaches.orchestration.aibrix.ai created
customresourcedefinition.apiextensions.k8s.io/modeladapters.model.aibrix.ai created
customresourcedefinition.apiextensions.k8s.io/podautoscalers.autoscaling.aibrix.ai created
customresourcedefinition.apiextensions.k8s.io/rayclusterfleets.orchestration.aibrix.ai created
customresourcedefinition.apiextensions.k8s.io/rayclusterreplicasets.orchestration.aibrix.ai created
customresourcedefinition.apiextensions.k8s.io/rayclusters.ray.io created
serviceaccount/aibrix-controller-manager created
serviceaccount/aibrix-gateway-plugins created
serviceaccount/aibrix-gpu-optimizer-sa created
serviceaccount/aibrix-kuberay-operator created
role.rbac.authorization.k8s.io/aibrix-controller-manager-leader-election-role created
role.rbac.authorization.k8s.io/aibrix-kuberay-operator-leader-election created
clusterrole.rbac.authorization.k8s.io/aibrix-autoscaling-podautoscaler-editor-role created
clusterrole.rbac.authorization.k8s.io/aibrix-autoscaling-podautoscaler-viewer-role created
clusterrole.rbac.authorization.k8s.io/aibrix-controller-manager-role created
clusterrole.rbac.authorization.k8s.io/aibrix-gateway-plugins-role created
clusterrole.rbac.authorization.k8s.io/aibrix-gpu-optimizer-clusterrole created
clusterrole.rbac.authorization.k8s.io/aibrix-kuberay-operator created
clusterrole.rbac.authorization.k8s.io/aibrix-model-modeladapter-editor-role created
clusterrole.rbac.authorization.k8s.io/aibrix-model-modeladapter-viewer-role created
clusterrole.rbac.authorization.k8s.io/aibrix-orchestration-kvcache-editor-role created
clusterrole.rbac.authorization.k8s.io/aibrix-orchestration-kvcache-viewer-role created
clusterrole.rbac.authorization.k8s.io/aibrix-orchestration-rayclusterfleet-editor-role created
clusterrole.rbac.authorization.k8s.io/aibrix-orchestration-rayclusterfleet-viewer-role created
clusterrole.rbac.authorization.k8s.io/aibrix-orchestration-rayclusterreplicaset-editor-role created
clusterrole.rbac.authorization.k8s.io/aibrix-orchestration-rayclusterreplicaset-viewer-role created
clusterrole.rbac.authorization.k8s.io/aibrix-rayjob-editor-role created
clusterrole.rbac.authorization.k8s.io/aibrix-rayjob-viewer-role created
clusterrole.rbac.authorization.k8s.io/aibrix-rayservice-editor-role created
clusterrole.rbac.authorization.k8s.io/aibrix-rayservice-viewer-role created
rolebinding.rbac.authorization.k8s.io/aibrix-controller-manager-leader-election-rolebinding created
rolebinding.rbac.authorization.k8s.io/aibrix-kuberay-operator-leader-election created
clusterrolebinding.rbac.authorization.k8s.io/aibrix-controller-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/aibrix-gateway-plugins-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/aibrix-gpu-optimizer-clusterrole-binding created
clusterrolebinding.rbac.authorization.k8s.io/aibrix-kuberay-operator created
service/aibrix-controller-manager-metrics-service created
service/aibrix-gateway-plugins created
service/aibrix-gpu-optimizer created
service/aibrix-kuberay-operator created
service/aibrix-metadata-service created
service/aibrix-redis-master created
deployment.apps/aibrix-controller-manager created
deployment.apps/aibrix-gateway-plugins created
deployment.apps/aibrix-gpu-optimizer created
deployment.apps/aibrix-kuberay-operator created
deployment.apps/aibrix-metadata-service created
deployment.apps/aibrix-redis-master created
clienttrafficpolicy.gateway.envoyproxy.io/aibrix-client-connection-buffersize created
envoyextensionpolicy.gateway.envoyproxy.io/aibrix-gateway-plugins-extension-policy created
envoypatchpolicy.gateway.envoyproxy.io/aibrix-epp created
gateway.gateway.networking.k8s.io/aibrix-eg created
gatewayclass.gateway.networking.k8s.io/aibrix-eg created
httproute.gateway.networking.k8s.io/aibrix-reserved-router created
Error from server (Invalid): error when creating "https://github.com/vllm-project/aibrix/releases/download/v0.2.0/aibrix-core-v0.2.0.yaml": CustomResourceDefinition.apiextensions.k8s.io "rayjobs.ray.io" is invalid: metadata.annotations: Too long: must have at most 262144 bytes
Error from server (Invalid): error when creating "https://github.com/vllm-project/aibrix/releases/download/v0.2.0/aibrix-core-v0.2.0.yaml": CustomResourceDefinition.apiextensions.k8s.io "rayservices.ray.io" is invalid: metadata.annotations: Too long: must have at most 262144 bytes
Just use this; it should work:

# Install the component dependencies
kubectl create -k "github.com/vllm-project/aibrix/config/dependency?ref=v0.2.0"

# Install the aibrix components
kubectl create -k "github.com/vllm-project/aibrix/config/overlays/release?ref=v0.2.0"
@andyluo7 Due to some dependency issues, it's not easy to switch to `apply` just yet; we will talk with the maintainers, or move to our own distribution later. Please stick with `create` for the moment.
@donwany Thanks a lot!
@Jeffwan @andyluo7 On my end, I was able to get past those issues by using the `--server-side=true` and `--force-conflicts` flags. Example:
kubectl apply --server-side=true --force-conflicts -f https://github.com/vllm-project/aibrix/releases/download/v0.2.0/aibrix-dependency-v0.2.0.yaml
kubectl apply --server-side=true --force-conflicts -f https://github.com/vllm-project/aibrix/releases/download/v0.2.0/aibrix-core-v0.2.0.yaml
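A quick way to confirm everything settled afterwards (the namespace and deployment names are the ones from the install logs above):

```sh
kubectl get pods -n aibrix-system
kubectl rollout status deployment/aibrix-gateway-plugins -n aibrix-system
```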
Nice. Will try.