
add big notice in readme and docs that versions shipped with kubectl are outdated

Open marchenko1985 opened this issue 2 years ago • 6 comments

Eschewed features

  • [X] This issue is not requesting templating, unstructured edits, build-time side-effects from args or env vars, or any other eschewed feature.

What would you like to have added?

Imagine the feeling of a newcomer.

So I landed on the docs describing how I can customise manifests.

Here is a working example from the docs:

deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80

increase_replicas.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 100500

kustomization.yml

resources:
- deployment.yml
patches:
  - path: increase_replicas.yml

And here is my attempt to check that everything works without touching the cluster:

kubectl apply -k . --dry-run=client -o yaml | grep -q 100500

OK, understood: everything works, and it all seems really clear and simple.

Let's pretend I wish to modify cert-manager in a similar fashion.

So I replaced deployment.yml in the resources section with the cert-manager URL, and changed my patch file accordingly:

increase_replicas.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cert-manager
spec:
  replicas: 100500

kustomization.yml

resources:
- https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml
patches:
  - path: increase_replicas.yml

And suddenly nothing works; the error is:

error: no matches for Id Deployment.v1.apps/cert-manager.[noNs]; failed to find unique target for patch Deployment.v1.apps/cert-manager.[noNs]

I thought it might be related to the resources being at a remote URL, but no: I tried the working example with the deployment extracted to a public GitHub gist, and everything worked as expected.

Then I thought it might be related to the remote URL containing many resources, so I copied just the cert-manager definition to a local file. Still nothing.

Then I noticed that the error message says noNs, which obviously relates to the namespace, so I started adding it: in the patch file, in the target of the kustomization patch, in the kustomization namespace field, and in all their variations. And guess what: the error was gone, BUT the resource was not patched 🤷‍♂️
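
One of those variations, for reference (roughly reconstructed; the target selector fields are per the kustomize patches reference):

resources:
- https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml
patches:
  - path: increase_replicas.yml
    target:
      kind: Deployment
      name: cert-manager
      namespace: cert-manager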

You can already guess my feelings at this point 😂

The worst thing is that according to the docs everything should work, and no matter what crazy combinations I tried, it did not.

OK, my very last attempt: clone this repository, figure out how to build kustomize from the main branch, and start putting console logs here and there to figure out what goes wrong and where.

And the funny part is that after all that journey, all those experiments, console logs, and the manual build, everything just worked as expected out of the box, without having to go through seven circles of hell.

In the end, all that needed to be done was to define the namespace in the patch file as well, and it worked.
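
For the record, the patch that finally worked is just the same file with the namespace added (cert-manager is the namespace used by the upstream manifest):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cert-manager
  namespace: cert-manager
spec:
  replicas: 100500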

The reason: locally I had the (devel) version (which, if I understand correctly, is close enough to the current 5.1.0), but my kubectl ships with 5.0.1 🤷‍♂️

I can imagine how much easier it would be if the docs and README had a big notice for noobs like me, drawing attention to the possibility of issues like this: if something is not working, the very first thing to check is whether it works with standalone kustomize rather than the kustomize shipped with kubectl.
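
Something along these lines would have saved me hours (a sketch; the --short flag exists on kubectl v1.27 as used below, while newer kubectl prints the short form by default):

# which kustomize does your kubectl embed?
kubectl version --client --short     # e.g. "Kustomize Version: v5.0.1"
# which standalone kustomize is on your PATH?
kustomize version                    # e.g. v5.1.0
# if they differ, try the failing build with the standalone binary first
kustomize build .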

Why is this needed?

I bet I am not the first and will not be the last to struggle like this; hopefully this will save someone's time in the future.

Can you accomplish the motivating task without this feature, and if so, how?

Create an issue like this, with an error message as detailed as possible, so anyone can find it in the future.

What other solutions have you considered?

Commenting on semi-similar issues in this repo, but most of them are not exactly related.

Anything else we should know?

I am not asking for a huge change; a simple, noticeable notice in the docs should be enough. It is probably faster to add it than it was to create this issue.

Feature ownership

  • [X] I am interested in contributing this feature myself! 🎉

marchenko1985 avatar Jul 23 '23 09:07 marchenko1985

This issue is currently awaiting triage.

SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Jul 23 '23 09:07 k8s-ci-robot

Your second example doesn't build with any recent version of Kustomize. Given your sample kustomization.yaml and increase_replicas.yml, we can run the following test with multiple versions of kustomize...

#!/bin/sh

# Kustomize release versions to test, newest first
versions='
5.1.0
5.0.3
4.5.7
4.4.1
'

for version in $versions; do
	# fetch and unpack the release binary for this version
	curl -LsSf -o kustomize.tar.gz https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2Fv${version}/kustomize_v${version}_linux_amd64.tar.gz
	tar xf kustomize.tar.gz kustomize
	echo "Kustomize version: $(./kustomize version)"
	# build the kustomization in the current directory, capturing output and errors per version
	if ./kustomize build > out-${version}.yaml 2> err-${version}; then
		echo SUCCESS
	else
		echo "FAIL: $(cat err-${version})"
	fi
	echo
done

And see:

Kustomize version: v5.1.0
FAIL: Error: no matches for Id Deployment.v1.apps/cert-manager.[noNs]; failed to find unique target for patch Deployment.v1.apps/cert-manager.[noNs]

Kustomize version: v5.0.3
FAIL: Error: no matches for Id Deployment.v1.apps/cert-manager.[noNs]; failed to find unique target for patch Deployment.v1.apps/cert-manager.[noNs]

Kustomize version: {Version:kustomize/v4.5.7 GitCommit:56d82a8378dfc8dc3b3b1085e5a6e67b82966bd7 BuildDate:2022-08-02T16:35:54Z GoOs:linux GoArch:amd64}
FAIL: Error: no matches for Id Deployment.v1.apps/cert-manager.[noNs]; failed to find unique target for patch Deployment.v1.apps/cert-manager.[noNs]

Kustomize version: {Version:kustomize/v4.4.1 GitCommit:b2d65ddc98e09187a8e38adc27c30bab078c1dbf BuildDate:2021-11-11T23:36:27Z GoOs:linux GoArch:amd64}
FAIL: Error: no matches for Id apps_v1_Deployment|~X|cert-manager; failed to find unique target for patch apps_v1_Deployment|cert-manager

If I add the missing metadata.namespace setting to your patch, then it works with all those versions of kustomize.

larsks avatar Jul 24 '23 20:07 larsks

@larsks yes and no

On one hand, you are definitely correct, and that is exactly the point missing from the docs: whenever something does not work, check it with standalone kustomize rather than the kustomize shipped with kubectl.

Here is a reproduction script (please note the kubectl binary link; it should be changed depending on your system):

#!/usr/bin/env bash

# just in case, to start from scratch each time
rm -rf issue5248 || true
mkdir issue5248

# install wanted kubectl version
# note: I'm running this on an Apple M1; for Linux, change the link to:
# https://dl.k8s.io/release/v1.27.4/bin/linux/amd64/kubectl
wget -q https://dl.k8s.io/release/v1.27.4/bin/darwin/arm64/kubectl -O issue5248/kubectl
chmod +x issue5248/kubectl

# print versions
./issue5248/kubectl version --client --short 2>/dev/null
# Client Version: v1.27.4
# Kustomize Version: v5.0.1
echo ''


# sample manifests

echo 'apiVersion: apps/v1
kind: Deployment
metadata:
  name: cert-manager
spec:
  replicas: 100500' > issue5248/increase_replicas.yml

echo 'resources:
- https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml
patches:
  - path: increase_replicas.yml' > issue5248/kustomization.yml

echo 'TEST1: will fail with error complaining that it can not find unique target'
if ./issue5248/kubectl apply -k issue5248 --dry-run=client -o yaml | grep -q 100500
then
    echo "passed"
else
    echo "failed"
fi

echo ''
echo ''
echo 'modifying increase_replicas.yml by specifying namespace'
echo ''
echo 'apiVersion: apps/v1
kind: Deployment
metadata:
  name: cert-manager
  namespace: cert-manager
spec:
  replicas: 100500' > issue5248/increase_replicas.yml


echo 'TEST2: error gone, BUT, resource was not patched'
if ./issue5248/kubectl apply -k issue5248 --dry-run=client -o yaml | grep -q 100500
then
    echo "passed"
else
    echo "failed"
fi

And here is the output:

Client Version: v1.27.4
Kustomize Version: v5.0.1

TEST1: will fail with error complaining that it can not find unique target
error: no matches for Id Deployment.v1.apps/cert-manager.[noNs]; failed to find unique target for patch Deployment.v1.apps/cert-manager.[noNs]
failed


modifying increase_replicas.yml by specifying namespace

TEST2: error gone, BUT, resource was not patched
failed
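
For comparison, the same directory builds correctly through standalone kustomize, assuming a v5.1.0 binary on the PATH (matching @larsks's results above):

kustomize build issue5248 | grep -q 100500 && echo "passed"
# passed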

Hopefully this example makes the motivation clear.

marchenko1985 avatar Jul 26 '23 06:07 marchenko1985

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jan 25 '24 06:01 k8s-triage-robot

Hmm, there is a chance that, by mistake, I created this issue in the wrong place 🤷‍♂️

I have created a small PR to the docs repository with a short note describing this.

We will see whether it gets approved or not, and right after that I will close this issue; indeed, it is not worth the maintainers' attention and is purely cosmetic.

marchenko1985 avatar Jan 25 '24 07:01 marchenko1985

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Feb 24 '24 08:02 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Mar 25 '24 08:03 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Mar 25 '24 08:03 k8s-ci-robot

I am affected as well.

Hi, I am trying to use kustomize to patch a Deployment, but it keeps saying that there is no resource matching it:

error: no matches for Id Deployment.v1.apps/ingress-nginx-controller.nginx-rocket-integration; failed to find unique target for patch Deployment.v1.apps/ingress-nginx-controller.nginx-rocket-integration

I have the following base:

apiVersion: kustomize.config.k8s.io/v1beta1
resources:
  - https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

And the following overlay:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: nginx-rocket-integration
resources:
  - ../../base
  - ingressClass.yaml
patchesStrategicMerge:
  - patchDeploymentClass.yaml

ingressClass.yaml:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: integration
spec:
  controller: k8s.io/integration

patchDeploymentClass.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx-controller
  namespace: nginx-rocket-integration
spec:
  replicas: 2

If you remove the patch and build the kustomization, you will see that the deployment does indeed exist. What could be the issue here?

thecooldrop avatar Apr 04 '24 20:04 thecooldrop
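
One plausible cause, following the pattern established earlier in this thread (a hedged guess, not verified against this exact setup): patches are matched against resources before the overlay's namespace: transformer runs, and the upstream deploy.yaml places the controller in the ingress-nginx namespace, so the patch would need to reference that original namespace instead:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  replicas: 2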