hack/install_kustomize.sh fails intermittently
Describe the bug
It seems like a change in how GitHub API responses are returned is causing issues with the bash script hack/install_kustomize.sh. Instead of pretty-printed JSON, the output is returned as a single line. As a result, the wrong download link is selected: grep has only a single line to search, it matches that line because the pattern appears somewhere in it, and cut then selects the wrong URL for download.
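For reference, the selection pipeline inside the script is roughly the following (reconstructed from the -x trace below, so it may not match the script verbatim):

release_url=https://api.github.com/repos/kubernetes-sigs/kustomize/releases
RELEASE_URL=$(curl -s "$release_url" |
  grep "browser_download.*linux_amd64" |
  cut -d '"' -f 4 |
  sort -V |
  tail -n 1)

With the single-line response, grep passes through the one huge line, cut -d '"' -f 4 returns the fourth quoted field of the entire response (an api.github.com releases URL), and that is what later gets handed to curl and tar.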
Platform
Tested on Ubuntu 20.04.4 LTS with curl 7.68.0
Additional context
Some output from curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash -xs -- 3.8.7 ~/bin (taken from the operator-sdk Makefile):
+ set -e
+ unset CDPATH
+ where=/home/bewing
+ release_url=https://api.github.com/repos/kubernetes-sigs/kustomize/releases
+ '[' -n 3.8.7 ']'
+ [[ 3.8.7 =~ ^[0-9]+(\.[0-9]+){2}$ ]]
+ version=v3.8.7
+ release_url=https://api.github.com/repos/kubernetes-sigs/kustomize/releases/tags/kustomize%2Fv3.8.7
<output omitted>
++ curl -s https://api.github.com/repos/kubernetes-sigs/kustomize/releases/tags/kustomize%2Fv3.8.7
+ releases='{"url":"https://api.github.com/repos/kubernetes-sigs/kustomize/releases/33829673","assets_url":"https://api.github.com/repos/kubernetes-sigs/kustomize/releases/33829673/assets","upload_url":"https://uploads.github.com/repos/kubernetes-sigs/kustomize/releases/33829673/assets{?name,label}","html_url":"https://github.com/kubernetes-sigs/kustomize/releases/tag/kustomize/v3.8.7","id":33829673,"author":{"login":"monopole","id":2928188,"node_id":"MDQ6VXNlcjI5MjgxODg=","avatar_url":"https://avatars.githubusercontent.com/u/2928188?v=4","gravatar_id":"","url":"https://api.github.com/users/monopole","html_url":"https://github.com/monopole","followers_url":"https://api.github.com/users/monopole/followers","following_url":"https://api.github.com/users/monopole/following{/other_user}","gists_url":"https://api.github.com/users/monopole/gists{/gist_id}","starred_url":"https://api.github.com/users/monopole/starred{/owner}{/repo}","subscriptions_url":"https://api.github.com/users/monopole/subscriptions","organizations_url":"https://api.github.com/users/monopole/orgs","repos_url":"https://api.github.com/users/monopole/repos","events_url":"https://api.github.com/users/monopole/events{/privacy}", <long line truncated>
<long output again removed>
++ cut -d '"' -f 4
++ sort -V
++ tail -n 1
+ RELEASE_URL=https://api.github.com/repos/kubernetes-sigs/kustomize/releases/33829673
+ '[' '!' -n https://api.github.com/repos/kubernetes-sigs/kustomize/releases/33829673 ']'
+ curl -sLO https://api.github.com/repos/kubernetes-sigs/kustomize/releases/33829673
+ tar xzf './kustomize_v*_linux_amd64.tar.gz'
tar (child): ./kustomize_v*_linux_amd64.tar.gz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
If I download install_kustomize.sh and modify the script to pass the output from the release_url through | jq . to pretty-print it, it works as expected.
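Concretely, the local change I made is roughly the following (paraphrased, not the exact diff):

releases=$(curl -s "$release_url" | jq .)

With the response pretty-printed again, the existing grep/cut/sort pipeline picks the correct browser_download_url.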
@bewing: This issue is currently awaiting triage.
SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
A bunch of us tried to reproduce this during today's bug scrub and we were not able to. Can you please confirm whether you are still experiencing this issue?
/triage needs-information
I am still experiencing this issue.
Here is a gist with the full output from executing the script with -x set: https://gist.github.com/bewing/62f50954a6a8cb9e6ca1e3c6a100fcb7
The machine in question is colocated with ServerCentral in Chicago, IL
As additional information: this is non-deterministic. Once, it did manage to download and install the binary successfully. I wonder if GitHub or its cache might be doing some A/B testing of removing newlines from API output?
Curious whether, in your testing, curl -s https://api.github.com/repos/kubernetes-sigs/kustomize/releases returns the output as one extremely long line or as multiple lines?
$ curl -s https://api.github.com/repos/kubernetes-sigs/kustomize/releases | wc -l
0
edit:
again, this is non-deterministic. I tried again just now and got the multi-line response:
$ curl -s https://api.github.com/repos/kubernetes-sigs/kustomize/releases | wc -l
3572
If newlines are not present, adding jq into the invocation can restore them, and make this work properly:
# non-working
$ curl -s https://api.github.com/repos/kubernetes-sigs/kustomize/releases | grep browser_download.*linux_amd64 | cut -d '"' -f 4 | sort -V | tail -n 1
https://api.github.com/repos/kubernetes-sigs/kustomize/releases/73452880
# working
$ curl -s https://api.github.com/repos/kubernetes-sigs/kustomize/releases | jq | grep browser_download.*linux_amd64 | cut -d '"' -f 4 | sort -V | tail -n 1
https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v4.5.7/kustomize_v4.5.7_linux_amd64.tar.gz
I'm also seeing this problem, but only recently. It definitely worked a few days ago. In my case it's on macOS.
+ tar xzf './kustomize_v*_darwin_arm64.tar.gz'
tar: Error opening archive: Failed to open './kustomize_v*_darwin_arm64.tar.gz'
The pipe to jq workaround works for me.
So should we modify the script to use jq? What's the current status here?
I'm not sure. How can you ensure that jq is present? It looks like the current script tries very hard to use only common binaries (curl, cut, grep). Adding a new dependency might break a lot of things downstream (operator-sdk, for example) that use this script to install kustomize during build steps when it is not already present in the path:
https://github.com/operator-framework/operator-sdk/blob/7ff900717f9179665dea414ac54ccdedaef2b4fe/internal/plugins/ansible/v1/scaffolds/internal/templates/makefile.go#L127-L139
That's a good point. Maybe finding a solution without adding an additional dependency would be the best way forward.
Maybe json_pp is common enough to use?
curl -s https://api.github.com/repos/kubernetes-sigs/kustomize/releases | json_pp -json_opt pretty,canonical | grep browser_download.*linux_amd64 | cut -d '"' -f 4 | sort -V | tail -n 1
Indeed, we want to avoid introducing extra dependencies to this script. Although I'd also prefer to avoid adding complexity, if we have no better option we could check whether the most commonly available tool is present, and fall back to using the raw output if it isn't (perhaps with a warning message).
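Something along these lines, perhaps (untested sketch; the helper name and warning text are placeholders, and the linux_amd64 pattern stands in for however the script selects the platform):

pretty_print_json() {
  # Prefer jq, fall back to json_pp, and as a last resort pass the raw output through.
  if command -v jq >/dev/null 2>&1; then
    jq .
  elif command -v json_pp >/dev/null 2>&1; then
    json_pp -json_opt pretty,canonical
  else
    echo "WARNING: neither jq nor json_pp found; using raw API output" >&2
    cat
  fi
}

RELEASE_URL=$(curl -s "$release_url" |
  pretty_print_json |
  grep "browser_download.*linux_amd64" |
  cut -d '"' -f 4 |
  sort -V |
  tail -n 1)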
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle-stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle-rotten
Ping @bewing
Haven't thought about this in a bit. Might the best approach be to:
- Download https://api.github.com/repos/kubernetes-sigs/kustomize/releases, store it as a variable, and test how many lines it contains
- If the line count is > 1, use the existing grep against the string
- If the line count is < 1 (wc -l reports 0 for the single-line response), use a new grep designed to extract the release URL, using grep --only-matching (is this considered portable enough?)
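A rough sketch of that idea (untested; the grep pattern is only an illustration and linux_amd64 stands in for the detected platform):

releases=$(curl -s "$release_url")
if [ "$(printf '%s\n' "$releases" | wc -l)" -gt 1 ]; then
  # Multi-line (pretty-printed) response: the existing pipeline works as before.
  RELEASE_URL=$(printf '%s\n' "$releases" |
    grep "browser_download.*linux_amd64" |
    cut -d '"' -f 4 | sort -V | tail -n 1)
else
  # Single-line response: extract only the matching URL fields first.
  RELEASE_URL=$(printf '%s\n' "$releases" |
    grep --only-matching '"browser_download_url": *"[^"]*linux_amd64[^"]*"' |
    cut -d '"' -f 4 | sort -V | tail -n 1)
fi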
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
/remove-lifecycle rotten
@bewing: Reopened this issue.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
/remove-lifecycle rotten