Add support for fetching files from a private GCS bucket.
- Service account authentication is available when running on GCE.
- If we are accessing a Google Cloud Storage bucket, we now fetch an auth token and apply it to any requests made.
- This includes the initial request to download nodeup and subsequent requests made by nodeup to download additional files.
- It also rewrites requests made as part of preflight checking (e.g. checking for the existence of the nodeup binary) when those requests are GCS requests.

Room for improvement:
- We could support rewrites in the other direction as well: allow the user to specify a gs:// scheme URL and rewrite it to storage.googleapis.com when fetching nodeup.
- This would be useful generically, and could be extended to include authenticated requests to S3 buckets.
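For illustration, here is a minimal Go sketch of the token flow described above. The metadata endpoint is GCE's standard token endpoint; the function names, URL check, and overall shape are hypothetical, not the PR's actual code:

```go
// Sketch: fetch a token from the GCE metadata server and attach it to
// requests bound for Google Cloud Storage. Illustrative only.
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// gceAccessToken asks the GCE metadata server for the default service
// account's OAuth2 access token. This only works on a GCE instance.
func gceAccessToken() (string, error) {
	req, err := http.NewRequest("GET",
		"http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token", nil)
	if err != nil {
		return "", err
	}
	req.Header.Set("Metadata-Flavor", "Google")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var token struct {
		AccessToken string `json:"access_token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&token); err != nil {
		return "", err
	}
	return token.AccessToken, nil
}

// fetch applies the token only to GCS requests; other URLs are untouched.
func fetch(url string) (io.ReadCloser, error) {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, err
	}
	if strings.HasPrefix(url, "https://storage.googleapis.com/") {
		token, err := gceAccessToken()
		if err != nil {
			return nil, err
		}
		req.Header.Set("Authorization", "Bearer "+token)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	if resp.StatusCode != http.StatusOK {
		resp.Body.Close()
		return nil, fmt.Errorf("unexpected status %s fetching %s", resp.Status, url)
	}
	return resp.Body, nil
}

func main() {
	// Hypothetical private object, for demonstration only.
	body, err := fetch("https://storage.googleapis.com/my-private-bucket/nodeup")
	if err != nil {
		fmt.Println("fetch failed:", err)
		return
	}
	defer body.Close()
	n, _ := io.Copy(io.Discard, body)
	fmt.Printf("downloaded %d bytes\n", n)
}
```

A real implementation would presumably cache the token and refresh it before its expires_in window elapses, rather than hitting the metadata server per request.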
Hi @nat-henderson. Thanks for your PR.
I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
To complete the pull request process, please assign zetaab after the PR has been reviewed.
You can assign the PR to them by writing /assign @zetaab in a comment when ready.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
/ok-to-test
/cc @justinsb
Okay ... sorry for what may be a ton of PR noise here. The same commands pass locally, so there must be something about the execution environment that I need to sort out, which means I'll have to upload and re-run tests at least a few times...
Okay, that looks like a timeout, so I'm just going to try it again.
/test pull-kops-e2e-kubernetes-aws
Hm, same problem:
The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
Suppose I'll leave that there, then. @justinsb, let me know during your review whether that looks like a real error caused by my PR.
I really like the idea of tweaking the bootstrap script to support metadata-based authentication (not least because it isn't that hard, though I'm a little scared about the AWS v2 exchange!).
Looking at the nodeup logic that maps https://storage.googleapis.com back to gs://, I'm wondering whether we can avoid the gs:// -> https://storage.googleapis.com translation in the first place and whether that might be easier. I might try an experiment :-)
I did a little experiment, and it looks (as you suggested) like we need some small changes, e.g. to the download-from-https logic, but it might be easier to accept a KOPS_BASE_URL that is gs:// and only rewrite NodeUpSourceAmd64 / NodeUpSourceArm64, as sketched below. The advantage of this is that we keep the vfs layer a little "purer": it doesn't "magically" switch from https://storage.googleapis.com to gs://. My scratch-pad (based on your changes) is here: https://github.com/justinsb/kops/tree/authenticated_gcs_download
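For concreteness, a hypothetical sketch of that rewrite in Go. The helper name and the base URL are illustrative, not kops's real API; see the scratch-pad branch above for the actual experiment:

```go
// Sketch: keep gs:// in KOPS_BASE_URL and translate to the public HTTPS
// form only at the point where the nodeup source URLs are rendered.
package main

import (
	"fmt"
	"strings"
)

// gsToHTTPS rewrites a gs://bucket/object URL to its
// https://storage.googleapis.com/bucket/object equivalent.
func gsToHTTPS(u string) string {
	if strings.HasPrefix(u, "gs://") {
		return "https://storage.googleapis.com/" + strings.TrimPrefix(u, "gs://")
	}
	return u
}

func main() {
	base := "gs://my-kops-bucket/kops/1.26.0" // hypothetical KOPS_BASE_URL
	nodeUpSourceAmd64 := gsToHTTPS(base + "/linux/amd64/nodeup")
	fmt.Println(nodeUpSourceAmd64)
	// Prints: https://storage.googleapis.com/my-kops-bucket/kops/1.26.0/linux/amd64/nodeup
}
```

The appeal of this shape is that the vfs layer never sees a scheme change; only the rendered nodeup source URLs are rewritten.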
@nat-henderson: PR needs rebase.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Nope, I'm still here! If you can come back to this PR I do still want to get it in. :)
@nat-henderson: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| pull-kops-e2e-cni-weave | 169bdd7e98c8f645a98a3ad4a77213db1592f113 | link | true | /test pull-kops-e2e-cni-weave |
| pull-kops-e2e-cni-amazonvpc | 169bdd7e98c8f645a98a3ad4a77213db1592f113 | link | true | /test pull-kops-e2e-cni-amazonvpc |
| pull-kops-e2e-cni-kuberouter | 169bdd7e98c8f645a98a3ad4a77213db1592f113 | link | true | /test pull-kops-e2e-cni-kuberouter |
| pull-kops-e2e-cni-calico | 169bdd7e98c8f645a98a3ad4a77213db1592f113 | link | true | /test pull-kops-e2e-cni-calico |
| pull-kops-e2e-cni-calico-ipv6 | 169bdd7e98c8f645a98a3ad4a77213db1592f113 | link | true | /test pull-kops-e2e-cni-calico-ipv6 |
| pull-kops-e2e-cni-cilium | 169bdd7e98c8f645a98a3ad4a77213db1592f113 | link | true | /test pull-kops-e2e-cni-cilium |
| pull-kops-e2e-cni-flannel | 169bdd7e98c8f645a98a3ad4a77213db1592f113 | link | true | /test pull-kops-e2e-cni-flannel |
| pull-kops-e2e-k8s-gce-cilium | 169bdd7e98c8f645a98a3ad4a77213db1592f113 | link | true | /test pull-kops-e2e-k8s-gce-cilium |
| pull-kops-e2e-aws-karpenter | 169bdd7e98c8f645a98a3ad4a77213db1592f113 | link | true | /test pull-kops-e2e-aws-karpenter |
| pull-kops-e2e-k8s-aws-calico | 169bdd7e98c8f645a98a3ad4a77213db1592f113 | link | true | /test pull-kops-e2e-k8s-aws-calico |
| pull-kops-build | 169bdd7e98c8f645a98a3ad4a77213db1592f113 | link | true | /test pull-kops-build |
| pull-kops-test | 169bdd7e98c8f645a98a3ad4a77213db1592f113 | link | true | /test pull-kops-test |
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
@nat-henderson: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| pull-kops-e2e-cni-weave | 169bdd7e98c8f645a98a3ad4a77213db1592f113 | link | true | /test pull-kops-e2e-cni-weave |
| pull-kops-e2e-cni-amazonvpc | 169bdd7e98c8f645a98a3ad4a77213db1592f113 | link | true | /test pull-kops-e2e-cni-amazonvpc |
| pull-kops-e2e-cni-kuberouter | 169bdd7e98c8f645a98a3ad4a77213db1592f113 | link | true | /test pull-kops-e2e-cni-kuberouter |
| pull-kops-e2e-cni-calico | 169bdd7e98c8f645a98a3ad4a77213db1592f113 | link | true | /test pull-kops-e2e-cni-calico |
| pull-kops-e2e-cni-calico-ipv6 | 169bdd7e98c8f645a98a3ad4a77213db1592f113 | link | true | /test pull-kops-e2e-cni-calico-ipv6 |
| pull-kops-e2e-cni-cilium | 169bdd7e98c8f645a98a3ad4a77213db1592f113 | link | true | /test pull-kops-e2e-cni-cilium |
| pull-kops-e2e-cni-flannel | 169bdd7e98c8f645a98a3ad4a77213db1592f113 | link | true | /test pull-kops-e2e-cni-flannel |
| pull-kops-e2e-k8s-gce-cilium | 169bdd7e98c8f645a98a3ad4a77213db1592f113 | link | true | /test pull-kops-e2e-k8s-gce-cilium |
| pull-kops-e2e-aws-karpenter | 169bdd7e98c8f645a98a3ad4a77213db1592f113 | link | true | /test pull-kops-e2e-aws-karpenter |
| pull-kops-e2e-k8s-aws-calico | 169bdd7e98c8f645a98a3ad4a77213db1592f113 | link | true | /test pull-kops-e2e-k8s-aws-calico |
| pull-kops-build | 169bdd7e98c8f645a98a3ad4a77213db1592f113 | link | true | /test pull-kops-build |
| pull-kops-test | 169bdd7e98c8f645a98a3ad4a77213db1592f113 | link | true | /test pull-kops-test |
| pull-kops-e2e-cni-cilium-eni | 169bdd7e98c8f645a98a3ad4a77213db1592f113 | link | true | /test pull-kops-e2e-cni-cilium-eni |
| pull-kops-e2e-cni-cilium-etcd | 169bdd7e98c8f645a98a3ad4a77213db1592f113 | link | true | /test pull-kops-e2e-cni-cilium-etcd |
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Reopen this PR with /reopen
- Mark this PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Reopen this PR with /reopen
- Mark this PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.