Prow web UI exposes private repositories
The plugin catalog and the command help pages expose private repositories in the repository dropdown. For example, both show the repository etcd-io/etcd-ghsa-j8g6-82f3-cvhp, which is private.
I tried tracing the issue, and it seems to come from the client's GetRepos(...), which returns all repositories accessible to the user/organization without filtering out private ones (though I may be wrong that this is the root cause):
https://github.com/kubernetes-sigs/prow/blob/79d27b6e3be35974fbe103d3f574d70dfea6f03c/pkg/github/client.go#L2529-L2560
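For illustration, here is a minimal sketch of the kind of caller-side filtering that could close this gap. It assumes the `github.Repo` struct returned by `GetRepos` carries `Private` and `FullName` fields and that the import path matches the repository layout linked above; treat it as a sketch under those assumptions, not a verified patch.

```go
package main

import (
	"fmt"

	"sigs.k8s.io/prow/pkg/github" // import path assumed from the repo layout
)

// filterPublicRepos drops private repositories from a GetRepos result before
// the list reaches UI code such as the plugin catalog's repository dropdown.
// github.Repo and its Private/FullName fields are assumed from pkg/github.
func filterPublicRepos(repos []github.Repo) []github.Repo {
	public := make([]github.Repo, 0, len(repos))
	for _, r := range repos {
		if r.Private {
			// Never surface private repository names in the web UI.
			continue
		}
		public = append(public, r)
	}
	return public
}

func main() {
	repos := []github.Repo{
		{FullName: "etcd-io/etcd", Private: false},
		{FullName: "etcd-io/some-private-fork", Private: true}, // hypothetical name
	}
	for _, r := range filterPublicRepos(repos) {
		fmt.Println(r.FullName) // only "etcd-io/etcd" is printed
	}
}
```

Whether such filtering belongs in GetRepos itself or in the frontend handlers that build the dropdown is a separate question; doing it in the UI layer avoids changing client behavior for callers that may legitimately need to see private repositories.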
This is obviously not expected behavior, but it doesn't seem harmful to me: the repositories show up in the dropdown, but there is no data behind them when one is selected. Essentially, we are leaking the names of private repositories. Is there a scenario you can think of where this becomes a serious issue?
FWIW, if this is considered a problem, I'd highly recommend using a distinct instance.
prow.k8s.io used to have private repositories for some sensitive, embargoed security-patch-related work, but that's not the approach now, and the Kubernetes project is primarily focused on open repos.
It's super likely that there will be other gaps, and I would encourage using an isolated deployment for anything so sensitive that the names of the repos are considered an info leak.
I don't have a local Prow deployment. However, I noticed this information leak while checking Prow's Web UI. I don't know if other Kubernetes organizations make use of private repositories. I noticed etcd's because I'm a contributor.
Feel free to close if you feel this is irrelevant or the risk is minor :)
We should probably consider this a bug anyhow; I just didn't want anyone getting the wrong idea about how secure this is.
That includes etcd; we should discuss the requirements for etcd and private repos privately in Slack with the other K8s Infra / Testing leads.
/kind bug
/sig testing
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.