unable(?) to upgrade dependencies and potential software supply chain hazard
Problem
kube-dns depends upon skydns, per go.mod, and also strangely requires overriding almost all of the Kubernetes dependencies in go.mod to 1.19.
This is very strange to me. What is the point of declaring:
require (
    // ...
    k8s.io/api v0.21.1
    // ...
)

replace (
    // Needed to pin old version for skydns.
    github.com/coreos/go-systemd => github.com/coreos/go-systemd v0.0.0-20180409111510-d1b7d058aa2a
    k8s.io/api => k8s.io/api v0.19.12
    // ...
)
Why not just declare that this package depends upon k8s.io/api v0.19.12? The replace section forces dependencies onto unsupported versions of the Kubernetes API, while the require section merely gives the false impression that they are being kept up to date with the current release.
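For illustration only, a hypothetical go.mod fragment (not the project's actual one) where the two sections agree, so the declared requirement matches the version actually selected:

    require (
        // Declare the version that is actually built against.
        k8s.io/api v0.19.12
    )

    replace (
        // Pin to the same version declared above, so require and replace agree.
        k8s.io/api => k8s.io/api v0.19.12
    )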
Supply chain hazard
I was looking at making a PR to update kube-dns to monitor endpoint slices, to address #504, but this was immediately a blocker. Why are these pinned? What will break if they're unpinned? What will the engineering cost be if there is a vulnerability in these dependencies being pinned to old versions?
- github.com/skynetservices/skydns has no active maintainers and has not been updated since Oct 15, 2019.
- github.com/coreos/go-systemd is pinned to a version dated v0.0.0-20180409...
- github.com/coredns/coredns was reverted to a ~2 year old version
- (edit, added upon attempting update) k8s.io/dns/pkg/dns imports ... google.golang.org/grpc/naming, but that package does not contain package "naming". Googling that issue suggests there is an open, 2 year old issue with etcd relying upon an outdated version of grpc.
Based on these pinned and archaic dependencies, it looks like it isn't possible to contribute an update to use the discovery.k8s.io/v1 API without much deeper domain knowledge of this project.
Solution
I am not sure. I had already forked the repo and started working on adding an informer to monitor EndpointSlices to address #504, and now I'm unsure what will break if I update k8s.io/client-go to use the discovery.k8s.io/v1 API.
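For context, a minimal standalone sketch of watching EndpointSlices through the discovery.k8s.io/v1 informer in client-go. This is not the kube-dns code; the kubeconfig handling and log output are assumptions for illustration only:

    package main

    import (
        "fmt"
        "time"

        discoveryv1 "k8s.io/api/discovery/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the default kubeconfig path (an assumption for this sketch).
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        // Shared informer factory with a 30s resync period.
        factory := informers.NewSharedInformerFactory(client, 30*time.Second)
        informer := factory.Discovery().V1().EndpointSlices().Informer()

        informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) {
                slice := obj.(*discoveryv1.EndpointSlice)
                fmt.Printf("EndpointSlice added: %s/%s\n", slice.Namespace, slice.Name)
            },
            UpdateFunc: func(oldObj, newObj interface{}) {
                slice := newObj.(*discoveryv1.EndpointSlice)
                fmt.Printf("EndpointSlice updated: %s/%s\n", slice.Namespace, slice.Name)
            },
        })

        stop := make(chan struct{})
        factory.Start(stop)
        factory.WaitForCacheSync(stop)
        // Block forever; a real controller would tie this to a signal handler instead.
        select {}
    }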
Thanks for opening this issue, Aaron.
Regarding the first concern about k8s 1.19.12 - I am upgrading that to 1.21.6 in https://github.com/kubernetes/dns/pull/503
Why do we need replace entries for each k8s sub-repo? I am not sure about this. For example, in that PR, if I remove the line:
k8s.io/api => k8s.io/api v0.21.6
I get the error:
go: k8s.io/kubernetes@v1.21.6 requires k8s.io/api@v0.0.0: reading k8s.io/api/go.mod at revision v0.0.0: unknown revision v0.0.0
when running go mod vendor.
I am not sure why this is not inferred automatically; it looks related to https://stackoverflow.com/questions/59187781/revision-v0-0-0-unknown-for-go-get-k8s-io-kubernetes
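For what it's worth, the usual explanation (per the Stack Overflow thread above) is that k8s.io/kubernetes declares its staging repos such as k8s.io/api at the placeholder version v0.0.0 and maps them to local paths with replace directives; since replace directives in a dependency's go.mod are ignored by consuming modules, every consumer has to repeat the pins itself. A sketch of what that ends up looking like (the versions here are examples only):

    replace (
        // Pin each k8s.io staging repo that k8s.io/kubernetes references at v0.0.0.
        k8s.io/api => k8s.io/api v0.21.6
        k8s.io/apimachinery => k8s.io/apimachinery v0.21.6
        k8s.io/client-go => k8s.io/client-go v0.21.6
    )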
I agree that the require and replace blocks should at least point to the same thing, so they do not give the false impression of using a newer version. That is fixed in https://github.com/kubernetes/dns/pull/503
Regarding the other concerns:
github.com/skynetservices/skydns has no active maintainers and has not been updated since Oct 15, 2019. github.com/coreos/go-systemd is pinned to a version dated v0.0.0-20180409...
I agree, this is an issue. We have not had to make changes to the skydns repo since then. If we see the need to make a change, some options are: 1) try to get on the maintainers list there, or 2) fork that repo to a different location and use that.
cc @bowei @mag-kol @cezarygerard
github.com/coredns/coredns was reverted to a ~2 year old version
This is being investigated in https://github.com/kubernetes/dns/issues/476, since it caused a performance issue.
(edit, added upon attempting update) k8s.io/dns/pkg/dns imports ... google.golang.org/grpc/naming, but that package does not contain package "naming". Googling that issue suggests there is an open, 2 year old issue with etcd relying upon an outdated version of grpc.
k8s.io/dns/pkg/dns does not import grpc/naming AFAICT; git grep "grpc\/naming" did not return anything within pkg/dns.
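If the import does enter the build transitively, something like go mod why can help trace it (illustrative commands, not taken from the thread):

    # Print the shortest import chain, if any, that pulls in the package.
    go mod why google.golang.org/grpc/naming
    # Or ask at the module level which dependency needs grpc at all.
    go mod why -m google.golang.org/grpc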
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@dpasiukevich: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
I imported skydns and upgraded its dependencies, which allowed unpinning the dependencies and upgrading to the latest versions: https://github.com/kubernetes/dns/pull/551
As for the k8s dependency imports:

k8s.io/api => k8s.io/api v0.19.12
...

I've upgraded and synced those dependencies as well. I see I only missed one case for the api (k8s.io/api => k8s.io/api v0.24.7), but all other k8s imports are consistent between the require and replace sections. I will sync the api imports some time later.
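As a quick way to spot any remaining mismatch between require and replace (an illustrative command, not from the thread), go list prints both the declared version and the effective replacement:

    # Prints "k8s.io/api <required version> => k8s.io/api <replaced version>" when a replace applies;
    # the two versions agreeing means require and replace are in sync.
    go list -m k8s.io/api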