kubespray
IPv6 Tracker
I'm opening a general issue for this. It's driving me mad having to fix every other open source project because they don't care about current standards. It's a huge bug from my point of view, and very unprofessional behavior not to care about standards that have existed for decades. IPv6 has existed for two decades now; it's not a new standard. It is even older than Kubernetes itself. Legacy IP has even been deprecated by the IETF. See IETF: End Work on IPv4. IPv6 is on its way to becoming the majority on the client side. IPv6 first; legacy IP should only be optional.
See:
- https://www.akamai.com/internet-station/cyber-attacks/state-of-the-internet-report/ipv6-adoption-visualization
- https://www.google.de/ipv6/statistics.html
Related bugs/PRs (pretty sure this list will grow):
- [ ] #8962
- [x] #8946

Todo: check for additional IPv6-related issues.
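For anyone picking this up: kubespray currently targets dual-stack rather than IPv6-only operation. Below is a minimal sketch of the inventory settings involved, assuming the `enable_dual_stack_networks`, `kube_service_addresses_ipv6`, `kube_pods_subnet_ipv6`, and `kube_network_node_prefix_ipv6` variable names and example ULA ranges; verify them against the `group_vars` shipped with your kubespray version.

```yaml
# Sketch of dual-stack settings in a kubespray inventory
# (group_vars/k8s_cluster/k8s-cluster.yml). Variable names and the
# example ULA ranges are assumptions; check your version's defaults.
enable_dual_stack_networks: true   # adds IPv6 alongside IPv4, not IPv6-only
kube_service_addresses_ipv6: fd85:ee78:d8a6:8607::1000/116
kube_pods_subnet_ipv6: fd85:ee78:d8a6:8607::1:0000/112
kube_network_node_prefix_ipv6: 120 # per-node pod subnet size
```

Note this only enables dual-stack; IPv6-only operation is exactly what the linked issues above are meant to track.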
@Citrullin kubespray is extremely understaffed, so ranting will not get anything done. Development is driven by individual contributors' needs, and none of the "unprofessional" contributors has needed IPv6-only yet, it seems... Your approach is "nobody should be using IPv4 anymore anyway, so it's not a problem if we break it". Guess what: few users of kubespray are using IPv6, so IPv4 is still really important. So instead of using your energy to complain and piss off maintainers, spend it making every use case work at the same time.
I thought this project was funded by the CNCF, and therefore backed by plenty of big corporations with deep enough pockets; I thought it was even part of it. At least all the signing, README, etc. suggest so. So it kind of surprises me that you are understaffed. I don't want to break legacy IP. From my perspective, and according to the IETF, it is just an optional feature. It's fine to have it, but IPv6 is the current IP version. It's just frustrating to always see this attitude only in this industry. Other industries care deeply about standardization and follow it; tech is pretty much the only industry that treats standards this way. That frustrates me, that's all. Anyway, it was too much, I see that. Sorry for the rage. I'll take a look and try to fix all of these IPv6-related issues. Just keep this as a tracker and link all related issues.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to the `/close not-planned` triage comment above.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.