SubnamespaceAnchor status should indicate errors reconciling labels/annotations
I did a manual test of the new feature introduced in https://github.com/kubernetes-sigs/hierarchical-namespaces/pull/149, and even though this feature is marked as beta, I think the user feedback should be improved.
Given a default configuration of HNC, which does not allow any managed labels/annotations, I created a SubnamespaceAnchor. Initially I did not try to manage any namespace labels or annotations:
apiVersion: hnc.x-k8s.io/v1alpha2
kind: SubnamespaceAnchor
metadata:
  name: child
  namespace: parent
The child namespace is created successfully.
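For reference, this can be confirmed from the child namespace itself; the hnc.x-k8s.io/subnamespace-of annotation is the marker HNC puts on subnamespaces, though the exact output may vary by version:
kubectl get namespace child -o yaml
# Expected (abridged; output may vary by HNC version): the namespace
# carries HNC's subnamespace marker, e.g.
#   metadata:
#     annotations:
#       hnc.x-k8s.io/subnamespace-of: parent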
Then I modified the SubnamespaceAnchor, trying to add one label and one annotation to the child namespace:
apiVersion: hnc.x-k8s.io/v1alpha2
kind: SubnamespaceAnchor
metadata:
  name: child
  namespace: parent
spec:
  annotations:
    - key: annot-key
      value: annot-value
  labels:
    - key: label-key
      value: label-value
The resource is updated successfully, and the status indicates a successful reconcile:
status:
  status: Ok
But no label or annotation is added to the child namespace. This is expected, but I think the SubnamespaceAnchor status should indicate that HNC was unable to reconcile the spec. The HNC controller pod logs the error, but those logs are only available to admins, not to the user.
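As a rough sketch of the feedback I have in mind (the field values below are hypothetical and not part of the current API), the anchor could surface the failed reconcile directly in its status:
# Hypothetical status sketch; today the API only reports "Ok" here.
status:
  status: Conflict   # hypothetical value: anything other than Ok
  # hypothetical message field carrying the reconcile error:
  message: 'label "label-key" is not a managed label and cannot be configured'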
Note: trying to create the SubnamespaceAnchor with the label/annotation is correctly denied by the webhook, so this is an update-only problem.
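For completeness: the update would only be legal if the keys were declared as managed in the HNC deployment. If I remember the flags from the managed-labels feature correctly (worth double-checking against your HNC version), that means passing arguments like these to the manager container:
# Excerpt from the hnc-controller-manager Deployment spec (flag names
# assumed from the managed-labels feature; verify for your HNC version).
args:
  - --managed-namespace-label=label-key
  - --managed-namespace-annotation=annot-key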
Marking as v1.1 but I'm in favour of backporting any fix to v1.0.1.
/good-first-issue
@adrianludwin: This request has been marked as suitable for new contributors.
Guidelines
Please ensure that the issue body includes answers to the following questions:
- Why are we solving this issue?
- To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
- Does this issue have zero to low barrier of entry?
- How can the assignee reach out to you for help?
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed by commenting with the /remove-good-first-issue command.
In response to this:
Marking as v1.1 but I'm in favour of backporting any fix to v1.0.1.
/good-first-issue
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Hey, I was going to give this issue a go as a first issue, but I'm unable to reproduce. Once I modify the SubnamespaceAnchor I get:
The SubnamespaceAnchor "child" is invalid:
* spec.labels: Invalid value: "label-key": not a managed label and cannot be configured
* spec.annotations: Invalid value: "annot-key": not a managed annotation and cannot be configured
Looks like it's been fixed by #168, so I'm thinking this issue can be closed?
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this: /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.