client-go
Unable to set TaintApplyConfiguration in NodeApplyConfiguration
Modules
- k8s.io/api v0.23.5
- k8s.io/client-go v0.23.5
- k8s.io/apimachinery v0.23.5
Problem
I have been attempting to apply node taints by setting a TaintApplyConfiguration in a NodeApplyConfiguration, but I keep running into a nil pointer panic. See the minimal example below:
import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	configv1 "k8s.io/client-go/applyconfigurations/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func ApplyTaint() *corev1.Node {
	config, _ := rest.InClusterConfig()
	clientset, _ := kubernetes.NewForConfig(config)

	node, _ := clientset.CoreV1().Nodes().Get(context.Background(), "node-name", metav1.GetOptions{})
	node_config, _ := configv1.ExtractNode(node, "node_field_mgr")

	effect := corev1.TaintEffect("NoSchedule")
	time_added := metav1.Now()
	taint := configv1.Taint().WithKey("test").WithValue("test").WithEffect(effect).WithTimeAdded(time_added)

	// The assignment below is where the panic occurs.
	node_config.Spec.Taints = []configv1.TaintApplyConfiguration{*taint}

	node, _ = clientset.CoreV1().Nodes().Apply(context.Background(), node_config, metav1.ApplyOptions{FieldManager: "node_field_mgr"})
	return node
}
The line node_config.Spec.Taints = ... results in the following panic:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x38 pc=0x138fa3b]
I get the same result when setting configv1.NodeSpecApplyConfiguration and configv1.TaintApplyConfiguration using the With...() methods, as shown below. The error occurs when setting the taints on Spec.
new_spec := node_config.Spec.WithTaints(taint)
new_config := node_config.WithSpec(new_spec)
node, _ = clientset.CoreV1().Nodes().Apply(context.Background(), new_config, metav1.ApplyOptions{FieldManager: "node_field_mgr"})
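For what it is worth, a guarded version of the failing assignment is sketched below. It assumes, and this report does not confirm it, that ExtractNode can return a NodeApplyConfiguration whose Spec pointer is nil when the given field manager owns no spec fields, which would produce exactly this nil pointer dereference regardless of how the taint itself is built:

	// Hypothetical guard, assuming the panic comes from a nil Spec pointer
	// on the configuration returned by ExtractNode (not confirmed here).
	if node_config.Spec == nil {
		node_config.Spec = configv1.NodeSpec()
	}
	node_config.Spec.Taints = append(node_config.Spec.Taints, *taint)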
Suspected issue
I suspect the issue has to do with TimeAdded. Looking at the struct definition for configv1.TaintApplyConfiguration, a pointer is expected for TimeAdded:
type TaintApplyConfiguration struct {
	Key       *string         `json:"key,omitempty"`
	Value     *string         `json:"value,omitempty"`
	Effect    *v1.TaintEffect `json:"effect,omitempty"`
	TimeAdded *metav1.Time    `json:"timeAdded,omitempty"`
}
Printing the taint value with fmt.Println(*taint) shows the metav1.Time pointer rendered as a value rather than a memory address:
{0xc000bc4c50 0xc000bc4c10 0xc000bc4bf0 2022-07-15 10:16:31.2528043 +0000 UTC m=+0.080599401}
It seems pointers to the metav1.Time type print as the value rather than a memory address. This can be confirmed by:
fmt.Println(time_added)
fmt.Println(&time_added)
> 2022-07-15 10:16:31.2528043 +0000 UTC m=+0.080599401
> 2022-07-15 10:16:31.2528043 +0000 UTC m=+0.080599401
The second output line would be expected to be a memory address, not the value itself. I suspect there is some mishandling of this behaviour for the data type.
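As a side note on the printing behaviour, the standalone check below (standard library plus apimachinery only; written for illustration, not taken from this report) prints the same pointer with the %p verb. fmt.Println falls back to a type's String method when one is available, which would explain why the formatted time is shown for both the value and the pointer:

	package main

	import (
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		time_added := metav1.Now()
		fmt.Println(&time_added)        // uses the promoted String() method, so the formatted time is printed
		fmt.Printf("%p\n", &time_added) // prints the underlying memory address
	}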
Setting TimeAdded to nil results in a similar issue, with the memory address being nil. Additional testing was done using *metav1.Time.DeepCopyInto(taint.TimeAdded), with no further success.
Expected behaviour
Solutions may look like one of the following:
- allow for a non-pointer value of TimeAdded in TaintApplyConfiguration (a hypothetical shape is sketched after this list)
- allow *metav1.Time to hold a valid memory address
- minimally, allow for TimeAdded to be omitted
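For illustration only, the first option might look like the following. This is a hypothetical re-shaping of the struct shown above, not the current client-go definition:

	// Hypothetical variant of TaintApplyConfiguration with TimeAdded as a
	// value rather than a pointer. Not the current client-go definition.
	type TaintApplyConfiguration struct {
		Key       *string         `json:"key,omitempty"`
		Value     *string         `json:"value,omitempty"`
		Effect    *v1.TaintEffect `json:"effect,omitempty"`
		TimeAdded metav1.Time     `json:"timeAdded,omitempty"`
	}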
Current workaround
I am able to update node taints by using corev1.Node and setting Node.Spec.Taints directly:
func UpdateTaint() *corev1.Node {
	config, _ := rest.InClusterConfig()
	clientset, _ := kubernetes.NewForConfig(config)

	effect := corev1.TaintEffect("NoSchedule")
	time_added := metav1.Now()
	taint := corev1.Taint{
		Key:       "test",
		Value:     "test",
		Effect:    effect,
		TimeAdded: &time_added,
	}

	node, _ := clientset.CoreV1().Nodes().Get(context.Background(), "node-name", metav1.GetOptions{})
	node.Spec.Taints = []corev1.Taint{taint}

	node, _ = clientset.CoreV1().Nodes().Update(context.Background(), node, metav1.UpdateOptions{FieldManager: "node_field_mgr"})
	return node
}
Confirmed with kubectl:
kubectl get nodes -o json | jq '.items[].spec.taints'
[
  {
    "effect": "NoSchedule",
    "key": "test",
    "timeAdded": "2022-07-15T11:25:28Z",
    "value": "test"
  }
]
Interestingly, TimeAdded is still a pointer in corev1.Node.Spec.Taints, but the pointer seems to be handled correctly in this module.
Note: this workaround is not ideal, as it uses Update rather than Apply, meaning the entire object is updated rather than only the difference being applied. An Apply-based alternative is sketched below.
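For reference, here is an Apply-based sketch that avoids touching the extracted configuration's Spec entirely by constructing a fresh NodeApplyConfiguration for server-side apply. The node name and field manager are the placeholders used in the examples above, and whether this sidesteps the panic has not been verified in this report:

	// Sketch only: build a fresh apply configuration instead of mutating
	// the one returned by ExtractNode. Uses the same placeholder node name
	// and field manager as the examples above; TimeAdded is omitted.
	taint := configv1.Taint().WithKey("test").WithValue("test").WithEffect(corev1.TaintEffect("NoSchedule"))
	node_apply := configv1.Node("node-name").WithSpec(configv1.NodeSpec().WithTaints(taint))
	node, err := clientset.CoreV1().Nodes().Apply(
		context.Background(),
		node_apply,
		metav1.ApplyOptions{FieldManager: "node_field_mgr", Force: true},
	)
	if err != nil {
		// handle the error instead of discarding it
	}
	_ = node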
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to the /close not-planned command in the triage comment above:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.