+kubebuilder:default={} does not use nested defaults
So I believe I've exposed a bug in kubebuilder: when I define a field with +kubebuilder:default={} that is a pointer to another type whose fields have their own +kubebuilder:default values, I assume those nested defaults should propagate to the parent object's default. For example:
// policyAuditConfig is the configuration for network policy audit events. If unset,
// reported defaults are used.
// +kubebuilder:default={}
// +optional
PolicyAuditConfig *PolicyAuditConfig `json:"policyAuditConfig,omitempty"`
where PolicyAuditConfig is defined in the same package:
type PolicyAuditConfig struct {
    // rateLimit is the approximate maximum number of messages to generate per-second per-node. If
    // unset, the default of 20 msg/sec is used.
    // +kubebuilder:default=20
    // +kubebuilder:validation:Minimum=1
    // +optional
    RateLimit *uint32 `json:"rateLimit,omitempty"`

    // maxFileSize is the max size an ACL_audit log file is allowed to reach before rotation occurs.
    // Units are in MB and the default is 50MB.
    // +kubebuilder:default=50
    // +kubebuilder:validation:Minimum=1
    // +optional
    MaxFileSize *uint32 `json:"maxFileSize,omitempty"`

    // destination is the location for policy log messages.
    // Regardless of this config, persistent logs will always be dumped to the host
    // at /var/log/ovn/. Additionally, syslog output may be configured as follows.
    // Valid values are:
    // - "libc" -> to use the libc syslog() function of the host node's journald process
    // - "udp:host:port" -> for sending syslog over UDP
    // - "unix:file" -> for using the UNIX domain socket directly
    // - "null" -> to discard all messages logged to syslog
    // The default is "null".
    // +kubebuilder:default=null
    // +kubebuilder:pattern='^libc$|^null$|^udp:(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]):([0-9]){0,5}$|^unix:(\/[^\/ ]*)+([^\/\s])$'
    // +optional
    Destination string `json:"destination,omitempty"`

    // syslogFacility is the RFC5424 facility for generated messages, e.g. "kern". Default is "local0".
    // +kubebuilder:default=local0
    // +kubebuilder:Enum=kern;user;mail;daemon;auth;syslog;lpr;news;uucp;clock;ftp;ntp;audit;alert;clock2;local0;local1;local2;local3;local4;local5;local6;local7
    // +optional
    SyslogFacility string `json:"syslogFacility,omitempty"`
}
However, rather than rendering the default correctly, I see the following in the generated YAML:
policyAuditConfig:
  description: policyAuditConfig is the configuration for network policy audit events. If unset, reported defaults are used.
  type: object
  default: null
  properties:
    destination:
      description: 'destination is the location for policy log messages. Regardless of this config, persistent logs will always be dumped to the host at /var/log/ovn/. Additionally, syslog output may be configured as follows. Valid values are: - "libc" -> to use the libc syslog() function of the host node''s journald process - "udp:host:port" -> for sending syslog over UDP - "unix:file" -> for using the UNIX domain socket directly - "null" -> to discard all messages logged to syslog The default is "null".'
      type: string
      default: "null"
    maxFileSize:
      description: maxFileSize is the max size an ACL_audit log file is allowed to reach before rotation occurs. Units are in MB and the default is 50MB.
      type: integer
      format: int32
      default: 50
      minimum: 1
    rateLimit:
      description: rateLimit is the approximate maximum number of messages to generate per-second per-node. If unset, the default of 20 msg/sec is used.
      type: integer
      format: int32
      default: 20
      minimum: 1
    syslogFacility:
      description: syslogFacility is the RFC5424 facility for generated messages, e.g. "kern". Default is "local0".
      type: string
      default: local0
Specifically, the rendered default value is null, where it should be populated automatically from the nested field defaults defined above.
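In other words, I would expect the rendered default for the parent field to look something like this (a rough sketch based on the nested defaults above, not actual generated output):

policyAuditConfig:
  type: object
  default:          # expected: populated from the nested field defaults
    destination: "null"
    maxFileSize: 50
    rateLimit: 20
    syslogFacility: local0

or, at minimum, default: {} so that, as I understand it, the API server's structural-schema defaulting could fill in the nested fields on its own.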
I'm happy to help propose a fix here; however, after spending some time playing with the codebase, I'm realizing I will probably need some assistance from someone more experienced with kubebuilder.
I already reached out on Slack and got no response, which prompted me to open this issue.
Thanks, Andrew
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@astoycos: Reopened this issue.
In response to this:
/reopen
/bug
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
/close
/reopen
@therealak12: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
/reopen
@astoycos: Reopened this issue.
In response to this:
/reopen
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
/close
/reopen
@astoycos: Reopened this issue.
In response to this:
/reopen
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
/close
This issue still doesn't seem to be fixed?
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
This is still an issue. What "works" as a sort of workaround is specifying the default verbatim in every +kubebuilder:default marker, for instance:
type XStrategy struct {
    // +kubebuilder:validation:Optional
    // +kubebuilder:default={"plan":{"deployment":"ScaleDownOnPromotionScaleUpOnRollout"}}
    Resources RolloutResources `json:"resources,omitempty"`
}

type RolloutResources struct {
    // +kubebuilder:validation:Optional
    // +kubebuilder:default={"deployment":"ScaleDownOnPromotionScaleUpOnRollout"}
    Plan RolloutResourcePlans `json:"plan,omitempty"`
}

type RolloutResourcePlans struct {
    // +kubebuilder:validation:Enum=ScaleDownOnPromotionScaleUpOnRollout;DeleteOnPromotionRecreateOnRollout
    // +kubebuilder:default=ScaleDownOnPromotionScaleUpOnRollout
    Deployment RolloutResourcePlanDeployment `json:"deployment,omitempty"`
}
which is error-prone and counterproductive. Field-based defaults should be enough.
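For contrast, here is a sketch of what I would like to be able to write instead, assuming nested defaults propagated the way the original report expects (the concrete default is spelled out only on the leaf field, and the parents simply opt in with an empty-object default):

type XStrategy struct {
    // +kubebuilder:validation:Optional
    // +kubebuilder:default={}
    Resources RolloutResources `json:"resources,omitempty"`
}

type RolloutResources struct {
    // +kubebuilder:validation:Optional
    // +kubebuilder:default={}
    Plan RolloutResourcePlans `json:"plan,omitempty"`
}

type RolloutResourcePlans struct {
    // The only place the concrete default would need to be written.
    // +kubebuilder:validation:Enum=ScaleDownOnPromotionScaleUpOnRollout;DeleteOnPromotionRecreateOnRollout
    // +kubebuilder:default=ScaleDownOnPromotionScaleUpOnRollout
    Deployment RolloutResourcePlanDeployment `json:"deployment,omitempty"`
}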
/reopen
@pmalek: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
/reopen
@astoycos: Reopened this issue.
In response to this:
/reopen
/remove-lifecycle rotten
This issue has been here for over two years now 😵 Does anyone plan to work on it?
I just experienced the same behavior. I've used https://github.com/kubernetes-sigs/controller-tools/issues/622#issuecomment-1690128862 as a workaround.
/lifecycle frozen
/kind bug