cluster-api
Holistic nil checking in public functions
Fuzzing through OSS-Fuzz has revealed a number of previously unchecked nil-pointer errors in Cluster API code, e.g.:
- https://github.com/kubernetes-sigs/cluster-api/pull/6334
- https://github.com/kubernetes-sigs/cluster-api/pull/7040
There are likely a few other places where consumers of Cluster API's public functions could unknowingly cause a nil-pointer dereference by not checking all parameters (and potentially receivers) for nil before calling the function.
In most cases this is a non-issue, as the nil conditions may not be possible outside of artificial cases like fuzzing, but there may be some edge-case crashes that could be prevented by improving the handling of nils in public functions.
This could be resolved in a holistic way by going through CAPI's public functions and checking whether each parameter is nil. A helper function could be used for this in some cases, but the expected behaviour for nil may differ depending on the function and parameter type, e.g. options variadics should not return an error if nil.
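As a rough sketch only (AnnotateCluster, Option, and settings below are hypothetical names invented for illustration, not existing Cluster API APIs), a nil-checked public function following this approach might look something like:

```go
package example

import (
	"errors"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
)

// Option is a hypothetical functional option for AnnotateCluster.
type Option func(*settings)

type settings struct {
	dryRun bool
}

// AnnotateCluster sketches the pattern: required pointer parameters are
// checked for nil up front and return an error, while nil entries in the
// options variadic are skipped rather than treated as an error.
func AnnotateCluster(cluster *clusterv1.Cluster, key, value string, opts ...Option) error {
	if cluster == nil {
		return errors.New("cluster must not be nil")
	}

	s := &settings{}
	for _, opt := range opts {
		if opt == nil {
			// A nil option is tolerated rather than treated as an error.
			continue
		}
		opt(s)
	}

	if s.dryRun {
		return nil
	}
	if cluster.Annotations == nil {
		cluster.Annotations = map[string]string{}
	}
	cluster.Annotations[key] = value
	return nil
}
```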
If a holistic fix isn't an option, there should be documentation/communication to consumers of Cluster API as a library that nil checking should be done before calling its functions.
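In the meantime, a caller-side guard along these lines (reusing the hypothetical AnnotateCluster above; the annotation key and value are illustrative) avoids the crash regardless of what the library does:

```go
// annotateIfPresent validates its own inputs before calling into library
// code whose nil handling isn't guaranteed.
func annotateIfPresent(cluster *clusterv1.Cluster) error {
	if cluster == nil {
		return errors.New("cannot annotate: cluster is nil")
	}
	return AnnotateCluster(cluster, "example.com/owner", "team-a")
}
```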
This issue was previously discussed here: https://github.com/kubernetes-sigs/cluster-api/pull/6334#issuecomment-1084602280
/kind cleanup
@killianmuldoon: This issue is currently awaiting triage.
If CAPI contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@killianmuldoon is there a way to break down this work into small actionable chunks, so we can start tackling it progressively and possibly spread the work across many contributors?
I'll update this issue with instructions on how to do this across packages, starting with a small group of packages that are regularly used outside CAPI.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/triage accepted
/assign
(At least to follow up and create a more detailed task breakdown)
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/remove-lifecycle rotten
Still got this on my to-do list :slightly_frowning_face:
/reopen
@killianmuldoon: Reopened this issue.
/close
I've not gotten to this, and it hasn't picked up much traction.
@killianmuldoon: Closing this issue.