cluster-api-provider-gcp
Implement Conditions
/kind feature
Describe the solution you'd like
Add Conditions to GCPCluster.Status and GCPMachine.Status
The CAPI ecosystem implemented Conditions to show cluster status at a glance. It would be great to add conditions to GCPCluster and GCPMachine to help users better understand the status of each object.
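For illustration, a minimal sketch of what the status change could look like (assuming the v1beta1 CAPI types; the Ready field is shown only for context):

```go
import clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"

// GCPMachineStatus defines the observed state of GCPMachine.
type GCPMachineStatus struct {
	// Ready denotes that the machine infrastructure is provisioned
	// (existing field, shown for context).
	// +optional
	Ready bool `json:"ready"`

	// Conditions defines the current service state of the GCPMachine.
	// +optional
	Conditions clusterv1.Conditions `json:"conditions,omitempty"`
}
```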
Tasks
- https://github.com/kubernetes-sigs/cluster-api-provider-gcp/issues/577
- https://github.com/kubernetes-sigs/cluster-api-provider-gcp/issues/576
These tasks can be done in parallel. Feel free to pick one or the other.
Related docs
- k8s API conventions: https://github.com/kubernetes/community/blob/a2cdce5/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties
- CAEP: https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/proposals/20200506-conditions.md
Anything else you would like to add: Other providers have implemented conditions, so take a look at CAPA or CAPZ for reference.
Hi @pydctw, I would like to work on this. I did some research using some of the links you provided and by looking at CAPZ, and here is what I understand so far:
- A few new types would need to be introduced, like CAPZ does here (a rough sketch follows this list): https://github.com/kubernetes-sigs/cluster-api-provider-azure/blob/82f169f652ecf26dee9f789b17bbd86776a02f88/api/v1beta1/conditions_consts.go#L23
- The current GCPMachine status would need to be updated to include a clusterv1.Conditions field
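For example, a conditions_consts.go sketch loosely modeled on the CAPZ file linked above (the condition type and reason names are hypothetical, not necessarily the ones CAPG would adopt):

```go
package v1beta1

import clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"

const (
	// InstanceReadyCondition reports on the readiness of the GCE instance
	// (hypothetical name).
	InstanceReadyCondition clusterv1.ConditionType = "InstanceReady"

	// InstanceNotReadyReason used when the instance is still provisioning
	// (hypothetical name).
	InstanceNotReadyReason = "InstanceNotReady"
	// InstanceStoppedReason used when the instance is stopped or terminated
	// (hypothetical name).
	InstanceStoppedReason = "InstanceStopped"
)
```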
Here's what I currently don't understand:
- The exact types of conditions: is there some kind of documentation for the possible states of a GCP machine?
- How I'd implement this in the reconciliation loop of the GCP machine controller? (I suppose I should take a look at CAPA.)
I'm probably missing a lot more, but I hope this is sufficient information to get started.
Hi @s1ntaxe770r, thanks for your interest in the issue. Very happy to hear that.
You are on the correct path - providers usually create a file called conditions_consts.go to define the condition types, and a clusterv1.Conditions field needs to be added to the GCPMachine status.
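In addition, for the cluster-api conditions utilities to work with the object, GCPMachine needs to implement the conditions getter/setter interface; a minimal sketch:

```go
// GetConditions returns the observations of the operational state of the GCPMachine.
func (m *GCPMachine) GetConditions() clusterv1.Conditions {
	return m.Status.Conditions
}

// SetConditions sets the underlying service state of the GCPMachine.
func (m *GCPMachine) SetConditions(conditions clusterv1.Conditions) {
	m.Status.Conditions = conditions
}
```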
The exact types of conditions: is there some kind of documentation for the possible states of a GCP machine?
Each provider has slightly different conditions. For GCP, take a look at what's returned from the GCP instance get call (a sketch mapping the instance status to a condition follows the links below):
- https://github.com/kubernetes-sigs/cluster-api-provider-gcp/blob/main/cloud/services/compute/instances/reconcile.go#L130
- https://pkg.go.dev/google.golang.org/api/compute/v1#Instance - Instance has a Status field
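A hedged sketch of how the GCE instance status could map to a condition, reusing the hypothetical condition and reason names from the consts sketch earlier in the thread:

```go
import (
	"google.golang.org/api/compute/v1"

	infrav1 "sigs.k8s.io/cluster-api-provider-gcp/api/v1beta1"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/cluster-api/util/conditions"
)

// setInstanceReadyCondition is a hypothetical helper that translates the GCE
// instance status into the InstanceReady condition on the GCPMachine.
func setInstanceReadyCondition(gcpMachine *infrav1.GCPMachine, instance *compute.Instance) {
	switch instance.Status {
	case "RUNNING":
		conditions.MarkTrue(gcpMachine, infrav1.InstanceReadyCondition)
	case "PROVISIONING", "STAGING":
		conditions.MarkFalse(gcpMachine, infrav1.InstanceReadyCondition,
			infrav1.InstanceNotReadyReason, clusterv1.ConditionSeverityWarning,
			"instance is %s", instance.Status)
	default: // STOPPING, SUSPENDED, TERMINATED, ...
		conditions.MarkFalse(gcpMachine, infrav1.InstanceReadyCondition,
			infrav1.InstanceStoppedReason, clusterv1.ConditionSeverityError,
			"instance is %s", instance.Status)
	}
}
```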
How I'd implement this in the reconciliation loop of the GCP machine controller? (I suppose I should take a look at CAPA.)
Yes, reading other providers' code will definitely give you a good idea, as infraMachine controllers have a similar structure.
I would suggest reading awsmachine_controller and investigating how it handles conditions.
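A rough sketch of that pattern, assuming a patchHelper created earlier in Reconcile (as CAPA's awsmachine_controller does), with imports from sigs.k8s.io/cluster-api/util/conditions, sigs.k8s.io/cluster-api/util/patch, and k8s.io/apimachinery/pkg/util/errors; the condition names are the hypothetical ones from the consts sketch above:

```go
// Inside Reconcile: conditions are set throughout the reconcile, then
// persisted once in a deferred patch; SetSummary rolls the individual
// conditions up into the top-level Ready condition.
defer func() {
	conditions.SetSummary(gcpMachine,
		conditions.WithConditions(infrav1.InstanceReadyCondition),
	)
	if err := patchHelper.Patch(ctx, gcpMachine,
		patch.WithOwnedConditions{Conditions: []clusterv1.ConditionType{
			clusterv1.ReadyCondition,
			infrav1.InstanceReadyCondition,
		}},
	); err != nil {
		reterr = kerrors.NewAggregate([]error{reterr, err})
	}
}()
```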
Hey @pydctw, thanks for pointing me in the right direction. I've looked through the links and have a few more questions:
- I noticed CAPA and CAPZ have a concept of "Reasons" (example over here); would this be necessary for GCP machines?
- Is there any benefit in patching the status of the machine as seen here vs. fetching the status of the machine by calling instances.get()? (See the sketch after this list.)
- Is this a good time to split this issue into two parts? I would like to start with GCPMachine first.
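On the patching question: instances.get() reads the current state from GCP, while the patch helper persists the computed status (including conditions) back to the Kubernetes API server, so the two are complementary rather than alternatives. A minimal sketch of the helper, assuming a controller whose client is named r.Client:

```go
import (
	"sigs.k8s.io/cluster-api/util/patch"
	ctrl "sigs.k8s.io/controller-runtime"
)

// Snapshot the object before mutating it.
patchHelper, err := patch.NewHelper(gcpMachine, r.Client)
if err != nil {
	return ctrl.Result{}, err
}
// ...reconcile and set conditions on gcpMachine...
// Persist only the diff (spec, status, and conditions) back to the API server.
if err := patchHelper.Patch(ctx, gcpMachine); err != nil {
	return ctrl.Result{}, err
}
```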
Created two sub-issues. Let's continue the discussion for GCPMachine conditions in https://github.com/kubernetes-sigs/cluster-api-provider-gcp/issues/576
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.