cluster-api-provider-ibmcloud
EPIC - Introduce new API Specs and infra creation flow
/kind feature
/area provider/ibmcloud
Describe the solution you'd like
This epic covers all the task items involved in completing the entire PowerVS infra creation via CAPIBM.
Completed
- [x] #1485
- [x] #1488
- [x] #1592
- [x] #1605
- [x] #1599
- [x] #1600
TO-DO
- [x] #1451
- [x] #1607
- [x] #1647
/assign @Karthik-K-N @Amulyam24 @Prajyot-Parab
Some of the review comments that need to be addressed:
- [x] Have proper resource state handling and avoid returning errors (see the requeue sketch after this list). PR: https://github.com/kubernetes-sigs/cluster-api-provider-ibmcloud/pull/1693
- [ ] Revisit log messages
- [ ] reconcileNetwork - reconcile DHCP server and network separately
- [ ] Check how to test in the staging environment, since its machine types are different
- [ ] Fix the gocyclo complexity and avoid //nolint
- [ ] Reconcile VPC - separate the security group reconciliation from it and handle it outside (see the split-reconcile sketch after this list)
- [ ] Check how to use and handle an existing transit gateway (TG) and its connections
- [ ] Revisit when the IBMPowerVSMachine should be marked ready: after the machine IP is added to the LB and all operations are done, or before that
- [ ] Rewrite LB pool member addition -- revisit the logic, it looks complicated (see the pool-member sketch after this list)
- [ ] Optimise checking and setting status for resources
- [ ] Have defined types for recurring errors (see the typed-errors sketch after this list)
- [ ] Validate a case when invalid ID is passed while getting a resource — check how API behaves
- [ ] Optimise LB pool member creation
- [ ] Add a check before importing or using IBMPowerVSImage and execute only if IBMPowerVSCluster is ready
- [ ] Decide how to handle a machine with multiple internal IPs
- [ ] Revisit adding/setting/updating listeners in LB creation
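
Requeue sketch: the resource state handling item is roughly about mapping a resource's provisioning state to a reconcile outcome instead of returning an error while the resource is simply not ready yet. A minimal sketch of that pattern, assuming a controller-runtime reconciler; the state names and the reconcileServiceInstance helper are illustrative, not the provider's actual code (see PR #1693 for the real change):

```go
// Sketch only: map a hypothetical provisioning state to a reconcile outcome
// instead of returning an error while the resource is still being created.
package controllers

import (
	"fmt"
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
)

type instanceState string

const (
	stateBuilding instanceState = "building" // still provisioning
	stateActive   instanceState = "active"   // ready for use
	stateFailed   instanceState = "failed"   // unrecoverable
)

// reconcileServiceInstance is an illustrative helper, not the provider's API.
func reconcileServiceInstance(state instanceState) (ctrl.Result, error) {
	switch state {
	case stateActive:
		// Resource is ready; continue with the rest of the reconcile.
		return ctrl.Result{}, nil
	case stateBuilding:
		// Not an error: requeue and check the state again later.
		return ctrl.Result{RequeueAfter: 30 * time.Second}, nil
	default:
		// Genuine failure: surface it so the condition is marked false.
		return ctrl.Result{}, fmt.Errorf("service instance is in unexpected state %q", state)
	}
}
```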
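Split-reconcile sketch: for pulling the security group out of reconcileVPC (and, similarly, the DHCP server out of reconcileNetwork), the intent is that each reconcile step owns exactly one resource. A rough sketch of the split call sites; the clusterScope interface and method names are assumptions for illustration, not the existing scope API:

```go
// Sketch only: each resource gets its own reconcile step instead of being
// created as a side effect of another step.
package controllers

import "fmt"

type clusterScope interface {
	ReconcileVPC() error            // VPC only
	ReconcileSecurityGroups() error // handled outside ReconcileVPC
	ReconcileDHCPServer() error     // split out of network reconciliation
	ReconcileNetwork() error
}

// reconcileInfra runs the steps in order and stops at the first failure, so a
// security-group problem no longer masks the VPC state (and vice versa).
func reconcileInfra(scope clusterScope) error {
	steps := []struct {
		name string
		run  func() error
	}{
		{"vpc", scope.ReconcileVPC},
		{"security groups", scope.ReconcileSecurityGroups},
		{"dhcp server", scope.ReconcileDHCPServer},
		{"network", scope.ReconcileNetwork},
	}
	for _, s := range steps {
		if err := s.run(); err != nil {
			return fmt.Errorf("reconciling %s: %w", s.name, err)
		}
	}
	return nil
}
```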
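Pool-member sketch: one way to simplify LB pool member addition is to list the pool's existing members and create the machine's member only when it is missing, which also keeps the step idempotent. The poolAPI interface and poolMember type below are stand-ins, not the IBM Cloud VPC SDK:

```go
// Sketch only: idempotent LB pool member registration against a hypothetical client.
package controllers

// poolMember is a simplified stand-in for an LB pool member.
type poolMember struct {
	Address string
	Port    int64
}

// poolAPI is a hypothetical abstraction over the load balancer API.
type poolAPI interface {
	ListMembers(poolID string) ([]poolMember, error)
	CreateMember(poolID string, m poolMember) error
}

// ensurePoolMember adds the machine IP to the pool only if it is missing.
func ensurePoolMember(api poolAPI, poolID, machineIP string, port int64) (created bool, err error) {
	members, err := api.ListMembers(poolID)
	if err != nil {
		return false, err
	}
	for _, m := range members {
		if m.Address == machineIP && m.Port == port {
			// Already registered; nothing to do.
			return false, nil
		}
	}
	if err := api.CreateMember(poolID, poolMember{Address: machineIP, Port: port}); err != nil {
		return false, err
	}
	return true, nil
}
```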
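Typed-errors sketch: for defined types for recurring errors, the usual Go approach is sentinel errors and small error structs that callers branch on with errors.Is / errors.As instead of matching strings. A sketch with hypothetical names, not the provider's current error set:

```go
// Sketch only: defined error types for conditions that recur across reconcilers.
package infra

import (
	"errors"
	"fmt"
)

// ErrResourceNotReady is a hypothetical sentinel for "created but still provisioning".
var ErrResourceNotReady = errors.New("resource is not yet ready")

// ResourceNotFoundError is a hypothetical typed error carrying kind and ID.
type ResourceNotFoundError struct {
	Kind string
	ID   string
}

func (e *ResourceNotFoundError) Error() string {
	return fmt.Sprintf("%s %q not found", e.Kind, e.ID)
}

// handleLookupError shows how a caller branches on the defined types instead of
// parsing error strings.
func handleLookupError(err error) (recreate bool, requeue bool, fatal error) {
	var notFound *ResourceNotFoundError
	switch {
	case err == nil:
		return false, false, nil
	case errors.As(err, &notFound):
		// Resource is gone: recreate it on the next pass.
		return true, false, nil
	case errors.Is(err, ErrResourceNotReady):
		// Still provisioning: requeue rather than report a failure.
		return false, true, nil
	default:
		return false, false, err
	}
}
```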
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale