Add Additional Print Columns to DB Instance and DB Cluster CRDs
@jaypipes here's what I've got so far working locally with the DBInstance CRD:
$ oc -n my-namespace get dbinstance
NAME READY STATUS ENGINE ENGINE-VER CLASS
my-db-instance True available oracle-ee 19.0.0.0.ru-2022-01.rur-2022-01.r1 db.t3.medium
Here's the yaml insert for the v1alpha1 spec.version:
additionalPrinterColumns:
- description: The state of the custom resource
  jsonPath: .status.conditions[?(@.type=="ACK.ResourceSynced")].status
  name: READY
  # priority: 0
  type: string
- description: The AWS status of the custom resource
  jsonPath: .status.dbInstanceStatus
  name: STATUS
  # priority: 0
  type: string
- description: The AWS engine of the custom resource
  jsonPath: .spec.engine
  name: ENGINE
  # priority: 0
  type: string
- description: The AWS engine version of the custom resource
  jsonPath: .spec.engineVersion
  name: ENGINE-VER
  # priority: 0
  type: string
- description: The AWS instance class of the custom resource
  jsonPath: .spec.dbInstanceClass
  name: CLASS
  # priority: 0
  type: string
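For anyone unfamiliar with the filter selector in the READY column's jsonPath, here's a rough Python sketch of what `.status.conditions[?(@.type=="ACK.ResourceSynced")].status` evaluates to. The example data is hypothetical, not pulled from a live cluster, and this is just an illustration of the selector's semantics, not the apiserver's actual implementation:

```python
# Hypothetical .status block of a DBInstance custom resource
status_block = {
    "dbInstanceStatus": "available",
    "conditions": [
        {"type": "ACK.ResourceSynced", "status": "True"},
        {"type": "ACK.Terminal", "status": "False"},
    ],
}

def condition_status(status, cond_type):
    # Mimic the JSONPath filter: select the condition whose "type"
    # matches, then project its "status" field.
    for cond in status.get("conditions", []):
        if cond.get("type") == cond_type:
            return cond.get("status")
    return None

print(condition_status(status_block, "ACK.ResourceSynced"))  # prints: True
```

So the READY column simply surfaces the `status` string ("True"/"False") of the ACK.ResourceSynced condition.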
I'll need to get the ACK generator working so I can test the syntax for the fields block, and then I'll open a pull request. Does this look OK to you?
@urton Yeah, that looks slick to me, thank you! @RedbackThomson @vijtrip2 what do you think?
Looks good to me as well! Taking feedback from Bruce (SDE, RDS team) would be helpful too.
I couldn't help but notice some other fields that could be helpful to include as additional columns too:
- AllocatedStorage
- AvailabilityZone
- DBClusterIdentifier
- DBInstanceIdentifier
@jaypipes sounds good. @vijtrip2 I actually meant to add AllocatedStorage. Thanks for the reminder. Should I make those 4 additional fields priority 1?
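For context on priority: in a CRD's additionalPrinterColumns, columns with priority greater than 0 are hidden from the default view and only shown with -o wide, so marking those four fields priority 1 would keep the default output compact. A hypothetical entry might look like this (the jsonPath is illustrative, verify it against the actual CRD schema):

```yaml
- description: The AWS availability zone of the custom resource
  jsonPath: .spec.availabilityZone   # illustrative path; check the real CRD
  name: AZ
  priority: 1   # shown only with -o wide
  type: string
```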
jsonPath: .status.conditions[?(@.type=="ACK.ResourceSynced")].status
Interesting, I've never seen JSONPath filter selectors used inside additional printer columns. This could be a good default addition to all of our resources?
One nit I have is that we should use the full name ENGINE-VERSION, rather than just ENGINE-VER.
One other naive suggestion, based on what the console shows in condensed form: Endpoint?
@RedbackThomson thanks for the feedback. I actually got the idea for the READY column from the Strimzi operator which we use extensively. All the CRs in that project have the READY column printed out along with other common configs.
$ oc -n my-namespace get dbinstances.rds.services.k8s.aws
NAME READY STATUS ENGINE ENGINE-VERSION CLASS STORAGE
my-db-instance True available oracle-ee 19.0.0.0.ru-2022-01.rur-2022-01.r1 db.t3.medium 105
$ oc -n my-namespace get dbinstances.rds.services.k8s.aws -o wide
NAME READY STATUS ENGINE ENGINE-VERSION CLASS STORAGE AZ IDENTIFIER ENDPOINT
my-db-instance True available oracle-ee 19.0.0.0.ru-2022-01.rur-2022-01.r1 db.t3.medium 105 us-west-1b my-db-instance my-db-instance.xxxxxx.us-west-1.rds.amazonaws.com
Note: My DBInstance CR name matches my AWS DB instance identifier in this example
That's pretty cool and would be very useful for getting a quick big picture! One small question: suppose the instance is doing some updating (like changing the engine version/instance class, etc.). Will this show the last known state, or maybe waiting/unsynced?
One small question: suppose the instance is doing some updating (like changing the engine version/instance class, etc.). Will this show the last known state, or maybe waiting/unsynced?

It's going to show whatever is found in .status.dbInstanceStatus, i.e. the last status the controller observed from RDS.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/close
@ack-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Provide feedback via https://github.com/aws-controllers-k8s/community. /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.