operator-sdk
Go CSV Markers to populate specDescriptors for "owned" CRDs from another module
Feature Request
A way to generate specDescriptors and statusDescriptors into the CSV, using markers, from a CRD whose _types.go lives in an external module (i.e., a different go.mod).
Describe the problem you need a feature to resolve.
We (oadp-operator) deploy CRDs that come from a different Go module, github.com/vmware-tanzu/velero/. Our Go module is github.com/openshift/oadp-operator.
This means we are not able to add operator-sdk Go markers to the _types.go files, since the github.com/vmware-tanzu/velero/ project is not operator-sdk specific.
I have an issue when ...
make bundle does not have a way to populate specDescriptors into the CSV from an out-of-module CRD. Defining specDescriptors manually also gets overwritten if I'm not mistaken.
Use case
So we can specify which x-descriptors apply to each field, to customize the UI for fields defined in an external, non-operator-sdk-specific _types.go:
https://github.com/openshift/console/blob/master/frontend/packages/operator-lifecycle-manager/src/components/descriptors/reference/reference.md
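For comparison, when the type lives in our own module, an x-descriptor can be attached with the usual CSV markers. A minimal illustrative sketch (the field and descriptor URN are just placeholders):

// illustrative only: a spec field defined in our own module's _types.go
package v1alpha1

type MyAppSpec struct {
	// Size is the number of replicas to run. The marker attaches a display name
	// and an x-descriptor so the console renders the field with a pod-count widget.
	//+operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Size",xDescriptors={"urn:alm:descriptor:com.tectonic.ui:podCount"}
	Size int32 `json:"size"`
}

There is no equivalent place to put such a marker when the field's type is declared in another Go module.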
Describe the solution you'd like.
A way to define/generate specDescriptors into the CSV dynamically from an out-of-module _types.go or CRD. Perhaps by defining an imported_types.go like the following:
// file: imported_types.go
package v1alpha1

import (
	velerov1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// The goal is for make bundle to generate this "owned" CRD entry in the CSV:
//
//   spec:
//     apiservicedefinitions: {}
//     customresourcedefinitions:
//       owned:
//       - description: Backup is a Velero resource that represents the capture of Kubernetes
//           cluster state at a point in time (API objects and associated volume state).
//         displayName: Backup
//         kind: Backup
//         name: backups.velero.io
//         statusDescriptors:
//         - description: CompletionTimestamp records the time a backup was completed.
//             Completion time is recorded even on failed backups. Completion time is recorded
//             before uploading the backup object. The server's time is used for CompletionTimestamps.
//           displayName: CompletionTimestamp
//           path: completionTimestamp

//+operator-sdk:csv:customresourcedefinitions:externalModule=true,displayName="Backup",kind=Backup,name=backups.velero.io
// Backup is a Velero resource that represents the capture of Kubernetes cluster state at a point in time (API objects and associated volume state).
type importVeleroBackup velerov1.Backup

//+operator-sdk:csv:customresourcedefinitions:externalModule=true,displayName="CompletionTimestamp",path=backups.velero.io/completionTimestamp
// CompletionTimestamp records the time a backup was completed.
// Completion time is recorded even on failed backups. Completion time is recorded
// before uploading the backup object. The server's time is used for CompletionTimestamps.
type CompletionTimestamp *metav1.Time // corresponds to the completionTimestamp status field of Backup
/language go
Defining specDescriptors manually also gets overwritten if I'm not mistaken.
At a minimum, if this is the case, I would consider it a bug. Would you mind building a reproducer and filing a bug issue for that?
I can add one to the testdata dir, perhaps.
Defining specDescriptors manually also gets overwritten if I'm not mistaken
@kaovilai Are you modifying the descriptions inside the bundle itself, or inside the config/ dir (where the CRD bases are present)? The bundle gets overwritten, unless it is explicitly set to false through the make bundle command.
The issue with having markers on the importVeleroBackup type is that the descriptions would not be propagated to any of the specific fields that velerov1.Backup has, as there is no way for operator-sdk to know it. Additionally, the process of wrapping the Backup API with importVeleroBackup seems confusing - it looks like you are trying to use the same definition, but define a separate API for your operator.
Inside config/, without it getting overwritten by manifest generation.
as there is no way for operator-sdk to know it.
currently. I think you can make it possible for operator-sdk to get that info from somewhere.
In addition to completely out-of-module CRDs, our structs also include fields whose types are imported from other modules.
It would be great to be able to set x-descriptors for otherField.nestedField.nestedField in the example below.
import "github.com/someotherthing/apis/v1"
struct MyOwnKind type {
ourField string
otherField v1.something
}
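Concretely, a minimal sketch of what markers can and cannot reach today, assuming MyOwnKindSpec is the spec struct for the kind above (names and the import path are placeholders; marker syntax as in the operator-sdk CSV marker docs):

package v1alpha1

import v1 "github.com/someotherthing/apis/v1" // placeholder external module

type MyOwnKindSpec struct {
	// Declared in our module, so a marker can describe it in the CSV.
	//+operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Our Field"
	OurField string `json:"ourField"`

	// A marker here only covers the top-level "otherField" path; the nested
	// fields inside v1.Something are declared in the external module, so there
	// is nowhere in this repo to attach markers for otherField.nestedField.
	//+operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Other Field"
	OtherField v1.Something `json:"otherField"`
}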
I am proposing a new flag to indicate an external module:
//+operator-sdk:csv:customresourcedefinitions:externalModule=true
You could also add a flag for creating a relationship between them, to link them together.
currently. I think you can make it possible for operator-sdk to get that info from somewhere.
How would operator-sdk know which field on that struct the marker is applied to? Markers in Go, generally speaking, have to be on the field that is to be marked, not some other random/nested field.
Since the dependent module is open source, can't the descriptors be added there? Or in a fork? That seems like the most logical way to go.
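For illustration, that would mean carrying the markers in the dependency's own _types.go (or in a fork of it). A rough sketch, hypothetical and trimmed to a single status field:

// hypothetical: in github.com/vmware-tanzu/velero (or an openshift/velero fork),
// e.g. pkg/apis/velero/v1/backup_types.go
package v1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

type BackupStatus struct {
	// CompletionTimestamp records the time a backup was completed.
	//+operator-sdk:csv:customresourcedefinitions:type=status,displayName="CompletionTimestamp"
	CompletionTimestamp *metav1.Time `json:"completionTimestamp,omitempty"`
}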
Since the dependent module is open source
OK, even if we were able to add markers to the x_types.go in the other module, I doubt that the markers would have any effect on the CSV of this module, since the only thing we're doing is copying over the CRD, not the .go files.
since the only thing we're doing is copying over the CRD, not the .go files.
To be clear, you do not reference the dependent operator in your go.mod file? In your code sample above, it seemed that you had a code dependency on the dependent project, which I would think would be the ideal way to go.
We have a code dependency in our struct and it is in go.mod; worst case we can do an anonymous import. But I wonder about the cases where some other operator is a workload installer and installs CRDs created from a project outside the operator module.
We have a code dependency in our struct and it is in go.mod; worst case we can do an anonymous import. But I wonder about the cases where some other operator is a workload installer and installs CRDs created from a project outside the operator module.
This seems like a really strange use case for an operator to me. OLM's job is to install operators and manage the CRD life-cycle; it seems less than ideal to have an operator install another operator and manage the dependent CRDs. OLM provides a dependency.yaml for this very purpose.
It's an operator installing another workload that was not designed with operator-sdk or OLM in mind.
This is our operator repo https://github.com/openshift/oadp-operator
Our slack https://redhat.enterprise.slack.com/archives/C0144ECKUJ0
Workload for which we have maintainers and an OpenShift fork (not an operator):
- Upstream: https://github.com/vmware-tanzu/velero/
- Upstream openshift fork: https://github.com/openshift/velero
We rebase often against upstream. Are you suggesting that in our fork we change the module name, etc., and annotate the _types.go?
We want to be "upstream first", and this upstream specifically does not often incorporate "OpenShift-specific quirks", including defaulting to operator-framework for operator packaging and discovery.
Sounds like you're suggesting we create a minimal Velero operator that oadp-operator can then depend on.
It's a very hackish approach, as the upstream repo is not in a kubebuilder/v3 structure, and it will probably involve copying CRDs around into config/, etc.
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale