operator-sdk
operator-sdk generate kustomize manifests fails with "no spec found for type"
Bug Report
What did you do?
I'm trying to move an operator to use operator-sdk and publish bundles.
What did you expect to see?
I expected bundle creation to succeed and the kustomize manifests to be generated. Instead it fails with:

```
FATA[0021] Error generating kustomize files: error getting ClusterServiceVersion base: error generating ClusterServiceVersion definitions metadata: no spec found for type SecurityProfileNodeStatus
```
It should be noted that the type intentionally has neither a spec nor a conventional status sub-object; it is more akin to a ConfigMap. See the upstream object definition for more details.
Creating a dummy spec like this:
```diff
diff --git a/api/secprofnodestatus/v1alpha1/secprofnodestatus_types.go b/api/secprofnodestatus/v1alpha1/secprofnodestatus_types.go
index 99b91cd2..3fe45baa 100644
--- a/api/secprofnodestatus/v1alpha1/secprofnodestatus_types.go
+++ b/api/secprofnodestatus/v1alpha1/secprofnodestatus_types.go
@@ -84,10 +84,14 @@ type SecurityProfileNodeStatus struct {
 	metav1.TypeMeta   `json:",inline"`
 	metav1.ObjectMeta `json:"metadata,omitempty"`
 
+	Spec SecurityProfileNodeStatusSpec `json:"spec,omitempty"`
+
 	NodeName string       `json:"nodeName"`
 	Status   ProfileState `json:"status,omitempty"`
 }
 
+type SecurityProfileNodeStatusSpec struct{}
+
```
Seems to work around the issue.
What did you see instead? Under which circumstances?
Bundle creation fails with the FATA[0021] error quoted above; no manifests are generated.
Environment
Operator type:
/language go
Kubernetes cluster type:
$ operator-sdk version
operator-sdk version: "v1.17.0", commit: "704b02a9ba86e85f43edb1b20457859e9eedc6e6", kubernetes version: "1.21", go version: "go1.17.6", GOOS: "linux", GOARCH: "amd64"
$ go version
go version go1.17.6 linux/amd64
$ kubectl version
Client Version: 4.9.18
Server Version: 4.9.18
Kubernetes Version: v1.22.3+e790d7f
Possible Solution
Additional context
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
/remove-lifecycle stale
I ran into this as well. It seems that objects with a Status require a Spec as well. I had other definitions with no Spec that didn't trigger the error, but they didn't have a Status either.
When a Custom Resource has neither a Spec nor a Status section, `operator-sdk generate kustomize manifests` fails with the same error. What is the fix for this?
This is a problem for custom ComponentConfigs, which upstream defines without status or spec fields. It is possible to remove the entry from the PROJECT file to get generation going again, but since upstream does not add these fields, I don't think that is a good general solution.
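For reference, the PROJECT entry that would be removed looks roughly like this. This is an illustrative sketch modeled on a typical Kubebuilder-style PROJECT file; the domain, group, and module path shown are assumptions, not taken from the reporter's repository:

```yaml
# Illustrative PROJECT resource entry; deleting this block makes
# `operator-sdk generate kustomize manifests` skip the type.
resources:
- api:
    crdVersion: v1
    namespaced: true
  controller: true
  domain: x-k8s.io
  group: security-profiles-operator
  kind: SecurityProfileNodeStatus
  path: sigs.k8s.io/security-profiles-operator/api/secprofnodestatus/v1alpha1
  version: v1alpha1
```

As noted above, this only hides the type from the generator; it does not resolve the underlying "no spec found" check.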
/unassign jmrodri
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten /remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.