Error when using json.RawMessage - string vs object
Overview
Continuing from bug report #533, which was likely fixed and has since resurfaced: with controller-gen v0.6.2, you get an error when using json.RawMessage to represent unstructured fields.
(As a note, I would rather be able to use the map[string]interface{} type, which would make this bug a non-issue for me; see #636 😉)
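For reference, this is roughly the shape #636 asks for (a hypothetical sketch; controller-gen cannot generate a schema for such a field at the time of writing):

type BackendSpec struct {
    // Hypothetical: map[string]interface{} fields are not supported by
    // controller-gen's schema generation (see #636).
    DeploymentTemplate map[string]interface{} `json:"deploymentTemplate"`
}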
Repro 1 - RawMessage has string type by default
// BackendSpec defines the desired state of Backend
type BackendSpec struct {
    // The template for created deployments.
    // +kubebuilder:pruning:PreserveUnknownFields
    DeploymentTemplate json.RawMessage `json:"deploymentTemplate"`
}
Expected:
spec:
  description: BackendSpec defines the desired state of Backend
  properties:
    deploymentTemplate:
      description: The template for created deployments.
      type: object
      x-kubernetes-preserve-unknown-fields: true
Actual:
spec:
  description: BackendSpec defines the desired state of Backend
  properties:
    deploymentTemplate:
      description: The template for created deployments.
      format: byte
      type: string
      x-kubernetes-preserve-unknown-fields: true
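This output follows from the underlying Go type: json.RawMessage is defined in the standard library as a []byte, which controller-gen maps to type: string with format: byte.

// From the Go standard library's encoding/json package:
// RawMessage is a raw encoded JSON value. Because it is a []byte,
// controller-gen infers `type: string, format: byte` for it by default.
type RawMessage []byte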
Repro 2 - Explicit validation:Type=object causes error
// BackendSpec defines the desired state of Backend
type BackendSpec struct {
    // The template for created deployments.
    // +kubebuilder:pruning:PreserveUnknownFields
    // +kubebuilder:validation:Type=object
    DeploymentTemplate json.RawMessage `json:"deploymentTemplate"`
}
Expected:
spec:
  description: BackendSpec defines the desired state of Backend
  properties:
    deploymentTemplate:
      description: The template for created deployments.
      type: object
      x-kubernetes-preserve-unknown-fields: true
Actual (error):
api/v1alpha1:-: conflicting types in allOf branches in schema: string vs object

Presumably the explicit object type from the marker is combined (via allOf) with the string schema that controller-gen infers from the underlying []byte, and the two conflict.
Versions
controller-gen v0.6.2
k8s.io/apimachinery v0.22.2
k8s.io/client-go v0.22.2
sigs.k8s.io/controller-runtime v0.10.2
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Adding // +kubebuilder:validation:Schemaless worked for me:

// +kubebuilder:validation:Schemaless
// +kubebuilder:pruning:PreserveUnknownFields
// +kubebuilder:validation:Type=object
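Put together with the field from the repro above, the working combination looks like this (a minimal sketch; the doc comment and field name are taken from the repro):

// BackendSpec defines the desired state of Backend
type BackendSpec struct {
    // The template for created deployments.
    // +kubebuilder:validation:Schemaless
    // +kubebuilder:pruning:PreserveUnknownFields
    // +kubebuilder:validation:Type=object
    DeploymentTemplate json.RawMessage `json:"deploymentTemplate"`
}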
/lifecycle stale
/lifecycle rotten
/close
@k8s-triage-robot: Closing this issue.
How can this be solved? Is it a conflict between the Kubernetes version and the controller-gen version?