cluster-api-provider-openstack
Transform `Profile` into an interface
/kind feature
Right now, when creating a Neutron port with specific binding capabilities (e.g. switchdev for OVS Hardware Offload), the syntax would be:
profile:
  capabilities: '[''switchdev'']'
Because in CAPO the field is currently: https://github.com/kubernetes-sigs/cluster-api-provider-openstack/blob/ae4101718f0ab5e949e74dd692b55e83b55213e9/api/v1beta1/types.go#L134
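For reference, the field at that line (quoted again later in this thread) is:

Profile map[string]string `json:"profile,omitempty"`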
While in Gophercloud, it's an interface:
Profile map[string]interface{} `json:"binding:profile,omitempty"`
https://github.com/gophercloud/gophercloud/blob/4885c347dcf48fc518798d37fb93ffa7bdaa43e9/openstack/networking/v2/extensions/portsbinding/requests.go#L23
Providing the list of capabilities via a string isn't clean.
Describe the solution you'd like
Convert Profile to Profile map[string]interface{}, so we'll be able to do:
profile:
  capabilities:
    - switchdev
Note: this feature is difficult to test against a real OpenStack cloud; for example, switchdev requires specific hardware (Mellanox ConnectX) for OVS HW offload. When implementing this change, we should also add functional testing with a basic profile that works with the current devstack setup.
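For context, here is a rough sketch of how a binding profile can be passed to Neutron through Gophercloud's portsbinding extension. This is illustrative only, not the actual CAPO code; the buildCreateOpts helper and the network ID are made up, but it shows why the map[string]interface{} type on the Gophercloud side can carry a real list of capabilities instead of a pre-serialized string:

package main

import (
	"fmt"

	"github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/portsbinding"
	"github.com/gophercloud/gophercloud/openstack/networking/v2/ports"
)

// buildCreateOpts is a hypothetical helper: it wraps plain port options with
// the portsbinding extension so that binding:profile is sent to Neutron.
func buildCreateOpts(networkID string) ports.CreateOptsBuilder {
	return portsbinding.CreateOptsExt{
		CreateOptsBuilder: ports.CreateOpts{NetworkID: networkID},
		// Gophercloud accepts an arbitrary map here, so a real list of
		// capabilities can be passed rather than a quoted string.
		Profile: map[string]interface{}{
			"capabilities": []string{"switchdev"},
		},
	}
}

func main() {
	// Render the request body that would be sent to Neutron.
	body, err := buildCreateOpts("example-network-id").ToPortCreateMap()
	if err != nil {
		panic(err)
	}
	fmt.Println(body)
}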
The problem here is that 'interface' is only the Go binding. What we're really defining here is the API, and that requires a concrete type.
I think if we want to give the members of the profile map more user-friendly types we're going to have to define them in the API. Perhaps that's no bad thing? Profile is basically just being used to proxy an OpenStack API right now, which isn't great. Perhaps we should just define which parts of it we want to support and put them in a struct?
@EmilienM the second occurrence of this needs to be updated :) I got a little bit confused at the beginning
Profile map[string]string `json:"profile,omitempty"` ==> string to interface
I agree with @mdbooth that adding an interface into the API might not be a good thing, so if we can make it a struct, since we know what's supported from the CAPI side, we should do that.
I think map[string]interface{} won't even work / will break kubebuilder and the deepcopy generation.
If we change it to a typed struct, e.g.:
type PortOpts struct {
	...
	Profile *PortProfile `json:"profile,omitempty"`
	...
}

...

type PortProfile struct {
	Capabilities []string `json:"capabilities,omitempty"`
}
We would have a breaking change because we don't know whether anyone has set something here other than the capabilities key.
I'm fine with that but we have to keep that in mind and have it marked as a breaking change. Adding other fields may be easy afterwards.
Is there some kind of list of what the Profile supports as keys?
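To make the struct idea more concrete, here is a minimal sketch of how a typed profile could be translated back into the untyped map that Gophercloud expects for binding:profile. The PortProfile shape comes from the snippet above; toBindingProfile is just an illustrative name, not an agreed API:

// PortProfile mirrors the typed struct proposed above; only the keys we
// choose to support are exposed to users.
type PortProfile struct {
	Capabilities []string `json:"capabilities,omitempty"`
}

// toBindingProfile turns the typed profile into the untyped map Gophercloud
// expects. The function name and placement are illustrative only.
func toBindingProfile(p *PortProfile) map[string]interface{} {
	if p == nil {
		return nil
	}
	profile := map[string]interface{}{}
	if len(p.Capabilities) > 0 {
		profile["capabilities"] = p.Capabilities
	}
	return profile
}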
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
@mnaser: You can't reopen an issue/PR unless you authored it or you are a collaborator.
/reopen
@jichenjc: Reopened this issue.
@mnaser I assume you plan to work on this :), so I reopened it. Let me know if I made the wrong assumption.
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
I think we should fix this before v1beta1.
/remove-lifecycle rotten
@EmilienM Looking at this now because I want to clean it up before declaring our API v1beta1. Just reading the API description:
A dictionary that enables the application running on the specific host to pass and receive vif port information specific to the networking back-end. This field is only meant for machine-machine communication for compute services like Nova, Ironic or Zun to pass information to a Neutron back-end. It should not be used by multiple services concurrently or by cloud end users. The existing counterexamples (capabilities: [switchdev] for Open vSwitch hardware offload and trusted=true for Trusted Virtual Functions) are due to be cleaned up. The networking API does not define a specific format of this field. The default is an empty dictionary. If you update it with null then it is treated like {} in the response. Since the port-mac-address-override extension the device_mac_address field of the binding:profile can be used to provide the MAC address of the physical device a direct-physical port is being bound to. If provided, then the mac_address field of the port resource will be updated to the MAC from the active binding.
This is saying:
- Users (i.e. CAPO) should not be using this API at all
- ...except for switchdev and trusted=true, which will also go away
Given that it doesn't look like the developers want us to expose this feature, how about we instead add explicit support for the two 'sanctioned' use cases? We could add these as explicit fields:
port:
  ovsHWOffload: true
  trustedVF: true
or perhaps a list with only certain values allowed:
port:
  profile:
    - ovsHWOffload
    - trustedVF
In either case this would result in the correct binding profile being generated, but it would not be possible to use the API in other ways. Thoughts?
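For illustration, a rough sketch of the binding profile that could be generated from such explicit fields, based only on the two sanctioned values in the Neutron description quoted above (capabilities: [switchdev] and trusted=true). The type and function names here are placeholders, not a proposed API:

// PortProfileOpts sketches the explicit-fields option; names are illustrative.
type PortProfileOpts struct {
	OVSHWOffload bool `json:"ovsHWOffload,omitempty"`
	TrustedVF    bool `json:"trustedVF,omitempty"`
}

// bindingProfile builds the binding:profile map from the explicit fields,
// using only the two documented values. Anything outside these two use cases
// is simply not expressible through this API.
func bindingProfile(o PortProfileOpts) map[string]interface{} {
	profile := map[string]interface{}{}
	if o.OVSHWOffload {
		profile["capabilities"] = []string{"switchdev"}
	}
	if o.TrustedVF {
		profile["trusted"] = true
	}
	return profile
}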
@JoelSpeed also interested in API guidance if you have any.
I would go with:
port:
  ovsHWOffload: true
  trustedVF: true
@EmilienM Would it still meet all reasonable use cases for this API?
@EmilienM Would it still meet all reasonable use cases for this API?
I believe so.
/assign EmilienM