API Server frontend LB port should not change backend port
/kind feature
The port the LB exposes for the API Server should be configurable on its own, without affecting the backend port to which the LB directs traffic in the backend pool. For example, the LB for the API Server is exposed on port 443 while the backend pool serves traffic on 6443.
Anything else you would like to add: See https://github.com/kubernetes-sigs/cluster-api/issues/5517#issuecomment-954725794
prior work: https://github.com/kubernetes-sigs/cluster-api-provider-azure/pull/1207
we should also take a look at work done in the CAPA provider to use the KCP port: https://github.com/kubernetes-sigs/cluster-api-provider-aws/pull/3063
EDIT: I take that back; I don't think that PR takes the right approach. I've added a comment.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
This seems like a use case that others would run into (wanting to provide additional, optional metadata about an interaction). Wrapping context types to do this does indeed break the coupling between apps (which is based on specific intent/context type pairs), while adding the metadata into the context types themselves affects every type you use (including non-standard proprietary types) and does not seem practical.
Hence (as discussed off to the side), what I'd propose is that we extend ContextMetadata so that it can contain some additional fields (we wrapped the only field it currently carries, AppIdentifier, in ContextMetadata for exactly this sort of future-proofing). We could then extend other functions with a new optional argument to include that metadata, e.g.
```typescript
broadcast(context: Context): Promise<void>;
```
changes to:
```typescript
broadcast(context: Context, sourceMetadata?: Context): Promise<void>;
```
```typescript
raiseIntent(intent: string, context: Context, app?: AppIdentifier): Promise<IntentResolution>;
```
changes to:
```typescript
raiseIntent(intent: string, context: Context, app?: AppIdentifier, sourceMetadata?: Context): Promise<IntentResolution>;
```
and ContextMetadata changes from:
```typescript
interface ContextMetadata {
  /** Identifier for the app instance that sent the context and/or intent.
   * @experimental
   */
  readonly source: AppIdentifier;
}
```
to
```typescript
interface ContextMetadata {
  /** Identifier for the app instance that sent the context and/or intent.
   * @experimental
   */
  readonly source: AppIdentifier;
  /** Additional optional metadata that an app may choose to send with a
   * broadcast context or raised intent, for example to indicate where in the
   * source application the message originated (e.g. which chat room in a chat
   * app, or which order in an OMS).
   * @experimental
   */
  readonly sourceMetadata?: Context;
}
```
You could then pass in your ChatRoom context as the sourceMetadata.
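For illustration, here is a minimal TypeScript sketch of how the proposed optional `sourceMetadata` argument might look from the caller's side. The `MockAgent` class, the `appId` value, and the context payloads are invented for this example and are not part of the FDC3 standard; only the extended `broadcast` signature reflects the proposal above.

```typescript
// Minimal shapes standing in for the FDC3 types used here.
interface Context {
  type: string;
  id?: { [key: string]: string };
  name?: string;
}

interface ContextMetadata {
  readonly source: { appId: string };
  readonly sourceMetadata?: Context;
}

// A toy agent that records the metadata it would attach to a broadcast,
// mimicking the proposed broadcast(context, sourceMetadata?) signature.
class MockAgent {
  lastMetadata?: ContextMetadata;

  async broadcast(context: Context, sourceMetadata?: Context): Promise<void> {
    this.lastMetadata = {
      source: { appId: "chat-app" }, // hypothetical sending app
      sourceMetadata, // forwarded to listeners alongside the context
    };
  }
}

const agent = new MockAgent();

// Broadcast an instrument, tagging it with the chat room it came from.
const instrument: Context = { type: "fdc3.instrument", id: { ticker: "AAPL" } };
const chatRoom: Context = { type: "fdc3.chat.room", name: "Morning Desk" };

agent.broadcast(instrument, chatRoom);
console.log(agent.lastMetadata?.sourceMetadata?.name); // "Morning Desk"
```

Listeners would then receive the chat room details via `ContextMetadata.sourceMetadata` without the broadcast itself depending on that field being present.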
ContextMetadata/originating app metadata is currently an optional feature (there is an issue open to consider whether it should be required, see https://github.com/finos/FDC3/issues/735). But it looks like adding this wouldn't break anything and could make it more useful… The additional `sourceMetadata` would become available, but the interaction would not depend on it, keeping it optional.
However, it is an API change so it’d have to happen in an FDC3 2.1…
/remove-lifecycle stale
/help
@CecileRobertMichon: This request has been marked as needing help from a contributor.
Guidelines
Please ensure that the issue body includes answers to the following questions:
- Why are we solving this issue?
- What code changes, if any, are needed to address this issue, and which places in the code can the assignee treat as reference points?
- Does this issue have zero to low barrier of entry?
- How can the assignee reach out to you for help?
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.
In response to this:
/help
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed
You can:
- Mark this PR as fresh with `/remove-lifecycle stale`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
I would like to help with this issue.
/assign xiujuanx