
Make dynamically created resources configurable

Open lionelvillard opened this issue 4 years ago • 15 comments

Problem: Most of the resources created dynamically are not configurable. For instance, see how the PingSource receive adapter is created.

Persona: Which persona is this feature for? Operator

Exit Criteria: A measurable (binary) test that would indicate that the problem has been resolved. Receive adapter deployment or ksvc with extensions

Time Estimate (optional): How many developer-days do you think this may take to resolve? 2-3

Additional context (optional)

The current solution relies on both DeepDerivative and an external way to dynamically patch resources. That's less than ideal. Not all controllers use DeepDerivative (is this a bug?), and not all fields can be updated dynamically. Also, what is "the external way"? We certainly need something more self-contained.

Another solution is to do what the alt Kafka channel implementation is doing: using env vars (and soon a ConfigMap) to specify the configuration parameters. The major (IMO) drawbacks of this solution are that 1) it is not generic, and 2) there is always the risk of forgetting something. For instance, there is no way to add affinity rules without modifying the code.

A third solution is to specify the expected deployment or ksvc in a ConfigMap, similar to what we already do with the default channel template.

For instance, config-eventing.yaml could have the following configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-eventing
  namespace: knative-eventing
data:
  mt-pingsource-adapter: |
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: pingsource-mt-adapter 
    spec:
      template:
        spec:
          containers:
            - name: dispatcher
              resources:
                limits:
                  cpu: "1"
                  memory: 2Gi
                requests:
                  cpu: 125m
                  memory: 64Mi

The eventing webhook must first validate the specified fields that are under its control (e.g. apiVersion, kind, name, namespace, spec.template.spec.containers[0].name). If image is specified, the webhook should reject the ConfigMap.

The eventing controller would then augment the resource with the fields it "owns", like serviceAccountName, image, and so on.
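
As a rough illustration only (the serviceAccountName and image values below are placeholders, not the ones actually used by the controller), the reconciled Deployment could then end up looking something like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pingsource-mt-adapter
  namespace: knative-eventing
spec:
  template:
    spec:
      # Owned by the controller (placeholder value).
      serviceAccountName: pingsource-adapter-sa
      containers:
        - name: dispatcher
          # Owned by the controller; setting it in the ConfigMap would be
          # rejected by the webhook (placeholder value).
          image: example.com/knative/mtping:latest
          # Taken from the operator-provided template above.
          resources:
            limits:
              cpu: "1"
              memory: 2Gi
            requests:
              cpu: 125m
              memory: 64Mi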

The exact behavior is yet to be defined. I'm also planning to create a feature track document.

Comments? @cr22rc @n3wscott @eric-sap @travis-minke-sap @vaikas @grantr

lionelvillard avatar Aug 19 '20 19:08 lionelvillard

/lgtm

cr22rc avatar Aug 19 '20 19:08 cr22rc

From Slack:

@antoineco I wonder why not directly in the CR then. I'm curious whether the motivation was to separate the application's config from the infrastructure requirements. MongoDB exposes both in the same CR; I did that for some larger application controlled by an operator too. That doesn't mean I believe it's a good design; I haven't spent much time thinking about it.

@lionelvillard the reason is that it exposes the underlying implementation (in the Mongo case), and personally I don't want that.

lionelvillard avatar Aug 20 '20 13:08 lionelvillard

Yeah, that's an interesting idea that had never occurred to us. I understand how the WebHook would limit the Deployment config to only K8S runtime settings, but I have a few questions...

  • Would you expect environment variables to be included in the Deployment as well, or would we continue to specify them on the controller as env vars (or in our new ConfigMap) and applied by the controller?

  • What about K8S settings like Liveness/Readiness probes which require knowledge of the image implementation? Would the WebHook block them as well?

  • How would the user/operator know what was fair game to configure, without resorting to trial/error games with the WebHook? Would we include everything that was configurable in the default ConfigMap/Deployment or document it or what? Would that set of things change from resource to resource or be fixed across Knative?

I think we'd still have our ConfigMap for the Sarama Config, as it's not K8S Deployment related and is used at runtime to create Kafka Producers/ConsumerGroups. I'm only mentioning it to make sure we are on the same page as to the scope of your solution. Also, we don't currently have a WebHook for eventing-kafka, but that's just due to not having gotten around to it ;)

I definitely see how this is more flexible/configurable than the pass-through we have today. A few more moving parts, but they're well defined and easy enough to understand.

travis-minke-sap avatar Aug 21 '20 14:08 travis-minke-sap

Thanks @travis-minke-sap for looking at this proposal!

Would you expect environment variables to be included in the Deployment as well, or would we continue to specify them on the controller as env vars (or in our new ConfigMap) and applied by the controller?

It depends. Dynamically populated env vars (such as K_LOGGING_CONFIG) would still be populated by the controller. The deployment in the ConfigMap can include env vars that override the default values set by the controller, but only env vars already known to the controller should be allowed.
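
As a sketch (MY_FEATURE_FLAG is a hypothetical name standing in for an env var the controller actually knows about), the operator-provided template could then contain something like:

mt-pingsource-adapter: |
  apiVersion: apps/v1
  kind: Deployment
  spec:
    template:
      spec:
        containers:
          - name: dispatcher
            env:
              # Hypothetical env var; only names already known to the
              # controller would pass validation. Dynamically populated
              # vars such as K_LOGGING_CONFIG stay controller-managed.
              - name: MY_FEATURE_FLAG
                value: "enabled"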

What about K8S settings like Liveness/Readiness probes which require knowledge of the image implementation? Would the WebHook block them as well?

I think so. That's part of "The eventing webhook must first validate the specified fields that are under its control."

How would the user/operator know what was fair game to configure, without resorting to trial/error games with the WebHook? Would we include everything that was configurable in the default ConfigMap/Deployment or document it or what?

I would add documentation in the ConfigMap (similar to the _example section) and also in the Knative docs.
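
Something along these lines, as a sketch of the usual _example convention rather than an agreed-upon format:

data:
  _example: |
    # This block is not actually functional configuration, but
    # illustrates the available options in a copy/paste-friendly way.
    #
    # mt-pingsource-adapter overrides Kubernetes settings (resources,
    # affinity, ...) on the Deployment created for the multi-tenant
    # PingSource adapter. Fields owned by the controller (e.g. image)
    # are rejected by the webhook.
    mt-pingsource-adapter: |
      apiVersion: apps/v1
      kind: Deployment
      spec:
        template:
          spec:
            containers:
              - name: dispatcher
                resources:
                  requests:
                    cpu: 125m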

Would that set of things change from resource to resource or be fixed across Knative?

Initially this allows for configuration at the resource level.

I think we'd still have our ConfigMap for the Sarama Config

Absolutely!

lionelvillard avatar Aug 21 '20 16:08 lionelvillard

This should probably have a feature track since it applies generally

grantr avatar Aug 24 '20 20:08 grantr

#4292 describes the same problem with a different solution (annotations on the source object). Still need a feature track for this. @matzew can you collaborate with @lionelvillard on this issue? Thanks!

grantr avatar Oct 26 '20 17:10 grantr

I hope this issue covers passing annotations from the spec template down to the pod level, for all types of annotations, custom or well-known.

iamejboy avatar Dec 25 '20 10:12 iamejboy

I see the point about leaking information (see: https://github.com/knative/eventing/issues/4292#issuecomment-707234966)

@lionelvillard what should we do with this?

matzew avatar Jan 05 '21 16:01 matzew

I'm fine with leaking implementation details in a ConfigMap.

I still want to explore the idea around server-side apply managed fields. More on this hopefully soon.
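
For context, server-side apply records per-manager field ownership in metadata.managedFields. A trimmed, illustrative entry claiming the image field for a controller might look like this (the manager name is made up):

metadata:
  managedFields:
    - manager: eventing-controller   # illustrative manager name
      operation: Apply
      apiVersion: apps/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          f:template:
            f:spec:
              f:containers:
                k:{"name":"dispatcher"}:
                  f:image: {}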

lionelvillard avatar Jan 05 '21 16:01 lionelvillard

I wonder, given we're slowly moving to mt sources, where the user is the one that owns the data plane deployments (so they have the responsibility to keep them up and running and configure them properly), does it really make sense to invest time in this? Isn't there a risk of ending up reimplementing the pod spec, or rather "proxying" the pod spec, making the source reconcilers a whole lot more complex?

slinkydeveloper avatar Jan 12 '21 16:01 slinkydeveloper

where the user is the one that owns the data plane deployments

Which user? The data plane deployment is still managed by controllers, even for MT sources.

lionelvillard avatar Jan 12 '21 17:01 lionelvillard

@lionelvillard really? Then my bad, ignore the previous comment :smile:

slinkydeveloper avatar Jan 12 '21 17:01 slinkydeveloper

Design doc draft: https://docs.google.com/document/d/1VvlgoWP1_enKIaH9LBbGkLvwWUluLjPdtPezULCmzfE/edit?resourcekey=0-MFetW356DuH58TntTv3CjQ#

lionelvillard avatar Jan 12 '21 21:01 lionelvillard

This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with /reopen. Mark the issue as fresh by adding the comment /remove-lifecycle stale.

github-actions[bot] avatar Apr 13 '21 01:04 github-actions[bot]

/remove-lifecycle stale

lionelvillard avatar Jan 06 '22 15:01 lionelvillard