
Should .apply target exist when context / cluster is not defined?

Open mariusgrigoriu opened this issue 6 years ago • 8 comments

Now that we can pass in arbitrary args thanks to #224 and #225, it seems a bit odd that you need to specify a `cluster` or `context` argument in the BUILD file in order to have access to the apply/delete targets. In my use case, I'm calling the rule from an external script, and the context, cluster, or user may be different each time. Passing in the cluster with `--define=cluster=` seems like overkill, and maybe undesirable, since it invalidates the runfiles cache whenever the cluster name changes. It seems much easier to just pass those args like this: `bazel run :k8s.apply -- --cluster= --user=`

Thoughts?

mariusgrigoriu avatar Mar 01 '19 02:03 mariusgrigoriu

@chrislovecnm @nlopezgi any thoughts?

xingao267 avatar Mar 01 '19 21:03 xingao267

I think it does make sense to make cluster/context optional, particularly to avoid invalidating the cache. However, we would instead have to add some checks to the templates for apply/delete which verify that, if the cluster/context was not set via the rule, they are passed as args, and otherwise error out with a decent message. @chrislovecnm wdyt? @mariusgrigoriu would you be willing to contribute a PR that makes these attrs optional and adds the required validations to the templates (plus tests and doc changes)? I'd be happy to help / guide you through this process!

nlopezgi avatar Mar 04 '19 20:03 nlopezgi
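The validation described above could look roughly like the sketch below. This is only an illustration: the function name, the positional parameters standing in for rule-substituted values, and the `--cluster=`/`--context=` flag spellings are assumptions, not the actual rules_k8s template internals.

```shell
#!/bin/bash
# Hypothetical check for the apply/delete template: the first two
# arguments stand in for values substituted by the rule attributes;
# the rest are the args forwarded from `bazel run ... -- ...`.
validate_cluster_args() {
  local cluster="$1"; shift
  local context="$1"; shift
  if [[ -n "$cluster" || -n "$context" ]]; then
    return 0  # set via the rule attributes
  fi
  for arg in "$@"; do
    case "$arg" in
      --cluster=*|--context=*) return 0 ;;  # supplied at run time
    esac
  done
  echo "error: no cluster/context set on the rule and none passed as --cluster=/--context= args" >&2
  return 1
}
```

The template would call something like this with the rule-substituted values plus the forwarded `"$@"` before invoking kubectl, so a missing cluster/context fails fast with a clear message instead of a confusing kubectl error.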

Glad to do a PR.

In the case where kubectl and the config file are supplied externally, would any validation make sense? Would it also be valid to have your context set up in advance so that no args would be required?

mariusgrigoriu avatar Mar 05 '19 07:03 mariusgrigoriu

Hmm, I guess the validation would not be necessary if the context setup is done in advance. Do you know if there is any way to check that it is properly set up when neither the args nor the attrs are provided, and to error out nicely if not (as opposed to just erroring out at runtime when it turns out the context was never configured)?

nlopezgi avatar Mar 06 '19 20:03 nlopezgi

I could see how validation might be useful in a purely hermetic mode, where kubectl and the config do not come from outside the sandbox, so you might want to make sure the config is good before building or fetching kubectl.

When using system kubectl, we expect that clusters, contexts, and credentials are all pre-configured. I'm not sure that validation provides much value in this case because it's quick to execute kubectl and get an error message straight from the source.

mariusgrigoriu avatar Mar 07 '19 05:03 mariusgrigoriu

A common build design within Google is to separate the binary plus its job description (command-line flags, storage mappings, service setups) from the configuration that changes from cluster to cluster (e.g. IPv6 vs. IPv4, cluster name, replica count). Not only is the binary packaged up, but it is often good to couple that binary with the configuration it is known to work with (the command-line flags from the JSON/YAML). This package can then be invoked for different target clusters with a known set of values that are used for last-minute stamping, which allows for quicker scaling/turnups than recompiling from source. Routing some of this more dynamic information seems to be supported by the resolver, but the cluster information should be runtime rather than compile time. Since changing the cluster/context invalidates the cache, it should be kept out of the build process. This would include dynamic KUBECONFIG files that may change. I can see this validation being correct if the config was built by some rule, which seems to be supported.

My use case is that I develop a GitHub project across different environments, each with a different way of spinning up a local Kubernetes stack. Currently I have to update the cluster and context strings, which get squashed on pulls. Another option is a boolean flag declaring that the invoker has set the KUBECONFIG environment variable, so that validating the cluster/context is no longer needed. Otherwise the build rules would expect context/cluster to be set.

borg286 avatar Jun 19 '19 10:06 borg286

Could the validation simply be that, before invoking kubectl, we check whether KUBECONFIG is a non-empty environment variable, or whether `--config` or `--context` flags are set in `$@`? If we go in the other direction, I'd like a way to define a set of clusters to operate on. Currently it seems that each cluster would get its own set of build targets for each k8s_object and k8s_objects you want to run, which is way too verbose.

borg286 avatar Jun 19 '19 10:06 borg286
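The check proposed in that comment could be sketched as a small pre-flight helper. The function name is hypothetical, and it assumes the template forwards its extra args as `"$@"`; the `--config`/`--context` flag names are taken verbatim from the proposal above.

```shell
#!/bin/bash
# Hypothetical pre-flight check: pass if KUBECONFIG is a non-empty
# environment variable, or if a --config/--context flag appears among
# the forwarded args; otherwise fail with a clear message.
kubeconfig_available() {
  if [[ -n "${KUBECONFIG:-}" ]]; then
    return 0  # caller exported a kubeconfig
  fi
  for arg in "$@"; do
    case "$arg" in
      --config=*|--context=*|--config|--context) return 0 ;;
    esac
  done
  echo "error: KUBECONFIG is unset and no --config/--context flag was given" >&2
  return 1
}
```

Running something like this at the top of the apply/delete template would keep the cluster/context attrs optional while still failing before kubectl is ever invoked.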

That seems reasonable. I'm unlikely to be able to get to a PR for the next week or so. If someone has a more urgent need feel free to get one in.

mariusgrigoriu avatar Jun 19 '19 18:06 mariusgrigoriu