qbec
Expose datasource variables __DS_PATH__ and __DS_NAME__ to exec command configuration
Example of a kinda universal helm-template data source:
qbec.yaml:

vars:
  computed:
    - name: helm-template
      code: |
        {
          local p = (import './params.libsonnet').components[
            std.extVar('qbec.io/__DS_NAME__')
          ],
          command: 'helm',
          args: ['template', ('https://' + std.extVar('qbec.io/__DS_PATH__')), '-n', p.namespace, '-f-'],
          stdin: std.manifestJson(p.values)
        }
dataSources:
  - exec://victoriametrics/helm-template
components/victoriametrics.jsonnet:

// assuming params.libsonnet sits one level up, next to qbec.yaml as above
local params = (import '../params.libsonnet').components.victoriametrics;
local vm = importstr 'data://victoriametrics/github.com/VictoriaMetrics/helm-charts/raw/347d4558d9c25cd341718bf5a2ee167da042c080/packages/victoria-metrics-cluster-0.9.6.tgz';

if std.objectHas(params, 'enabled') && params.enabled then std.native('parseYaml')(vm)
environments/test.jsonnet:

...
victoriametrics+: {
  enabled: true,
  namespace: 'victoriametrics',
  values: {
    vmstorage: {
      retentionPeriod: 4, // 4 months
      persistentVolume: {
        size: '32Gi',
      },
    },
  },
},
Unfortunately computed variables are evaluated at startup and added to the bag of variables given to the jsonnet VM. It's not possible with the current design to have a variable that is created at the point of data source import.
that said, what you are proposing is a neat idea - let me think about a mechanism to implement it.
the way I was thinking this would work is that we would implement a first-class helm3 data source rather than shoe-horning it into the exec data source.
also pretty sure that you cannot import a binary file like a TGZ file using importstr - ultimately this is a JSON string and needs to be valid UTF-8
This is just a path, which is passed as a parameter to helm. Helm renders the template and outputs it as YAML.
Right now I use a similar approach, but with hardcoded values instead of std.extVar('qbec.io/__DS_NAME__'), and it works fine.
+1 for this, nice example @Andor 👍
To me it would be a very convenient way to pass parameters via the data URL, e.g. something like this:
local p = import '../../params.libsonnet';
local params = p.components.victoriametrics;
local chart = 'https://github.com/VictoriaMetrics/helm-charts/raw/347d4558d9c25cd341718bf5a2ee167da042c080/packages/victoria-metrics-cluster-0.9.6.tgz';

local renderedChart = importstr 'data://helm/template?name=victoriametrics&chart=' + std.base64(chart) + '&values=' + std.base64(params.values);

std.native('parseYaml')(renderedChart)
But it depends on https://github.com/google/jsonnet/issues/196
if imports could be computed, all this data source business that qbec has wouldn't exist at all :) These importers were written specifically because computed imports don't work.
The data source design gets around the inability to have a computed import by adding a level of indirection: import 'fixed-path' uses a name that refers to a config variable that can be set up dynamically at startup. And the "computed vars" feature in qbec.yaml is syntax sugar for easily setting these things up. You could just as well pass in pre-computed code vars on the command line.
But that's the key distinction - at startup. Once the initial computation is done, everything is still deterministic, as jsonnet requires it to be.
What we are essentially trying to do here is get around the at-startup limitation, which simply cannot be done without forking how jsonnet works.
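As an illustration, that indirection boils down to something like this rough sketch (the data source name, var name, and command below are all made up):

// qbec.yaml (sketch):
//
//   dataSources:
//     - exec://mydata?configVar=mydata-config
//   vars:
//     computed:
//       - name: mydata-config
//         code: |
//           { command: 'some-cmd', args: [], stdin: '' }
//
// component (sketch): the import path is a fixed string, so jsonnet itself
// stays deterministic; everything dynamic was resolved at startup.
local output = importstr 'data://mydata';
std.native('parseYaml')(output)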
I have some ideas on how to implement this but I need a weekend to try them out :) Well, not exactly this proposal but something almost as convenient.
thinking out loud, very rough ideas...
Currently we have one exec data source that is defined as exec://<name>?configVar=<var-name>, where the contents of configVar tell the system the command to run with precomputed args, stdin and env. The command also lazily gets two environment variables for the data source name and path when the data source is actually imported.
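For reference, the config var for such an exec data source evaluates to an object along these lines (args/stdin/env fields per the description above; the command and values themselves are hypothetical):

{
  command: 'render.sh',        // hypothetical command; when the import fires it
                               // sees __DS_NAME__ and __DS_PATH__ in its
                               // environment (names per this issue's title)
  args: ['--format', 'yaml'],  // precomputed at startup
  stdin: '',
  env: { DEBUG: 'true' },      // also precomputed at startup
}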
We could introduce a helm3 data source that is set up as helm3://helm?configVar=<some-var> (where the configuration object configures some basics about how to run helm - e.g. version to use, whether to use a local tgz or a helm registry etc. but importantly, not chart paths or values).
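The config object for that might look something like the following sketch (every field name here is hypothetical, not an actual qbec schema):

{
  version: '3.x',      // which helm to use
  source: 'registry',  // e.g. a local tgz vs. a helm registry
  // deliberately no chart path and no values in here
}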
Then the import could be written as
import 'data://helm/path/to/chart?values-from=<var-name>'
where <var-name> is the name of another variable that should be used to expand the values for the chart. This is what makes the system workable, because the import path is static and the values are pre-computed at startup, perhaps using a computed var.
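Putting the pieces together, a rough sketch of how this might look (all names and paths invented for illustration):

// qbec.yaml side (sketch):
//   dataSources:
//     - helm3://helm?configVar=helm-config
//   vars:
//     computed:
//       - name: vm-values
//         code: "(import './params.libsonnet').components.victoriametrics.values"
//
// component side: the path is static, the values come from the precomputed var
import 'data://helm/charts/victoria-metrics-cluster-0.9.6.tgz?values-from=vm-values'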
importstr 'data://....' but yeah the approach is hacky but should work OK.
well, actually, since it is the helm3 data source implementation that runs helm under the covers, there is no reason why it cannot convert helm's YAML output into JSON and return that, thereby making import work by returning an array of valid JSON objects.
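So a component could then consume the chart directly as jsonnet values, with no parseYaml step (same hypothetical names as in the sketch above):

// the data source returns JSON, so a plain import yields the value directly:
local objects = import 'data://helm/charts/victoria-metrics-cluster-0.9.6.tgz?values-from=vm-values';
objects  // an array of manifest objects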
Sounds reasonable, I agree.
This seems like a good time to evaluate if we want to venture into the plugin space. Define a standard plugin interface and helm could be one implementation
You mean out-of-process plugins, right? I'd rather implement a couple like helm3 and possibly vault in-tree and then figure out if a plugin model is needed, can be supported and so on... It's a big investment of time that we are currently short of :(
Helm3 data source with examples and no docs or tests :)
https://github.com/splunk/qbec/pull/265