some deployment spec options are not applied from ConfigMap.data.deployment
Is this a BUG REPORT or FEATURE REQUEST?: BUG
What happened: I configured my deployments by setting the following in the ConfigMap:

    data:
      deployment: '{"spec": {"template": {"spec": {"enableServiceLinks": false}}}}'
But it was not reflected in the Function's Deployment, where enableServiceLinks defaults to true.
What you expected to happen: enableServiceLinks: false is set on my function deployment.
How to reproduce it (as minimally and precisely as possible): see 'What happened'
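For completeness, a minimal repro sketch (assuming the ConfigMap kubeless reads is kubeless-config in the kubeless namespace; adjust the name and namespace to your install):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kubeless-config   # assumption: the ConfigMap the function controller watches
      namespace: kubeless
    data:
      deployment: '{"spec": {"template": {"spec": {"enableServiceLinks": false}}}}'

Then deploy any function and check spec.template.spec.enableServiceLinks on the generated Deployment.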
Anything else we need to know?: other settings like dnsPolicy were applied.
Environment:
- Kubernetes version (use kubectl version):

    ➜ kubectl version
    Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"archive", BuildDate:"2020-11-15T17:26:49Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.9", GitCommit:"4fb7ed12476d57b8437ada90b4f93b17ffaeed99", GitTreeState:"clean", BuildDate:"2020-07-15T16:10:45Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

- Kubeless version (use kubeless version): manifest of 1.0.7
- Cloud provider or physical cluster: cloud
The root cause is that the go.mod uses a very old k8s.io/api version: k8s.io/api v0.0.0-20180308224125-73d903622b73, which is two and a half years old, while the latest release is currently v0.19.4 according to https://pkg.go.dev/k8s.io/api, released roughly two weeks ago. Notably, enableServiceLinks was only added to PodSpec in Kubernetes 1.13, after that snapshot was taken, so unmarshalling into the old structs silently drops the field, while long-standing fields like dnsPolicy still round-trip.
Also, I would propose not to silently drop settings that kubeless does not understand, but to log at least a warning. I could not find anything about this issue in the function controller logs (info level) when using the default deployment manifests.
> The root cause is that the go.mod uses a very old k8s.io/api version: k8s.io/api v0.0.0-20180308224125-73d903622b73, which is two and a half years old, while the latest release is currently v0.19.4 according to https://pkg.go.dev/k8s.io/api, released roughly two weeks ago.
I agree, that's the cause of the issue. Maybe you can send a PR updating that version? I am not sure if something else would break, since there have been several breaking changes in the API spec.
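For reference, the bump under discussion would look roughly like this in go.mod (a sketch; the exact set of k8s.io modules kubeless needs may differ):

    require (
        k8s.io/api v0.19.4
        k8s.io/apimachinery v0.19.4
        k8s.io/client-go v0.19.4
    )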
> Also, I would propose not to silently drop settings that kubeless does not understand, but to log at least a warning. I could not find anything about this issue in the function controller logs (info level) when using the default deployment manifests.
The problem is that the unmarshalling of the content doesn't fail when there are extra properties. This is the code:
    err := yaml.Unmarshal([]byte(deploymentConfigData), &deployment)
    if err != nil {
        logrus.Errorf("Error parsing Deployment data in ConfigMap kubeless-function-deployment-config: %v", err)
        return err
    }
So AFAIK there is not much we can do there unless there is some method I'm not aware of.
Sure, I can address this via a PR, although I think the breaking changes should be explored and explained in the release docs, so I will try to come up with a summary in that regard too.
The golang JSON decoder can do this: https://golang.org/pkg/encoding/json/#Decoder.DisallowUnknownFields, so it should not be impossible. According to the yaml.Unmarshal code it is definitely possible: https://github.com/ghodss/yaml/blob/master/yaml.go#L51
    // Unmarshal converts YAML to JSON then uses JSON to unmarshal into an object,
    // optionally configuring the behavior of the JSON unmarshal.
    func Unmarshal(y []byte, o interface{}, opts ...JSONOpt) error {
        return unmarshal(yaml.Unmarshal, y, o, opts)
    }

    // UnmarshalStrict is like Unmarshal except that any mapping keys that are
    // duplicates will result in an error.
    // To also be strict about unknown fields, add the DisallowUnknownFields option.
    func UnmarshalStrict(y []byte, o interface{}, opts ...JSONOpt) error {
        return unmarshal(yaml.UnmarshalStrict, y, o, opts)
    }
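Applied to the controller snippet above, the proposal could look roughly like this (a sketch, not kubeless's actual code: the helper name and the Deployment type are illustrative, and it assumes a ghodss/yaml version that ships the DisallowUnknownFields option, i.e. Go 1.10+). It keeps the current lenient behaviour and only adds a warning:

    package controller

    import (
        "github.com/ghodss/yaml"
        "github.com/sirupsen/logrus"
        "k8s.io/api/apps/v1beta1"
    )

    // parseDeploymentConfig (hypothetical helper) runs a strict pass first so
    // that fields the vendored API does not understand are at least logged,
    // then falls back to the existing lenient unmarshalling.
    func parseDeploymentConfig(deploymentConfigData string, deployment *v1beta1.Deployment) error {
        err := yaml.UnmarshalStrict([]byte(deploymentConfigData), deployment, yaml.DisallowUnknownFields)
        if err == nil {
            return nil
        }
        logrus.Warnf("Deployment data in ConfigMap contains settings kubeless does not understand: %v", err)
        return yaml.Unmarshal([]byte(deploymentConfigData), deployment)
    }

With that in place, a typo or a field missing from the vendored API would at least show up in the controller logs, while existing configurations keep working unchanged.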
The k8s API update is quite a complex thing. All the triggers need updates too.
@andresmgot I succeeded with the k8s API update, but because the triggers and kubeless are in a circular dependency, I am not sure how I can submit PRs for this. Any advice? What is the reason behind splitting the triggers code from the main code base?
Yes, the problem is that this repo requires the triggers to generate the command line utils, and the triggers need this repo for core functionality. The reasoning behind this is to be able to develop triggers separately (so, for example, a new trigger can be released without releasing a new version of kubeless). In theory a hard-coded old version of the triggers should work (or some replacement in go.mod), but I would need to double check that. I will take a look when I have the chance.
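For example, something along these lines in this repo's go.mod might break the cycle for the duration of the upgrade (module paths and versions are illustrative, not verified):

    replace (
        github.com/kubeless/http-trigger => github.com/kubeless/http-trigger v1.0.0
        github.com/kubeless/cronjob-trigger => github.com/kubeless/cronjob-trigger v1.0.0
    )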
If I understand the codebase correctly, two binaries are built from the kubeless repository: the function-controller and the CLI. Would it make sense to extract the CLI from the kubeless repo into, let's say, kubeless-cli to break the circular dependency?
Yes, as far as I can remember that's the only place where the triggers' code is used, so extracting the CLI would help with that (but I haven't checked the code, so I'm not 100% sure).
Thanks, I will give it a shot then.