`KubeConfigMerger` object poorly structured when `KUBECONFIG` is a list of files.
I'm new around here and was exploring the configuration flow and data structures. I noticed that when I have the following set of config files:
`KUBECONFIG=~/.kube/config-1:~/.kube/config-2:~/.kube/config-3`
a `KubeConfigMerger` is constructed as follows (I went for a JSON representation of the object; a sketch of how I produced this dump follows the JSON below):
```json
{
"KubeConfigMerger.config": {
"ConfigNode": {
"name": "path-1",
"path": "path-1",
"value": {
"clusters": [
{
"ConfigNode": {
"name": "path-1/{\"cluster\": {\"server\": \"server-1\"}, \"name\": \"cluster-name-1\"}\n",
"path": "path-1",
"value": {
"cluster": {
"server": "server-1"
},
"name": "cluster-name-1"
}
}
},
{
"ConfigNode": {
"name": "path-2/{\"cluster\": {\"server\": \"server-2\"}, \"name\": \"cluster-name-2\"}\n",
"path": "path-2",
"value": {
"cluster": {
"server": "server-2"
},
"name": "cluster-name-2"
}
}
},
{
"ConfigNode": {
"name": "path-3/{\"cluster\": {\"server\": \"server-3\"}, \"name\": \"cluster-name-3\"}\n",
"path": "path-3",
"value": {
"cluster": {
"server": "server-3"
},
"name": "cluster-name-3"
}
}
}
],
"contexts": [
{
"ConfigNode": {
"name": "path-1/{\"context\": {\"cluster\": \"server-1\", \"namespace\": \"namespace-1\", \"user\": \"user-1\"}, \"name\": \"context-name-1\"}\n",
"path": "path-1",
"value": {
"context": {
"cluster": "server-1",
"namespace": "server-1",
"user": "user-1"
},
"name": "context-name-1"
}
}
},
{
"ConfigNode": {
"name": "path-2/{\"context\": {\"cluster\": \"server-2\", \"namespace\": \"namespace-2\", \"user\": \"user-2\"}, \"name\": \"context-name-2\"}\n",
"path": "path-2",
"value": {
"context": {
"cluster": "server-2",
"namespace": "server-2",
"user": "user-2"
},
"name": "context-name-2"
}
}
},
{
"ConfigNode": {
"name": "path-3/{\"context\": {\"cluster\": \"server-3\", \"namespace\": \"namespace-3\", \"user\": \"user-3\"}, \"name\": \"context-name-3\"}\n",
"path": "path-3",
"value": {
"context": {
"cluster": "server-3",
"namespace": "server-3",
"user": "user-3"
},
"name": "context-name-3"
}
}
}
],
"users": [
{
"ConfigNode": {
"name": "path-1/{\"user\": {\"token\": \"token-1\"}, \"name\": \"user-name-1\"}\n",
"path": "path-1",
"value": {
"user": {
"token": "token-1"
},
"name": "user-name-1"
}
}
},
{
"ConfigNode": {
"name": "path-2/{\"user\": {\"token\": \"token-2\"}, \"name\": \"user-name-2\"}\n",
"path": "path-2",
"value": {
"user": {
"token": "token-2"
},
"name": "user-name-2"
}
}
},
{
"ConfigNode": {
"name": "path-3/{\"user\": {\"token\": \"token-3\"}, \"name\": \"user-name-3\"}\n",
"path": "path-3",
"value": {
"user": {
"token": "token-3"
},
"name": "user-name-3"
}
}
}
],
"current_context": "current-context"
}
}
}
}
```
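For completeness, this is roughly how I produced the dump above. It's a minimal sketch: I'm assuming the `KubeConfigMerger` and `ConfigNode` classes from `kubernetes/config/kube_config.py`, and the `node_to_dict` serializer is a helper I wrote just for this issue, not part of the client.

```python
import json
import os

# Assumptions: KubeConfigMerger and ConfigNode live in
# kubernetes/config/kube_config.py, KubeConfigMerger accepts the
# colon-separated KUBECONFIG string, and ConfigNode exposes
# name/path/value attributes. node_to_dict is my own helper.
from kubernetes.config.kube_config import ConfigNode, KubeConfigMerger


def node_to_dict(node):
    """Recursively turn ConfigNode trees (plus plain dicts/lists) into JSON-able data."""
    if isinstance(node, ConfigNode):
        return {"ConfigNode": {"name": node.name,
                               "path": node.path,
                               "value": node_to_dict(node.value)}}
    if isinstance(node, dict):
        return {key: node_to_dict(value) for key, value in node.items()}
    if isinstance(node, list):
        return [node_to_dict(item) for item in node]
    return node


# KUBECONFIG is the colon-separated list shown above.
merger = KubeConfigMerger(os.environ["KUBECONFIG"])
print(json.dumps({"KubeConfigMerger.config": node_to_dict(merger.config)}, indent=2))
```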
So now the question is: is the top-level `ConfigNode`, whose `path` and `name` attributes both take the path of the first file, structured as intended?
I can see that `KubeConfigMerger.config` is consumed by `KubeConfigLoader` as a `ConfigNode`, so I'm guessing those two attributes are just a consequence of `ConfigNode` requiring them and don't serve much purpose at the top level?
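A quick experiment that made me think the top-level `name`/`path` mostly just feed error messages. This is based on my reading of `ConfigNode.__getitem__` in `kube_config.py`, so treat the exception behaviour as an assumption, and `does-not-exist` is just a made-up key:

```python
import os

# Assumptions: ConfigException lives in kubernetes.config.config_exception and
# ConfigNode.__getitem__ raises it when a key is missing, naming the node.
from kubernetes.config.config_exception import ConfigException
from kubernetes.config.kube_config import KubeConfigMerger

merger = KubeConfigMerger(os.environ["KUBECONFIG"])
try:
    merger.config["does-not-exist"]  # hypothetical key, only to trigger the error path
except ConfigException as exc:
    # The message names the node; for the top-level node that name is simply
    # the path of the first file in KUBECONFIG, which is what prompted my question.
    print(exc)
```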
/assign @ralberrto
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with Issue Triage
> Please send feedback to sig-contributor-experience at kubernetes/community.
> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.