vcluster overrides certain environment variables
What happened?
When a user-defined Pod env var matches the "$SERVICENAME_PORT" pattern, where $SERVICENAME is the name of a Service that exists in the same namespace, the value set by the user is overridden with $PROTOCOL://$HOST:$PORT.
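For context, these are the Docker-link-style service discovery variables that Kubernetes normally injects for a Service in the same namespace, and which vcluster adds to the synced pod via .env. For the redis Service from the repro below (port 80), the set looks roughly like this (the cluster IP is illustrative):

```
REDIS_SERVICE_HOST=10.96.115.242
REDIS_SERVICE_PORT=80
REDIS_PORT=tcp://10.96.115.242:80
REDIS_PORT_80_TCP=tcp://10.96.115.242:80
REDIS_PORT_80_TCP_PROTO=tcp
REDIS_PORT_80_TCP_PORT=80
REDIS_PORT_80_TCP_ADDR=10.96.115.242
```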
What did you expect to happen?
The env var explicitly set by the user should take precedence and should not be overridden by vcluster.
How can we reproduce it (as minimally and precisely as possible)?
Create these two manifests in the vcluster one by one:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: test
spec:
  selector:
    app: testy-woo
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
```
```yaml
apiVersion: v1
data:
  LOG_LEVEL: INFO
  REDIS_HOST: redis
  REDIS_PORT: "6379"
kind: ConfigMap
metadata:
  labels:
    pipekit.io/service: testy-woo
  name: testy-woo
  namespace: test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testy-woo
  namespace: test
  labels:
    app: testy-woo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testy-woo
  template:
    metadata:
      labels:
        app: testy-woo
    spec:
      containers:
        - envFrom:
            - configMapRef:
                name: testy-woo
          name: testy-woo
          image: nginx:stable
```
Wait for the nginx pod to start and run:

```shell
kubectl exec -n test deployments/testy-woo -- env | grep REDIS_PORT=
```
You will see something like REDIS_PORT=tcp://10.96.115.242:80, but instead it should be REDIS_PORT=6379.
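To see every injected variable that collides with the REDIS_ prefix, not just REDIS_PORT, you can widen the grep:

```shell
kubectl exec -n test deployments/testy-woo -- env | grep '^REDIS_'
```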
vcluster version
This is rather difficult to fix, as it seems that .env takes precedence over .envFrom, and since we are adding the service discovery environment variables via .env, they automatically override the ones from .envFrom. To solve the problem we would need to either find out up front which environment variables are present in the ConfigMap / Secret (which could change, so it wouldn't be 100% reliable), or omit the service discovery environment variables completely (which could potentially break workloads that rely on them; how important are those, though?). So I'm not sure what the way forward would be here for now.
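As a minimal illustration of that precedence (nothing vcluster-specific; names are taken from the repro above): when a container spec ends up with both an envFrom source and an explicit env entry defining REDIS_PORT, the kubelet resolves the explicit env entry, so an injected service discovery variable wins over the ConfigMap value.

```yaml
# Hypothetical shape of the synced container spec; the injected env entry is illustrative.
containers:
  - name: testy-woo
    image: nginx:stable
    envFrom:
      - configMapRef:
          name: testy-woo          # provides REDIS_PORT=6379
    env:
      # Service discovery variable appended via .env; as an explicit entry it
      # takes precedence over the REDIS_PORT coming from envFrom above.
      - name: REDIS_PORT
        value: tcp://10.96.115.242:80
```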