ProxyInjector
spec.template.spec.containers[0].image: Required value
After upgrading to 0.0.23 I get the following error when I try to inject a proxy container:
time="2019-12-31T18:03:23Z" level=error msg="Deployment.apps \"http-svc\" is invalid: spec.template.spec.containers[0].image: Required value"
time="2019-12-31T18:03:23Z" level=info msg="Updated service... http-svc"
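(For anyone hitting this: since the patch is rejected as invalid, the Deployment itself stays unchanged. The container names and images it currently carries can be listed with plain kubectl, nothing ProxyInjector-specific:)
kubectl get deployment http-svc -o jsonpath='{range .spec.template.spec.containers[*]}{.name}{" -> "}{.image}{"\n"}{end}'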
This is the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-svc
  annotations:
    authproxy.stakater.com/enabled: "true"
    authproxy.stakater.com/redirection-url: http://hello.xxxxx.com
    authproxy.stakater.com/resources: uri=/*|roles=g-xxxx-Admin|require-any-role=true
    authproxy.stakater.com/source-service-name: "http-svc"
    authproxy.stakater.com/target-port: "3000"
    authproxy.stakater.com/upstream-url: http://127.0.0.1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: http-svc
  template:
    metadata:
      labels:
        app: http-svc
    spec:
      containers:
      - name: http-svc
        image: gcr.io/kubernetes-e2e-test-images/echoserver:2.1
        ports:
        - containerPort: 8080
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
---
apiVersion: v1
kind: Service
metadata:
  name: http-svc
  labels:
    app: http-svc
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: http-svc
Nevermind, I had a typo in the config.
Hey,
Having the same issue.
What was the typo?
@huegelc can you help here?
@Stolr
I had a typo in my deployment yaml. This is a working example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-svc-v2
  annotations:
    authproxy.stakater.com/enabled: "true"
    authproxy.stakater.com/redirection-url: https://hello.example.com
    authproxy.stakater.com/resources: uri=/*|roles=g-xxxx-Admin|require-any-role=true
    authproxy.stakater.com/source-service-name: http-svc
    authproxy.stakater.com/target-port: "3000"
    authproxy.stakater.com/upstream-url: http://127.0.0.1:8080
spec:
  replicas: 1
  selector:
    matchLabels:
      app: http-svc-v2
  template:
    metadata:
      labels:
        app: http-svc-v2
    spec:
      containers:
      - name: http-svc-v2
        image: "gcr.io/kubernetes-e2e-test-images/echoserver:2.1"
        ports:
        - containerPort: 8080
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
---
apiVersion: v1
kind: Service
metadata:
  name: http-svc-v2
  labels:
    app: http-svc-v2
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: http-svc-v2
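(For anyone comparing the two manifests: besides the renamed resources, the annotation that changed is authproxy.stakater.com/upstream-url, which now includes the port, http://127.0.0.1:8080 instead of http://127.0.0.1, and the redirection-url host is different.)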
This is still an issue for me
@echel0n @Stolr can you confirm that this is only for v0.0.23? Also please share:
- k8s version
- yaml manifests you are using
@usamaahmadkhan: I only tried with v0.0.23, not with an earlier version (earlier versions were not working either: https://github.com/stakater/ProxyInjector/issues/36).
K8s: 1.16.4
ConfigMap:
proxyconfig:
  gatekeeper-image: "keycloak/keycloak-gatekeeper:6.0.1"
  client-id: "k8s"
  client-secret: ${CLIENTSECRET}
  enable-default-deny: true
  secure-cookie: false
  verbose: true
  enable-logging: true
  listen: 0.0.0.0:80
  cors-origins:
  - '*'
  cors-methods:
  - GET
  - POST
  resources:
  - uri: '/*'
    scopes:
    - 'good-service'
(Converted to a Secret.)
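(For anyone reproducing this, the secret can be created from a file with plain kubectl; the file name config.yml and namespace security here are assumptions that match the manifests below:)
kubectl create secret generic proxyinjector --from-file=config.yml=config.yml -n security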
ProxyInjector manifests:
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: proxyinjector
    group: com.stakater.platform
    provider: stakater
    version: v0.0.23
    chart: "proxyinjector-v0.0.23"
    release: "release-name"
    heritage: "Tiller"
  name: proxyinjector
  namespace: security
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app: proxyinjector
    group: com.stakater.platform
    provider: stakater
    version: v0.0.23
    chart: "proxyinjector-v0.0.23"
    release: "release-name"
    heritage: "Tiller"
  name: proxyinjector-role
  namespace: security
rules:
- apiGroups:
  - ""
  - "extensions"
  - "apps"
  resources:
  - deployments
  - daemonsets
  - statefulsets
  - services
  - configmaps
  verbs:
  - list
  - get
  - watch
  - update
  - create
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app: proxyinjector
    group: com.stakater.platform
    provider: stakater
    version: v0.0.23
    chart: "proxyinjector-v0.0.23"
    release: "release-name"
    heritage: "Tiller"
  name: proxyinjector-role-binding
  namespace: security
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: proxyinjector-role
subjects:
- kind: ServiceAccount
  name: proxyinjector
  namespace: security
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: proxyinjector
    group: com.stakater.platform
    provider: stakater
    version: v0.0.23
    chart: "proxyinjector-v0.0.23"
    release: "release-name"
    heritage: "Tiller"
  name: proxyinjector
  namespace: security
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: proxyinjector
      group: com.stakater.platform
      provider: stakater
  template:
    metadata:
      labels:
        app: proxyinjector
        group: com.stakater.platform
        provider: stakater
    spec:
      containers:
      - env:
        - name: CONFIG_FILE_PATH
          value: "/etc/ProxyInjector/config.yml"
        image: "stakater/proxyinjector:v0.0.23"
        imagePullPolicy: IfNotPresent
        name: proxyinjector
        volumeMounts:
        - mountPath: /etc/ProxyInjector
          name: config-volume
      serviceAccountName: proxyinjector
      volumes:
      - secret:
          secretName: proxyinjector
        name: config-volume
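(Side note: the level=error message quoted at the top of this issue comes from the injector's own log, which with these manifests can be followed with:)
kubectl logs -n security deployment/proxyinjector -f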
And here is a deployment I'm trying to annotate:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: clusterinfo
  namespace: kube-system
  labels:
    app: clusterinfo
  annotations:
    authproxy.stakater.com/enabled: "true"
    authproxy.stakater.com/redirection-url: https://sso.tolron.fr
    authproxy.stakater.com/source-service-name: clusterinfo
    authproxy.stakater.com/target-port: "3000"
    authproxy.stakater.com/upstream-url: http://127.0.0.1:8080
spec:
  replicas: 1
  selector:
    matchLabels:
      app: clusterinfo
  template:
    metadata:
      labels:
        app: clusterinfo
    spec:
      containers:
      - name: clusterinfo
        image: "stolron/clusterinfo"
        imagePullPolicy: Always
I am also having the same issue here. Not sure what's wrong.
Never mind, I managed to get it working. For those who are having the same issue, you can remove the proxyconfig: key in the configmap, so that it looks like the example below.
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: proxyinjector
    version: v0.0.23
    group: com.stakater.platform
    provider: stakater
    chart: "proxyinjector-v0.0.23"
    release: "proxyinjector"
    heritage: "Tiller"
  name: proxyinjector
data:
  config.yml: |-
    gatekeeper-image: "keycloak/keycloak-gatekeeper:6.0.1"
    enable-default-deny: true
    secure-cookie: false
    verbose: true
    enable-logging: true
    cors-origins:
    - '*'
    cors-methods:
    - GET
    - POST
    resources:
    - uri: '/*'
      scopes:
      - 'good-service'
Is there any news about this?
@Stolr remove the proxyconfig: key in the configmap as described by @kw-jk above. PRs are welcome for a permanent fix. :)
Awesome @usamaahmadkhan, thanks!
I am also getting the same issue. Error:
Error: cannot patch "search-manager" with kind Deployment: Deployment.apps "search-manager" is invalid: spec.template.spec.containers[1].image: Required value

  with module.service_e2e.helm_release.main,
  on ../../../../modules/search-manager/helm.tf line 1, in resource "helm_release" "main":
   1: resource "helm_release" "main" {
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "alto-default.fullname" . }}
  labels:
    {{- include "alto-default.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "alto-default.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "alto-default.labels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "alto-default.serviceAccountName" . }}
      {{- if .Values.topologySpreadConstraints }}
      topologySpreadConstraints:
        {{- include "alto-default.tplvalues.render" (dict "value" .Values.topologySpreadConstraints "context" .) | nindent 8 }}
      {{- end }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ include "alto-default.fullname" . }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          {{- include "alto-default.envVars" . | nindent 10 }}
          ports:
            - name: http
              containerPort: {{ .Values.service.targetPort }}
              protocol: TCP
            {{- if and .Values.metrics.enabled .Values.service.metricsPort }}
            - name: metrics
              containerPort: {{ .Values.service.metricsPort }}
              protocol: TCP
            {{- end }}
          livenessProbe:
            httpGet:
              path: {{ .Values.health.livenessProbe.path }}
              port: http
            periodSeconds: {{ .Values.health.livenessProbe.periodSeconds }}
            initialDelaySeconds: {{ .Values.health.livenessProbe.initialDelaySeconds }}
            timeoutSeconds: {{ .Values.health.livenessProbe.timeoutSeconds }}
            failureThreshold: {{ .Values.health.livenessProbe.failureThreshold }}
            successThreshold: {{ .Values.health.livenessProbe.successThreshold }}
          readinessProbe:
            httpGet:
              path: {{ .Values.health.readinessProbe.path }}
              port: http
            periodSeconds: {{ .Values.health.readinessProbe.periodSeconds }}
            initialDelaySeconds: {{ .Values.health.readinessProbe.initialDelaySeconds }}
            timeoutSeconds: {{ .Values.health.readinessProbe.timeoutSeconds }}
            failureThreshold: {{ .Values.health.readinessProbe.failureThreshold }}
            successThreshold: {{ .Values.health.readinessProbe.successThreshold }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          {{- if .Values.efs.id }}
          volumeMounts:
            - name: efs-data
              mountPath: /mnt/data
          {{- end }}
        {{- if .Values.sidecars }}
        {{- include "alto-default.tplvalues.render" (dict "value" .Values.sidecars "context" $) | nindent 8 }}
        {{- end }}
        {{- if .Values.efs.id }}
            - name: aws-gcp-configmap-volume
              mountPath: /var/run/secrets
      volumes:
        - name: efs-data
          persistentVolumeClaim:
            claimName: {{ include "alto-default.fullname" . }}
        {{- end }}
      {{- with .Values.nodeSelector }}
        - name: aws-gcp-configmap-volume
          configMap:
            name: aws-gcp-config
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
Configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-gcp-config
data:
  aws-gcp-provider-us-qa.json: |
    {
      "type": "external_account",
      "audience": "//iam.googleapis.com/projects/1092006856739/locations/global/workloadIdentityPools/aws-pool-search-manager/providers/aws-pool-searchmanger-provide",
      "subject_token_type": "urn:ietf:params:aws:token-type:aws4_request",
      "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/[email protected]:generateAccessToken",
      "token_url": "https://sts.googleapis.com/v1/token",
      "credential_source": {
        "environment_id": "aws1",
        "region_url": "http://169.254.169.254/latest/meta-data/placement/availability-zone",
        "url": "http://169.254.169.254/latest/meta-data/iam/security-credentials",
        "regional_cred_verification_url": "https://sts.{region}.amazonaws.com?Action=GetCallerIdentity&Version=2011-06-15"
      }
    }
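(Side note on debugging this: containers[1] is the second container in the rendered pod spec. Rendering the chart locally and looking at the container list should show which entry ends up without an image; the release name, chart path, and values file below are placeholders:)
helm template search-manager ./path/to/chart -f values.yaml | grep -nE '(- name:|image:)'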