possibility to update the image name to private repository
Name and Version
bitnami/oauth2-proxy:3.2.1
What is the problem this feature will solve?
Hello,
Hello,
We have been using the Bitnami oauth2-proxy chart for two years. A custom request came in for a project, so we had to update the Docker image to bake in an additional feature. We were able to bake in the feature and push the image to a private repository, but deployment then hits one nasty issue.
I am using Helm 3.9.1 to deploy applications, but on deploy it gets an incorrect image name, which causes the whole deployment to fail.
During local development in docker-desktop, the image name comes out as
docker.local/docker.local/test-oauth2:d3b2d39e81d38e4696afba
which prevents the pod from completing startup. The oauth2 section of values.yaml looks like this:
```yaml
oauth2-proxy:
  enabled: true
  image:
    registry: 'docker.local'
    repository: 'test-oauth2'
    pullPolicy: Always
    tag: null
  config:
    existingSecret: oauth2-proxy
  extraArgs:
    - --provider=oidc
    - --scope=openid offline_access profile email read:current_user update:current_user_metadata read:user_idp_tokens
< redacted >
```
I tried setting the registry to '', but then the pod gets the image name
/docker.local/test-oauth2:d3b2d39e81d38e4696afba
On further analysis and debugging, I found that this comes from _images.tpl:
```
{{/* vim: set filetype=mustache: */}}
{{/*
Return the proper image name
{{ include "common.images.image" ( dict "imageRoot" .Values.path.to.the.image "global" $) }}
*/}}
{{- define "common.images.image" -}}
{{- $registryName := .imageRoot.registry -}}
{{- $repositoryName := .imageRoot.repository -}}
{{- $separator := ":" -}}
{{- $termination := .imageRoot.tag | toString -}}
{{- if .global }}
{{- if .global.imageRegistry }}
{{- $registryName = .global.imageRegistry -}}
{{- end -}}
{{- end -}}
{{- if .imageRoot.digest }}
{{- $separator = "@" -}}
{{- $termination = .imageRoot.digest | toString -}}
{{- end -}}
{{- printf "%s/%s%s%s" $registryName $repositoryName $separator $termination -}}
{{- end -}}
```
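For reference, one way a helper like this could avoid the leading slash when the registry is empty would be to guard the final `printf` (this is a hypothetical sketch of a change, not the template that the common library actually ships):

```
{{- if $registryName -}}
{{- printf "%s/%s%s%s" $registryName $repositoryName $separator $termination -}}
{{- else -}}
{{- printf "%s%s%s" $repositoryName $separator $termination -}}
{{- end -}}
```

With a guard like that, `registry: ''` would render `test-oauth2:latest` instead of `/test-oauth2:latest`.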
but I am certainly not able to override this helper function from outside the chart.
What is the feature you are proposing to solve the problem?
Users who build a modified Dockerfile and push it to a private repository should be able to use those private images with the chart.
What alternatives have you considered?
If it's not possible, then moving back to an older oauth2-proxy chart version where the common library was not up to date, or using some other oauth2-proxy chart.
What is the combination of registry, repository and tag you would like to use? In the case of the Bitnami images, by default the registry is set to `docker.io`, the repository to `bitnami/oauth2-proxy` (which is the path), and then the tag.
For example, if I have my own registry at `docker.local` and the path for the image inside that registry is `carrodher/oauth2`, with the tag `latest`, I can use something like:
```yaml
image:
  registry: 'docker.local'
  repository: 'carrodher/oauth2'
  tag: latest
```
This should generate the following image URL: docker.local/carrodher/oauth2:latest
We can test whether the deployment will effectively use that image URL by running:

```console
$ helm template bitnami/oauth2-proxy -f myvalues.yaml | grep image:
          image: docker.local/carrodher/oauth2:latest
          image: docker.io/bitnami/redis:7.0.4-debian-11-r11
```
We can see how the main image was modified to use the custom URL.
Hi,
I completely understand your thought process; ideally it should work that way, but it does not with the code as implemented.
Try it out locally and you will encounter the same issue as me.
Here is what the local file looks like:
```yaml
oauth2:
  enabled: true
  image:
    registry: 'docker.local'
    repository: 'test-oauth2'
    pullPolicy: Always
    tag: null
```
It's placed at ./config/local/values.yaml. As you suggested, I tried:

```console
helm template bitnami/oauth2-proxy -f ./config/local/values.yaml | grep image:
          image: docker.io/bitnami/oauth2-proxy:7.3.0-debian-11-r23
          image: docker.io/bitnami/redis:7.0.4-debian-11-r11
```
My cloud repository is on AWS, where we don't have a registry, only a repository. So when we set the registry to '' and give the repository a value, an additional / gets prepended by default.
```yaml
oauth2:
  enabled: true
  image:
    registry: ''
    repository: '<aws-blah-blah.region.xyz>/test-oauth2'
    pullPolicy: Always
    tag: latest
```
I hope I have added the additional details required; let me know so we can find a solution.
Bitnami takes the assumption that there is always a registry, which is possibly not true for all cases.
It was working in version 1.3.0; I want to update the charts, but it's no longer possible.
> I will say try out locally you will encounter the same issue as me.
> here is what the local file looks like:
>
> ```yaml
> oauth2:
>   enabled: true
>   image:
>     registry: 'docker.local'
>     repository: 'test-oauth2'
>     pullPolicy: Always
>     tag: null
> ```
>
> It's placed at the location ./config/local/values.yaml, as mentioned by you I tried to
>
> ```console
> helm template bitnami/oauth2-proxy -f ./config/local/values.yaml | grep image:
>           image: docker.io/bitnami/oauth2-proxy:7.3.0-debian-11-r23
>           image: docker.io/bitnami/redis:7.0.4-debian-11-r11
> ```
Please note you're placing the `image` block under another `oauth2` section. That is done when the values file is applied to a subchart named `oauth2`, so that the specified parameters are applied to that subchart. In the example above, you are placing the `image` block to be used by a subchart, but then you are running `helm template` directly against the main chart. You should place the `image` block at the first/root level of the values file instead of inside the `oauth2` block.
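To illustrate the placement point with the names from the example above (a sketch, reusing the values already shown in this thread):

```yaml
# Takes effect when templating bitnami/oauth2-proxy directly:
image:
  registry: 'docker.local'
  repository: 'test-oauth2'
  pullPolicy: Always
  tag: null

# The nested form only takes effect when oauth2-proxy is installed
# as a subchart named `oauth2` of some parent chart:
# oauth2:
#   image:
#     ...
```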
> my cloud repository is aws and there we don't have any registry, we only have a repository, so when we add a registry as '' and give the repository the value then additional / will be appended by default.
>
> ```yaml
> oauth2:
>   enabled: true
>   image:
>     registry: ''
>     repository: '<aws-blah-blah.region.xyz>/test-oauth2'
>     pullPolicy: Always
>     tag: latest
> ```
In that case, `aws-blah-blah.region.xyz` is the registry, so you should be able to use the following values file to render a proper image URL:
```yaml
image:
  registry: 'aws-blah-blah.region.xyz'
  repository: 'test-oauth2'
  pullPolicy: Always
  tag: latest
```
```console
$ helm template bitnami/oauth2-proxy -f values.yaml | grep image:
          image: aws-blah-blah.region.xyz/test-oauth2:latest
          image: docker.io/bitnami/redis:7.0.4-debian-11-r11
```
Hey,
I understand the `image` block needs to be at the root level, but as our project uses other Bitnami Helm charts like postgres, it's not a good idea to keep it at the global level because it will certainly override their values.
In our case we don't use a registry, only a repository, so the final outcome looks like this after going through your code:
```
{{- printf "%s/%s%s%s" $registryName $repositoryName $separator $termination -}}
```

```
/aws-blah-blah.region.xyz/test-oauth2:SHA256asdfasdfsafasd
```
The way the helpers are now defined in common, they can only work when we are referring to standard Helm charts without any modification.
See the second part of the previous answer: you should be able to set `aws-blah-blah.region.xyz` as the registry and `test-oauth2` as the repository in order to form the full image URL.
Hi,
I did try the solution you suggested but could not make it work.
It comes out as aws-blah-blah.region.xyz/aws-blah-blah.region.xyz/test-oauth2:SHA256asdfasdfsdf
That's probably because you are defining something at the `global` level:

```yaml
global:
  imageRegistry: ""
```

If something is set in the `global` parameter, it will affect the main chart and any subchart.
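As a hypothetical illustration of how the doubled prefix can arise (the values below are reconstructed from the outputs in this thread, not from an actual values file):

```yaml
global:
  imageRegistry: "aws-blah-blah.region.xyz"   # overrides image.registry for main chart and subcharts

image:
  registry: ""                                 # ignored once the global value is set
  repository: "aws-blah-blah.region.xyz/test-oauth2"  # registry repeated inside the path
  tag: latest

# common.images.image would then render:
#   aws-blah-blah.region.xyz/aws-blah-blah.region.xyz/test-oauth2:latest
```

Removing the registry hostname from `repository` (keeping it only in `registry` or `global.imageRegistry`) avoids the duplication.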
It seems it is not an issue related to the Bitnami oauth2-proxy container image or Helm chart but about how the application or environment is being used/configured.
For information regarding the application itself, customization of the content within the application, or questions about the use of technology or infrastructure, we highly recommend checking the forums and user guides made available by the project behind the application or the technology.
That said, we will keep this ticket open until the stale bot closes it just in case someone from the community adds some valuable info.
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.