k8s-ecr-login-renew
Namespaces blacklist
Hi, I'd like your tiny tool to create the ECR secret in EVERY namespace EXCEPT those contained in a blacklist. Is it difficult to implement?
Thanks for your attention
This is a good idea and I don't think it would be too hard to implement. It might be a few weeks before I have free time to work on it though.
Since the namespace list already supports wildcards, I'm thinking this could be implemented as:
- Create a new EXCLUDED_NAMESPACE env var that follows a similar syntax as TARGET_NAMESPACE
- Update the logic in k8s.GetNamespaces() to take exclusions into account.
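Roughly, the combined include/exclude filtering could look like the sketch below. This is a hypothetical illustration, not the tool's actual code: the function names matchesAny and filterNamespaces are made up here, and Go's path.Match is just one plausible way to get the same `*` wildcard semantics.

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// matchesAny reports whether name matches any pattern in a
// comma-separated glob list such as "kube-*,cattle-*".
func matchesAny(patternList, name string) bool {
	for _, p := range strings.Split(patternList, ",") {
		p = strings.TrimSpace(p)
		if p == "" {
			continue
		}
		// path.Match supports '*' wildcards; namespace names
		// contain no '/' so path semantics are safe here.
		if ok, err := path.Match(p, name); err == nil && ok {
			return true
		}
	}
	return false
}

// filterNamespaces keeps namespaces that match the target list
// and do not match the exclusion list.
func filterNamespaces(all []string, target, exclude string) []string {
	var out []string
	for _, ns := range all {
		if matchesAny(target, ns) && !matchesAny(exclude, ns) {
			out = append(out, ns)
		}
	}
	return out
}

func main() {
	all := []string{"default", "kube-system", "cattle-fleet-system", "local"}
	fmt.Println(filterNamespaces(all, "*", "kube-*,cattle-*")) // [default local]
}
```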
Having the updated Secret in every namespace except the system ones would be great!
Hi, the following Docker image contains support for a new env var called EXCLUDE_NAMESPACE. The syntax is the same as TARGET_NAMESPACE. Would you be willing to try it out? The latest Helm chart supports setting the cronjob.dockerImage value and the excludeNamespace value.
https://hub.docker.com/layers/nabsul/k8s-ecr-login-renew/sha-907a3cb/images/sha256-231227c7fdb093ea65c5809ec28eebd15a5c8994f08e206c90300e2e5b7514d7?context=explore
Great! Actually, I'm currently running k8s-ecr-login-renew in a dedicated namespace and syncing the Secret to all other namespaces (except system ones) via a Kyverno policy, but doing everything with a single tool could be useful. I'll try the new release in the next week and keep you informed.
Thank you
@nabsul TARGET_NAMESPACE='*' and EXCLUDE_NAMESPACE="kube-system, ..." solve the problem of syncing the Secret across all namespaces except the system ones, but they don't make the Secret available immediately after a new namespace is created. In the worst case, I would need to wait for the next CronJob run (usually some hours).
My current solution (using Kyverno) syncs the Secret into new namespaces immediately.
@mimmus There is a way to trigger a cronjob to run immediately: https://github.com/nabsul/k8s-ecr-login-renew#test-the-cron-job
I would need to trigger it whenever a new namespace is created, but I have no control over when that happens.
Ah, I see. That is definitely beyond the capabilities of this tool. Since the tool itself is just a cron job, it doesn't have a way to watch for namespace changes and trigger itself. This would have to be done by another tool, such as Kyverno, which you mentioned.
This is still really useful. Thank you for implementing it!
excludeNamespace / EXCLUDE_NAMESPACE doesn't seem to work when set to "kube-*,cattle-*" in my case.
Here are my Helm values.yaml and the installed manifest:
~$ cat values.aws-123456789012.yaml
# These are the most common parameters that you might want to change.

# This value is required. It determines which AWS region to use.
awsRegion: us-east-1

# Leave these values empty (null) if you are manually pre-creating the AWS secrets.
# If you supply these values, a secret will be created from them.
awsAccessKeyId: null
awsSecretAccessKey: null

# The name of the secret to create containing the Docker credentials for ECR.
dockerSecretName: aws-ecr-123456789012-us-east-1

# Comma-separated list of target Namespaces to create docker secrets in.
targetNamespace: "*"
excludeNamespace: "kube-*,cattle-*"

# By default the tool will create credentials for: https://[ACCOUNT_ID].dkr.ecr.[region].amazonaws.com
# If you need credentials for multiple ECR instances (for different regions, for example), you can populate this value with a comma-separated list.
# Example: registries: https://ACCOUNT_1_ID.dkr.ecr.us-east-1.amazonaws.com,https://ACCOUNT_2_ID.dkr.ecr.us-east-1.amazonaws.com
registries: null

cronjob:
  dockerImage: nabsul/k8s-ecr-login-renew:v1.7.1
  # The schedule of the cronjob
  schedule: "0 */6 * * *"
  # Successful job history limit
  successfulJobsHistoryLimit: 3
  # Failed job history limit
  failedJobsHistoryLimit: 5
  # The deadline after which running the cronjob will be skipped. See https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#cron-job-limitations for defaults and details.
  startingDeadlineSeconds: null
  # Termination grace period in seconds
  terminationGracePeriodSeconds: 0

# Set this to false in order to generate raw YAML that can be used without Helm.
# The main difference is that the default Helm labels are not added to the objects.
forHelm: true

# Below are less commonly changed parameters:

# The names of the various Kubernetes objects that will be created
names:
  job: k8s-ecr-login-renew-job
  cronJob: k8s-ecr-login-renew-cron
  serviceAccount: k8s-ecr-login-renew-account
  clusterRole: k8s-ecr-login-renew-role-123456789012
  clusterRoleBinding: k8s-ecr-login-renew-binding-123456789012

# AWS is accessed using an access key ID and secret access key.
# You can either pre-populate a Kubernetes secret with this information,
# or provide them as Helm values for the secret to be automatically created.
# Set `aws` to null if you will not be authenticating with environment variables.
aws:
  secretName: 'k8s-ecr-login-renew-aws-secret'
  secretKeys:
    accessKeyId: 'AWS_ACCESS_KEY_ID'
    secretAccessKey: 'AWS_SECRET_ACCESS_KEY'

# Pod annotations for the pods that are created by the cronjob
podAnnotations: {}
# `awsAccessKeyId` and `awsSecretAccessKey` have already been set as environment variables.
~$ helm upgrade --install k8s-ecr-login-renew nabsul/k8s-ecr-login-renew \
--namespace aws-123456789012 \
--set "awsAccessKeyId=${awsAccessKeyId}" \
--set "awsSecretAccessKey=${awsSecretAccessKey}" \
--values values.aws-123456789012.yaml
Release "k8s-ecr-login-renew" does not exist. Installing it now.
NAME: k8s-ecr-login-renew
LAST DEPLOYED: Fri Mar 29 12:19:27 2024
NAMESPACE: aws-123456789012
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Congratulations! k8s-ecr-login-renew should now be setup to run in your cluster.
It might be a little while before the cron job gets executed on its schedule.
To kick off a manual run, type: kubectl create job --from=cronjob/k8s-ecr-login-renew-cron k8s-ecr-login-renew-cron-manual-1
~$ helm get manifest k8s-ecr-login-renew
---
# Source: k8s-ecr-login-renew/templates/001-ServiceAccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-ecr-login-renew-account
  namespace: aws-123456789012
  labels:
    app.kubernetes.io/name: k8s-ecr-login-renew
    helm.sh/chart: k8s-ecr-login-renew-1.0.5
    app.kubernetes.io/instance: k8s-ecr-login-renew
    app.kubernetes.io/version: 1.7.1
    app.kubernetes.io/managed-by: Helm
---
# Source: k8s-ecr-login-renew/templates/004-Secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: k8s-ecr-login-renew-aws-secret
  namespace: aws-123456789012
  labels:
    app.kubernetes.io/name: k8s-ecr-login-renew
    helm.sh/chart: k8s-ecr-login-renew-1.0.5
    app.kubernetes.io/instance: k8s-ecr-login-renew
    app.kubernetes.io/version: 1.7.1
    app.kubernetes.io/managed-by: Helm
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: xxxxxxxxx
  AWS_SECRET_ACCESS_KEY: xxxxxxxxx
---
# Source: k8s-ecr-login-renew/templates/002-ClusterRole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: k8s-ecr-login-renew-role-123456789012
  labels:
    app.kubernetes.io/name: k8s-ecr-login-renew
    helm.sh/chart: k8s-ecr-login-renew-1.0.5
    app.kubernetes.io/instance: k8s-ecr-login-renew
    app.kubernetes.io/version: 1.7.1
    app.kubernetes.io/managed-by: Helm
rules:
  - apiGroups: [""]
    resources:
      - namespaces
    verbs:
      - list
  - apiGroups: [""]
    resources:
      - secrets
      - serviceaccounts
      - serviceaccounts/token
    verbs:
      - 'delete'
      - 'create'
      - 'patch'
      - 'get'
---
# Source: k8s-ecr-login-renew/templates/003-ClusterRoleBinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-ecr-login-renew-binding-123456789012
  labels:
    app.kubernetes.io/name: k8s-ecr-login-renew
    helm.sh/chart: k8s-ecr-login-renew-1.0.5
    app.kubernetes.io/instance: k8s-ecr-login-renew
    app.kubernetes.io/version: 1.7.1
    app.kubernetes.io/managed-by: Helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: k8s-ecr-login-renew-role-123456789012
subjects:
  - kind: ServiceAccount
    name: k8s-ecr-login-renew-account
    namespace: aws-123456789012
---
# Source: k8s-ecr-login-renew/templates/005-CronJob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: k8s-ecr-login-renew-cron
  namespace: aws-123456789012
  labels:
    app.kubernetes.io/name: k8s-ecr-login-renew
    helm.sh/chart: k8s-ecr-login-renew-1.0.5
    app.kubernetes.io/instance: k8s-ecr-login-renew
    app.kubernetes.io/version: 1.7.1
    app.kubernetes.io/managed-by: Helm
spec:
  schedule: "0 */6 * * *"
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 5
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app.kubernetes.io/name: k8s-ecr-login-renew
            helm.sh/chart: k8s-ecr-login-renew-1.0.5
            app.kubernetes.io/instance: k8s-ecr-login-renew
            app.kubernetes.io/version: 1.7.1
        spec:
          serviceAccountName: k8s-ecr-login-renew-account
          terminationGracePeriodSeconds: 0
          restartPolicy: Never
          containers:
            - name: k8s-ecr-login-renew
              imagePullPolicy: IfNotPresent
              image: nabsul/k8s-ecr-login-renew:v1.7.1
              env:
                - name: AWS_ACCESS_KEY_ID
                  valueFrom:
                    secretKeyRef:
                      name: k8s-ecr-login-renew-aws-secret
                      key: AWS_ACCESS_KEY_ID
                - name: AWS_SECRET_ACCESS_KEY
                  valueFrom:
                    secretKeyRef:
                      name: k8s-ecr-login-renew-aws-secret
                      key: AWS_SECRET_ACCESS_KEY
                - name: AWS_REGION
                  value: us-east-1
                - name: DOCKER_SECRET_NAME
                  value: aws-ecr-123456789012-us-east-1
                - name: TARGET_NAMESPACE
                  value: "*"
                - name: EXCLUDE_NAMESPACE
                  value: "kube-*,cattle-*"
~$ kubectl logs k8s-ecr-login-renew-cron-manual-1-7n8rc
Running at 2024-03-29 04:20:26.830118419 +0000 UTC
Fetching auth data from AWS... Success.
Docker Registries: https://123456789012.dkr.ecr.us-east-1.amazonaws.com
Updating kubernetes secret [aws-ecr-123456789012-us-east-1] in 11 namespaces
Updating secret in namespace [aws-123456789012]... success
Updating secret in namespace [cattle-fleet-system]... success
Updating secret in namespace [cattle-impersonation-system]... success
Updating secret in namespace [cattle-system]... success
Updating secret in namespace [default]... success
Updating secret in namespace [ingress-nginx]... success
Updating secret in namespace [kube-node-lease]... success
Updating secret in namespace [kube-public]... success
Updating secret in namespace [kube-system]... success
Updating secret in namespace [local]... success
Job complete.