ack iam watchNamespace deny
Is your feature request related to a problem?
We have two ACK IAM controllers running in a cluster, each with a different IAM policy attached. We want one of them to watch some namespaces (ideally with a wildcard pattern) and the other to watch all remaining namespaces, excluding the ones already watched by the first.
Describe the solution you'd like
- to be able to select multiple namespaces
- to be able to exclude multiple namespaces
Describe alternatives you've considered
Having each controller watch just a single namespace, which is not ideal.
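For illustration only, the requested configuration surface might look something like the Helm values sketch below; `watchNamespaces` and `denyNamespaces` are hypothetical keys that do not exist in the ACK charts today:

```yaml
# Hypothetical values sketch for two controller instances; the keys below are
# invented for illustration and are not real ACK chart values.

# Controller 1: watch a wildcard set of namespaces
watchNamespaces:
  - "infra-prod-*"

# Controller 2: watch everything else, excluding controller 1's namespaces
denyNamespaces:
  - "infra-prod-*"
```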
@FernandoMiguel Hi! What task / problem are you trying to solve?
I have almost the same use case. As an admin I want to have many AWS accounts (representing different environments, like QA, Dev, UAT, Prod...) and put the corresponding manifests into separate namespaces (like infra-qa, infra-dev, infra-uat, infra-prod...). I think this can already be achieved today. I also want you to consider two things:
- Wouldn't it be better not just to include/exclude namespaces by a list of names or a mask, but to label or annotate the namespaces and then select the target namespaces based on those annotations and/or labels, much like multiple ingress controllers or storage drivers can co-exist in one cluster? Effectively we would get something like:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: infra-prod-tenant-1
  annotations:
    iam.aws.controller.com: controller-1
---
apiVersion: v1
kind: Namespace
metadata:
  name: infra-prod-tenant-2
  annotations:
    iam.aws.controller.com: controller-1
---
apiVersion: v1
kind: Namespace
metadata:
  name: infra-dev-tenant-1
  annotations:
    iam.aws.controller.com: controller-dev
```
  and this label/annotation would somehow be passed to the controller itself (a sketch of that idea follows this list)?
- What about RBAC? When the IAM controller watches just a single namespace, it is easy to write a namespaced RBAC rule (see the example below). But if you want to manage an unknown number of namespaces (defined by a wildcard mask), you will be forced to use cluster-wide RBAC, since Kubernetes RBAC does not support wildcards... and that will lead to issues that are very hard to track down.
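On the first point, here is a purely hypothetical sketch of how the namespace annotation could be handed to a controller instance, e.g. as a selector argument on its Deployment; the `--namespace-selector` flag is invented for illustration and does not exist in the ACK IAM controller:

```yaml
# Hypothetical: each controller instance is started with a selector that tells it
# which annotated namespaces to reconcile. The flag name and image tag are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ack-iam-controller-1
  namespace: ack-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ack-iam-controller-1
  template:
    metadata:
      labels:
        app: ack-iam-controller-1
    spec:
      containers:
        - name: controller
          image: public.ecr.aws/aws-controllers-k8s/iam-controller:v0.0.0  # illustrative
          args:
            - --namespace-selector=iam.aws.controller.com=controller-1  # hypothetical flag
```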
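On the RBAC point, a single-namespace setup can stay fully namespaced, roughly like the sketch below (the `iam.services.k8s.aws` API group and resource names are written from memory and may not match the ACK IAM chart exactly), whereas a wildcard set of namespaces cannot be enumerated up front and therefore forces a ClusterRole/ClusterRoleBinding:

```yaml
# Sketch: namespaced RBAC for a controller that only manages infra-prod-tenant-1.
# The API group and resource list are illustrative, not copied from the ACK IAM chart.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ack-iam-controller
  namespace: infra-prod-tenant-1
rules:
  - apiGroups: ["iam.services.k8s.aws"]
    resources: ["roles", "policies"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ack-iam-controller
  namespace: infra-prod-tenant-1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ack-iam-controller
subjects:
  - kind: ServiceAccount
    name: ack-iam-controller
    namespace: ack-system
```

With a wildcard mask the same permissions would have to be granted cluster-wide, which is exactly the hard-to-audit blast radius described above.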
Issues go stale after 180d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 60d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle stale
/remove-lifecycle stale