kubernetes-replicator
Proposal: Configure replication with CR
Acknowledgement
This idea was originally presented by @HansK-p in #40.
Issue
Sometimes, deployment tools (or something as simple as `kubectl apply`) mess with already deployed (and replicated) secrets or config maps. This is especially the case for secrets of a type other than `Opaque`: these have to be created with empty initial values (like `tls.crt=""` for secrets of type `kubernetes.io/tls`), which are then overridden by the replicator (and then overridden again by the next deployment). This is the main issue in #40, and it is also influential in #11, #23 and #28.
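For illustration, this is the kind of placeholder a deployment currently has to ship with the annotation-based approach (the secret and namespace names here are illustrative); the replicator and the deployment tool then take turns overwriting it:

```yaml
# Placeholder that deployment tooling has to apply today: a kubernetes.io/tls
# secret with empty initial values, which the replicator overwrites
# (and the next `kubectl apply` overwrites again).
apiVersion: v1
kind: Secret
metadata:
  name: target-secret
  namespace: my-namespace
  annotations:
    replicator.v1.mittwald.de/replicate-from: some-namespace/some-source-secret
type: kubernetes.io/tls
data:
  tls.crt: ""
  tls.key: ""
```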
Another issue is that some source secrets are created automatically (by cert-manager, for example) without the option of adding custom annotations (also in #40).
Proposed solution
Introduce a new custom resource that describes secret (or configmap) replication, instead of using annotations:
```yaml
apiVersion: replicator.mittwald.systems/v1
kind: ReplicationConfig
metadata:
  name: target-secret
spec: # written by user
  source:
    namespace: some-namespace
    name: some-source-secret
    kind: Secret
status: # written by replicator
  phase: Replicated
  target:
    namespace: my-namespace
    name: target-secret
    kind: Secret
```
The replicator would observe these CRs and create the replicated secrets itself. This would have the advantage that the replicated secrets would be created and fully owned by the replicator (without any deployment tools messing with them). We could also ensure that they are created with the correct secret type to begin with.
Roadmap
To build this, we might want to consider migrating this controller to the Operator SDK. We would probably be able to port and mostly keep the main reconciliation loops as they are. Then, a new controller could be added to handle the `ReplicationConfig` CRs.
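For illustration only, here is a minimal sketch of what such a reconciler could look like with controller-runtime (which the Operator SDK builds on). The `replicatorv1` API package and the `ReplicationConfig` Go types are assumptions derived from the YAML above, not existing code; the sketch also only handles secrets and creates the target in the CR's own namespace:

```go
package controllers

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	// Hypothetical API package generated for the proposed CRD.
	replicatorv1 "github.com/mittwald/kubernetes-replicator/api/v1"
)

// ReplicationConfigReconciler reconciles the proposed ReplicationConfig resources.
type ReplicationConfigReconciler struct {
	client.Client
}

func (r *ReplicationConfigReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Load the ReplicationConfig that triggered this reconciliation.
	var cfg replicatorv1.ReplicationConfig
	if err := r.Get(ctx, req.NamespacedName, &cfg); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Fetch the source secret referenced in spec.source.
	var source corev1.Secret
	srcKey := types.NamespacedName{Namespace: cfg.Spec.Source.Namespace, Name: cfg.Spec.Source.Name}
	if err := r.Get(ctx, srcKey, &source); err != nil {
		return ctrl.Result{}, err
	}

	// Create or update the target secret in the CR's own namespace. Because the
	// replicator creates it from scratch, it can set the correct secret type
	// (e.g. kubernetes.io/tls) right away, with no empty placeholder values.
	target := &corev1.Secret{ObjectMeta: metav1.ObjectMeta{
		Name:      cfg.Name,
		Namespace: cfg.Namespace,
	}}
	_, err := ctrl.CreateOrUpdate(ctx, r.Client, target, func() error {
		target.Type = source.Type
		target.Data = source.Data
		// The owner reference marks the secret as fully owned by the replicator.
		return ctrl.SetControllerReference(&cfg, target, r.Scheme())
	})
	return ctrl.Result{}, err
}

func (r *ReplicationConfigReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&replicatorv1.ReplicationConfig{}).
		Owns(&corev1.Secret{}).
		Complete(r)
}
```

Watching owned secrets (`Owns`) would also let the controller re-replicate immediately if a deployment tool overwrites a target.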
I must admit I didn't expect to be asked to evaluate/think about a design. I would say it is tempting to reuse the existing design.
So maybe a CRD that allows secrets to be replicated from a source namespace, something like:
```yaml
apiVersion: replicator.mittwald.systems/v1
kind: ReplicationSourceConfig
metadata:
  name: source-secrets
  namespace: source-namespace
spec: # written by user
  sources:
    - name: some-source-secret1
      kind: Secret
    - name: some-source-secret2
      kind: Secret
  allowedNamespaces: # Allow both specific namespaces and patterns
    - name: target-namespace1
    - name: target-namespace2
    - name: ingress-.*
status: # written by replicator
  sources:
    - name: some-source-secret1
      found: True/False
      version: "32971510"
    - name: some-source-secret2
      found: True/False
      version: "32432434"
```
And a CRD for fetching secrets:
```yaml
apiVersion: replicator.mittwald.systems/v1
kind: ReplicationTargetConfig
metadata:
  name: target-secrets
  namespace: target-namespace1
spec: # written by user
  source:
    namespace: source-namespace
    name: some-source-secret1
    kind: Secret
  targets: # There can be more than one target, although I don't have a use case for it
    - name: target-secret1
    - name: target-secret2
status: # written by replicator
  targets:
    - name: target-secret1
      kind: Secret
      phase: Replicated
      replicatedAt: "2020-05-01T18:56:25Z"
      replicatedFromVersion: "32971510"
    - name: target-secret2
      kind: Secret
      phase: Replicated
      replicatedAt: "2020-05-01T18:56:26Z"
      replicatedFromVersion: "32971510"
```
It should not be a technical problem to add CRDs that can also solve other use cases, like the one suggested. The following CRD could solve #5:
```yaml
apiVersion: replicator.mittwald.systems/v1
kind: ReplicationConfig
metadata:
  name: target-secret
  namespace: some-namespace
spec: # written by user
  source:
    name: some-source-secret
    kind: Secret
  targets:
    - namespace: some-destination-namespace1
      name: some-target-secret
    - namespace: some-destination-namespace2
      name: some-target-secret
status: # written by replicator
  source:
    status: Found
    version: "4353453453"
  targets:
    - namespace: some-destination-namespace1
      name: some-target-secret
      kind: Secret
      phase: Replicated
      replicatedAt: "2020-05-01T18:56:25Z"
      replicatedFromVersion: "32971510"
    - namespace: some-destination-namespace2
      name: some-target-secret
      kind: Secret
      phase: Replicated
      replicatedAt: "2020-05-01T18:56:26Z"
      replicatedFromVersion: "32971510"
```
I'm extremely far from being an expert on CRDs, etc., so I can't guarantee that these suggestions will actually work.
I've (finally) written a Golang operator based on the Operator SDK. This operator gracefully restarts deployments, etc., after a configmap or secret has been changed.
I'm not doing a lot of programming, so the code isn't too good and can probably be optimized a lot, but:
- The testing framework in the Operator SDK seems pretty OK (though I don't have a lot to compare it with)
- It should be fairly easy to use the Operator SDK to enable CRDs for kubernetes-replicator

But the "should be fairly easy" part is just my guess. I've only had a five-minute look at the existing code.
There has not been any activity on this issue in the last 30 days. It will automatically be closed after 7 more days. Remove the `stale` label to prevent this.
Sounds very good. Can I help with anything?
We are very interested in this feature, since kubernetes-replicator currently does not play nicely with cluster-updating tools like ArgoCD. Is there any plan to move forward with this proposal? Would you be interested if we provide a PR with the needed refactorings?
> Is there any plan to move forward with this proposal?
@makeItFuckingSustainable I'm still very much in favour of this idea, but, to be honest, I probably won't find the time to actually implement it myself in the immediate future.
> Would you be interested if we provide a PR with the needed refactorings?
Absolutely! Any contribution towards implementing this proposal would be most welcome! 😍
Just my 2 cents, if this ever goes forward: please allow the CRs to be wrapped in annotated ConfigMaps; at my company, teams don't have cluster permissions.
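A minimal sketch of what that could look like; the marker annotation and the embedded `config.yaml` format are assumptions, not part of the proposal above:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: target-secret-replication
  namespace: my-namespace
  annotations:
    # Hypothetical marker annotation telling the replicator to treat this
    # ConfigMap's payload as a ReplicationConfig spec.
    replicator.v1.mittwald.de/replication-config: "true"
data:
  config.yaml: |
    source:
      namespace: some-namespace
      name: some-source-secret
      kind: Secret
```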