KubeArmor
feat: support toleration config
Purpose of PR?:
Fixes #1720
Does this PR introduce a breaking change?
Probably not.
If the changes in this PR are manually verified, list down the scenarios covered:
Additional information for reviewer?:
I'm new to developing operator CRDs, so I'd appreciate suggestions on coding style or anything important I may have missed.
By the way, I haven't written tests yet and would appreciate advice on that as well.
Checklist:
- [x] Bug fix. Fixes #1720
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [x] This change requires a documentation update
- [x] PR Title follows the convention of `<type>(<scope>): <subject>`
- [ ] Commit has unit tests
- [ ] Commit has integration tests
@tico88612 please rebase onto main and squash the commits.
@rksharma95 rebased. By the way, I'm new to developing operator CRDs, so I'd appreciate suggestions on coding style or anything important I may have missed.
The changes look good to me :+1:. Let me know if you have any specific questions and I'll do my best to answer them.
- Manual verification on the maintainers' end // @rksharma95
@tico88612 I tried to test the PR and it seems there is an issue: the operator is not able to handle the toleration config.
`kubearmorconfig`:

```yaml
spec:
  kubeRbacProxyImage:
    imagePullPolicy: Always
  kubearmorControllerImage:
    image: kubearmor/kubearmor-controller:latest
    imagePullPolicy: Always
  kubearmorImage:
    image: kubearmor/kubearmor:stable
    imagePullPolicy: Always
  kubearmorInitImage:
    image: kubearmor/kubearmor-init:stable
    imagePullPolicy: Always
  kubearmorRelayImage:
    image: kubearmor/kubearmor-relay-server:latest
    imagePullPolicy: Always
  kubearmorRelayToleration:
  - effect: NoSchedule
    key: arch
    operator: Equal
    value: amd64
```
Relay Pod:

```shell
> kubectl get pods -n kubearmor kubearmor-relay-8464877449-t6gcr -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP            NODE                                           NOMINATED NODE   READINESS GATES
kubearmor-relay-8464877449-t6gcr   1/1     Running   0          23h   10.84.3.240   gke-ai-ml-test-ka-default-pool-8482024a-1kmx   <none>           <none>

> kubectl get pod -n kubearmor kubearmor-relay-8464877449-t6gcr -o jsonpath='{.spec.tolerations}'
[{"effect":"NoExecute","key":"node.kubernetes.io/not-ready","operator":"Exists","tolerationSeconds":300},{"effect":"NoExecute","key":"node.kubernetes.io/unreachable","operator":"Exists","tolerationSeconds":300}]
```
Node:

```shell
> kubectl get node gke-ai-ml-test-ka-default-pool-8482024a-1kmx -o jsonpath='{.spec.taints}'
[{"effect":"NoSchedule","key":"arch","value":"arm64"}]
```
What might I have missed in the modifications?
I don't see anything missing, there might be something not working as expected.
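For anyone picking this up: the symptom above is that the relay pod only carries the default node-lifecycle tolerations, so the tolerations from the `kubearmorRelayToleration` field are presumably never merged into the relay Deployment's pod template during reconciliation. A minimal, self-contained sketch of that merge step is below; note that `Toleration` here is a stand-in struct for illustration, not KubeArmor's actual operator types or `corev1.Toleration`.

```go
package main

import "fmt"

// Toleration is a stand-in for corev1.Toleration, for illustration only.
type Toleration struct {
	Key      string
	Operator string
	Value    string
	Effect   string
}

// mergeTolerations appends the tolerations declared in the KubeArmorConfig
// spec onto whatever defaults the pod template already carries. If the
// operator skips this step, the rendered pod keeps only the defaults,
// which matches the behavior observed above.
func mergeTolerations(podDefaults, fromSpec []Toleration) []Toleration {
	merged := make([]Toleration, 0, len(podDefaults)+len(fromSpec))
	merged = append(merged, podDefaults...)
	merged = append(merged, fromSpec...)
	return merged
}

func main() {
	defaults := []Toleration{
		{Key: "node.kubernetes.io/not-ready", Operator: "Exists", Effect: "NoExecute"},
	}
	fromSpec := []Toleration{
		{Key: "arch", Operator: "Equal", Value: "amd64", Effect: "NoSchedule"},
	}
	for _, t := range mergeTolerations(defaults, fromSpec) {
		fmt.Printf("%s %s %s %s\n", t.Key, t.Operator, t.Value, t.Effect)
	}
}
```

In the real operator the equivalent append would target the relay Deployment's `spec.template.spec.tolerations` while building or updating the object.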
@tico88612 any update here? let me know if you need any assistance.
Hi @rksharma95, I'm sorry for not getting back to you sooner. I haven't read the development guide before, so please give me some more time to study it.
@tico88612 is there any update? Are you still working on it?
@rksharma95 Sorry for the slow reply. Since I haven't had time to work on this lately, I've decided to hand this work off to other potential contributors!