falcon-operator
OpenShift 4.14 unable to deploy FalconNodeSensor
Hello, I'm trying to deploy FalconNodeSensor on OpenShift 4.14. The operator installed OK, but when I try to create a FalconNodeSensor I get a strange error: Not Found. This is the latest version of the operator from OperatorHub.
I removed the old operator, CRDs, roles, role bindings, service accounts, and everything else I could find, then reinstalled the operator, and it still fails the same way.
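A quick way to check whether the Not Found error comes from a missing CRD registration; the resource and CRD names below are assumptions based on the usual <plural>.<group> naming convention, so adjust as needed:
./oc api-resources --api-group=falcon.crowdstrike.com
./oc get crd falconnodesensors.falcon.crowdstrike.com
If neither command lists the FalconNodeSensor kind, the operator's CRDs never made it onto the cluster and the create request will fail with Not Found.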
@siwyroot
I have a similar problem - the installation hangs - thanks for your hint.
I checked via the command line and found some entries:
$ ./oc get ns | grep -i falc
$
$ ./oc get sa | grep -i falc
$
$ ./oc get crd | grep -i falc
$
$ ./oc get roles | grep -i falc
$
If there are any entries, delete them:
./oc delete .....
For example:
./oc delete crd falconadmissions.falcon.crowdstrike.com
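If several CRDs in the falcon.crowdstrike.com group are left behind, a one-liner along these lines removes them in one pass (a sketch; run the inner get/grep first and review its output before deleting anything):
./oc delete $(./oc get crd -o name | grep falcon.crowdstrike.com)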
But now I'd like to install the Node Sensor:
@n00bsi to fix this:
- Uninstall the operator.
- Delete all CRDs with "falcon" in the name.
- Delete the permission-related objects: oc delete $(oc get clusterrole,clusterrolebinding -l crowdstrike.com/created-by=falcon-operator -o name)
- Depending on your OS, you may have to delete ~/.kube/config; if you are using Flux/Argo CD, restart the pods that apply the YAMLs.
- Reinstall the operator. Now you can apply the node sensor via the CLI (it should also work via the UI); the pods will start in a different namespace (falcon-system), which is created by the operator. A minimal manifest sketch follows this list.
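For reference, a minimal FalconNodeSensor manifest might look roughly like this; the field names (falcon_api, client_id, client_secret, cloud_region) and the node/falcon sections are recalled from the operator docs and should be checked against the sample CR that ships with the operator before use:
apiVersion: falcon.crowdstrike.com/v1alpha1
kind: FalconNodeSensor
metadata:
  name: falcon-node-sensor
spec:
  falcon_api:
    client_id: PLEASE_FILL_IN      # CrowdStrike API client ID (assumed field name)
    client_secret: PLEASE_FILL_IN  # CrowdStrike API client secret (assumed field name)
    cloud_region: autodiscover     # or an explicit region such as us-1, us-2, eu-1
  node: {}
  falcon: {}
Apply it with ./oc apply -f falcon-node-sensor.yaml and watch for pods in the falcon-system namespace.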
Assuming that the resources were cleaned up before the operator was uninstalled, there is an issue in OpenShift where the resources are not always cleaned up internally in a timely manner, which requires you to wait a while for the kube API to finish deleting the resources internally before it will allow the operator to work. You must also clean up all of the resources on the cluster. See https://github.com/CrowdStrike/falcon-operator/tree/main/docs/deployment/openshift#uninstall-the-operator-1
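One way to confirm that the internal cleanup has finished before reinstalling is to poll for any remaining falcon-related objects and reinstall only once nothing comes back (a sketch using standard oc queries):
./oc get crd,clusterrole,clusterrolebinding,ns -o name | grep -i falcon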
Looks like the issue was resolved for everyone, closing. The OpenShift cleanup steps were improved in #554.