wazuh-kubernetes
Issues in local deployment with Minikube on 4.2.2
I was doing some tests of the local deployment with Minikube and I found some issues:

- The documented resource requirements are below what is actually necessary: https://documentation.wazuh.com/current/deploying-with-kubernetes/kubernetes-local-env.html#resource-requirements. According to my tests, at least 10 GB of memory and 8 CPUs are needed.
- The resource limits for the Elasticsearch pods in `wazuh/elastic_stack/elasticsearch/cluster/elasticsearch-sts.yaml` should be raised. Current: `cpu: 500m`, `memory: 1Gi`; suggested: `cpu: 1`, `memory: 1564Mi`.
- There is an issue with the names of the ConfigMap and Secret for Elasticsearch. They are generated with a dynamic name, which causes a problem when the second pod comes up, since it looks for the fixed names `odfe-ssl-certs` and `elastic-odfe-conf`.
- Check whether it is correct for the services to be of type LoadBalancer in local deployments, since outside a cloud environment they remain stuck in `Pending`.
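On the dynamic ConfigMap/Secret names: if these objects are produced by kustomize generators (an assumption; kustomize appends a content-hash suffix to generated names by default), one possible sketch of a fix is to disable the suffix in the environment's kustomization so pods find the fixed names they reference:

```yaml
# Sketch, assuming the names come from kustomize generators in a file
# such as envs/local-env/kustomization.yaml (exact layout may differ).
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# Stop kustomize from appending a content-hash suffix to generated
# ConfigMaps/Secrets, so references to odfe-ssl-certs and
# elastic-odfe-conf resolve to stable names.
generatorOptions:
  disableNameSuffixHash: true
```

Note that disabling the hash also disables the automatic rollout that kustomize triggers when generated content changes, so config updates then require a manual restart of the pods.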
Hello, I was doing some tests as well on the Kubernetes KinD distribution, and I confirm that the amount of resources suggested in the local-env doc was not enough for me either: my pod `wazuh-elasticsearch-0` was not running and stayed in `CrashLoopBackOff`.
Details to reproduce
- I follow the Wazuh Doc v4.2: Deployment on local environment.
- I use a virtual machine with:
  - 4 vCPU
  - ~8Gi RAM
- On this VM I install a Kubernetes distribution running on top of Docker: a Kubernetes KinD cluster with 3 nodes.
```
kubectl get nodes
NAME                          STATUS   ROLES                  AGE   VERSION
wazuh-backend-control-plane   Ready    control-plane,master   18h   v1.21.1
wazuh-backend-worker          Ready    <none>                 18h   v1.21.1
wazuh-backend-worker2         Ready    <none>                 18h   v1.21.1
```
- On this Kubernetes KinD cluster, there is a StorageClass of type `rancher.io/local-path` that is preinstalled.

```
kubectl get sc
NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  18h
```

Have a look at KinD v0.11.1 Storage provisioner. We will use this information later.
- I modify wazuh-kubernetes's storage-class file in `envs/local-env/storage-class.yaml`, adding an annotation and a `reclaimPolicy`. This step was necessary: without this annotation, the PersistentVolume could not be created by the StorageClass.
```yaml
# Wazuh StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: wazuh-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```
- I add more resources to wazuh-elasticsearch by customizing the file `envs/local-env/elastic-resources.yaml`. This step is necessary: without it, the pod `wazuh-elasticsearch-0` was not running and stayed in `CrashLoopBackOff`.
```yaml
# envs/local-env/elastic-resources.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: wazuh-elasticsearch
  namespace: wazuh
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: wazuh-elasticsearch
          resources:
            requests:
              cpu: 1
              memory: 1564Mi
            limits:
              cpu: 1
              memory: 2Gi
```
- I run `kubectl apply -k envs/local-env/`.
- After a couple of minutes everything looks fine.
```
kubectl get pods -n wazuh
NAME                           READY   STATUS    RESTARTS   AGE
wazuh-elasticsearch-0          1/1     Running   0          44m
wazuh-kibana-8cf8b766b-hxz5t   1/1     Running   0          44m
wazuh-manager-master-0         1/1     Running   0          44m
wazuh-manager-worker-0         1/1     Running   0          44m
```
If I do not add the annotation to the StorageClass and do not give Elasticsearch more resources (the image `amazon/opendistro-for-elasticsearch:1.13.2`), wazuh-kubernetes does not work on Kubernetes KinD.
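Regarding the LoadBalancer question raised in the issue: on local clusters such as Minikube or KinD, a Service of type `LoadBalancer` stays in `Pending` because there is no cloud controller to provision an external load balancer. One possible workaround, sketched here as a kustomize strategic-merge patch (the service name `wazuh` is an assumption; check the actual names with `kubectl get svc -n wazuh`), is to switch the type to `NodePort` for local environments:

```yaml
# Hypothetical patch for a local environment: change the service type
# from LoadBalancer to NodePort so it does not stay Pending on KinD.
# The name "wazuh" is an assumption; verify against the real service.
apiVersion: v1
kind: Service
metadata:
  name: wazuh
  namespace: wazuh
spec:
  type: NodePort
```

Referenced from the environment's `kustomization.yaml` under `patchesStrategicMerge`, this keeps the cloud manifests untouched while making the service reachable on a node port locally.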