wazuh-docker
wazuh-odfe image without Filebeat
Hi team,
I wanted to ask if it's possible to create a Docker image based on wazuh-odfe but without Filebeat. This requirement is based on the fact that if we want to connect these nodes to Splunk, we don't need that service.
Regards.
We've done this through a fork, and it proved to be very easy. Implementation should be trivial.
Hello team,
It would be useful to install by default the packages that are later used by the Wazuh manager, since we know we will be editing the Dockerfile anyway.
For instance, in order to configure a retention policy on the manager nodes, we are using cronie to rotate the archives/alerts data. The configuration could also be hardcoded to set a time after which the data is removed, to avoid filling up the storage.
Regards,
The Wazuh installation was tested in a container, which managed to run without Filebeat. We are also verifying which directories need to be exposed so that Filebeat can read from them from another container.
The https://github.com/wazuh/wazuh-docker/tree/506-wazuh-manager-without-filebeat branch was created; it contains the new Dockerfile for building the image without Filebeat, together with an agent connection test to verify that everything works smoothly.
With the new Dockerfile, image creation and subsequent deployment were tested. The permanent_data.env file was modified so that it no longer includes the Filebeat directories in the new image.
Two new Dockerfiles were created: one for deploying a container with the Wazuh Manager alone, and another for an image with Filebeat, which will be fed from the volumes of the former.
They are in development within the branch https://github.com/wazuh/wazuh-docker/tree/506-wazuh-manager-without-filebeat, and it is still necessary to separate which dependencies belong to the Wazuh Manager, which to Filebeat, and which to both.
The Wazuh Manager and Filebeat images were generated and a deployment was tested with them, creating a new version of the production-cluster.yml file to deploy the entire application with these new images, but some errors were generated:
[cont-init.d] 2-manager: executing...
2021/09/29 20:17:47 wazuh-analysisd: CRITICAL: (1226): Error reading XML file 'etc/ossec.conf': (line 0).
wazuh-analysisd: Configuration error. Exiting
This error occurs because the main image has to change the permissions of the Wazuh Manager installation files, but for some reason this is not being done, so the processes do not have permission to use these files. A test was carried out by connecting directly to the running container; once the file permissions were fixed manually, the error was no longer generated:
root@wazuh-master:/var# service wazuh-manager start
Starting Wazuh v4.2.1...
wazuh-apid already running...
Started wazuh-csyslogd...
Started wazuh-dbd...
2021/09/29 21:17:54 wazuh-integratord: INFO: Remote integrations not configured. Clean exit.
Started wazuh-integratord...
Started wazuh-agentlessd...
wazuh-authd already running...
wazuh-db did not start correctly.
2021/09/29 21:17:54 wazuh-csyslogd: INFO: Remote syslog server not configured. Clean exit.
2021/09/29 21:17:54 wazuh-dbd: INFO: Database not configured. Clean exit.
2021/09/29 21:17:54 wazuh-integratord: INFO: Remote integrations not configured. Clean exit.
2021/09/29 21:17:54 wazuh-agentlessd: INFO: Not configured. Exiting.
2021/09/29 21:17:54 wazuh-db: INFO: Started (pid: 863).
2021/09/29 21:17:54 wazuh-db: CRITICAL: Unable to bind to socket 'queue/db/wdb': 'Address already in use'. Closing local server.
This is the next error that is generated; the investigation continues to see what may be causing it.
Wazuh Manager alone was started inside a container based on Ubuntu Focal, which is much lighter than CentOS 7, and it was verified that no errors are generated.
A standalone Filebeat image was also created to complement the Docker Compose deployment.
The complete stack was deployed with all the tools (Wazuh Manager, Filebeat, Open Distro and Kibana). Tests with agents connected to the Wazuh Manager still need to be performed to fully verify the general operation.
Two new Dockerfiles were created, which generate the standalone Wazuh Manager image and the Filebeat image.
A new file called production_cluster_mng_fb.yml was created, which deploys the stack with these new images.
An agent connection test was performed and the manager discovers the agent without errors.
We are awaiting confirmation to apply this development.
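Roughly, the two services in production_cluster_mng_fb.yml are wired together through a shared volume; the sketch below is only illustrative (image tags, volume names and the exact volume list are assumptions, not the final contents of the file):

version: '3.7'

services:
  wazuh-manager:
    image: wazuh/wazuh-manager-odfe:4.2.5     # manager-only image (illustrative tag)
    hostname: wazuh-manager
    volumes:
      - wazuh-logs:/var/ossec/logs            # manager writes alerts/archives here

  wazuh-filebeat:
    image: wazuh/filebeat-odfe:4.2.5          # Filebeat-only image (illustrative tag)
    hostname: wazuh-filebeat
    depends_on:
      - wazuh-manager
    volumes:
      - wazuh-logs:/var/ossec/logs:ro         # Filebeat reads the manager's output

volumes:
  wazuh-logs: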
An investigation was carried out to implement a Kubernetes solution to the problem created by separating Wazuh Manager and Filebeat out of the wazuh-odfe container.
For the solution, a sidecar container will be implemented within the same pod, so that block volumes can be shared between both containers and communication between the tools can be maintained.
The work of assembling the Kubernetes manifests and subsequently testing them continues.
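As a sketch of the sidecar approach (container, volume and claim names are illustrative), the pod template of the Wazuh Manager StatefulSet would look roughly like this:

spec:
  containers:
    - name: wazuh-manager
      image: merecu/wazuh-manager-odfe:4.2.5   # manager-only image
      volumeMounts:
        - name: wazuh-logs
          mountPath: /var/ossec/logs           # manager writes alerts/archives here
    - name: wazuh-filebeat
      image: merecu/filebeat-odfe:4.2.5        # sidecar that ships the data
      volumeMounts:
        - name: wazuh-logs
          mountPath: /var/ossec/logs           # same volume, shared between containers
  volumes:
    - name: wazuh-logs
      persistentVolumeClaim:
        claimName: wazuh-manager-logs          # illustrative claim name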
The changes to separate Wazuh Manager and Filebeat were applied on top of the 4.2 branch, which contains the changes from release 4.2.4. The 506-deploy-with-manager-and-filebeat-sidecar branch was created with these changes to test how they behave in containers.
The Kubernetes manifest was modified to include the Wazuh Manager and Filebeat containers, and the volumes to mount were updated, but at deployment time the Filebeat container does not have visibility of the volumes, so it cannot start its services:
root@vcerenu-VirtualBox:/home/vcerenu/Repositorios/wazuh-kubernetes# kubectl logs pod/wazuh-manager-master-0 -n wazuh -c wazuh-filebeat
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 0-config-filebeat: executing...
Customize Elasticsearch ouput IP
sed: can't read /etc/filebeat/filebeat.yml: No such file or directory
[cont-init.d] 0-config-filebeat: exited 2.
[cont-init.d] done.
[services.d] starting services
starting Filebeat
[services.d] done.
Exiting: error loading config file: stat /etc/filebeat/filebeat.yml: no such file or directory
Filebeat exited. code=1
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] waiting for services.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.
root@vcerenu-VirtualBox:/home/vcerenu/Repositorios/wazuh-kubernetes#
The investigation of this error continues.
The volume mount was configured as ReadWriteMany so that the volume could be mounted in the two containers within the Wazuh Manager pod, but the pod did not start; the logs were reviewed and the PVC remained in the Pending state.
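For reference, the change attempted in the StatefulSet volumeClaimTemplates was roughly the following (a sketch; the storage size is illustrative):

volumeClaimTemplates:
  - metadata:
      name: wazuh-manager-master
    spec:
      accessModes:
        - ReadWriteMany            # so the volume can be mounted by both containers
      storageClassName: wazuh-storage
      resources:
        requests:
          storage: 50Gi            # illustrative size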
The status of the PVCs was checked and they had the following error:
root@vcerenu-VirtualBox:/home/vcerenu/Repositorios/wazuh-kubernetes# kubectl describe pvc wazuh-manager-master-wazuh-manager-master-0 -n wazuh
Name: wazuh-manager-master-wazuh-manager-master-0
Namespace: wazuh
StorageClass: wazuh-storage
Status: Pending
Volume:
Labels: app=wazuh-manager
node-type=master
Annotations: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: wazuh-manager-master-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForPodScheduled 7m7s (x381 over 3h53m) persistentvolume-controller waiting for pod wazuh-manager-master-0 to be scheduled
Warning ProvisioningFailed 2m37s (x110 over 3h53m) persistentvolume-controller Failed to provision volume with StorageClass "wazuh-storage": invalid AccessModes [ReadWriteMany]: only AccessModes [ReadWriteOnce] are supported
root@vcerenu-VirtualBox:/home/vcerenu/Repositorios/wazuh-kubernetes#
AWS block storage (EBS) can only be mounted at a single mount point; even though this is a single pod, it needs two mount points, so the same EBS volume cannot be mounted in both containers.
Research continues on creating a storage class that can provision PVs backed by EFS, which can be mounted as ReadWriteMany on one or more mount points.
The creation of PVs and PVCs backed by AWS EFS was investigated, and a Helm chart was found for their generation:
resource "helm_release" "efs-provisioner" {
chart = "efs-provisioner"
name = "efs-provisioner"
namespace = "default"
repository = "https://charts.helm.sh/stable"
values = [
templatefile ("templates / efs-provisioner.yaml", {
aws_region = data.aws_region.current.name,
efs_filesystem_id = var.efs_filesystem_id,
sc_name = local.efs_sc_name
})
]
version = local.efs_provisioner_version
}
I am still working on reviewing its proper functioning and on its subsequent implementation within the wazuh-kubernetes repository, so that PVCs can be generated on both EBS and EFS.
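The templates/efs-provisioner.yaml values file referenced above would look roughly like this (a sketch based on the stable efs-provisioner chart; key names may differ between chart versions, and the path and provisioner name are placeholders):

efsProvisioner:
  efsFileSystemId: ${efs_filesystem_id}    # injected by templatefile()
  awsRegion: ${aws_region}
  path: /pv-wazuh                          # placeholder root directory inside the file system
  provisionerName: example.com/aws-efs     # placeholder provisioner name
  storageClass:
    name: ${sc_name}
    reclaimPolicy: Delete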
The deployment of a new CSI driver for dynamic provisioning of EFS within the EKS cluster was verified.
The following driver was deployed: https://github.com/kubernetes-sigs/aws-efs-csi-driver
This driver supports both manual and automatic provisioning, which is used to deploy the PVCs required by the Wazuh Manager pod.
A new Storage Class was created, which points to the newly deployed driver. The Wazuh Manager StatefulSet manifests were modified so that the new Storage Class is used when creating the necessary Persistent Volume Claim, but when the PVC was created we got the following error:
Warning ProvisioningFailed 61s (x7 over 2m5s) efs.csi.aws.com_ip-192-168-58-98.us-west-1.compute.internal_38c8e958-e136-428b-9813-52cd6581565b failed to provision volume with StorageClass "efs-sc": rpc error: code = Unauthenticated desc = Access Denied. Please ensure you have the right AWS permissions: Access denied
We continue to analyze what permissions may be involved in the automatic deployment of EFS that are not assigned to the EKS role used by the cluster owner.
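For context, the Storage Class pointing at the EFS CSI driver looks roughly like this (the file system ID and directory permissions are placeholders):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap               # dynamic provisioning through EFS access points
  fileSystemId: fs-0123456789abcdef      # placeholder EFS file system ID
  directoryPerms: "700"                  # permissions for the provisioned directories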
EFS was tested successfully. To mount these volumes, several steps were necessary:
- Installation of the CSI driver for EFS
- Creation of FS in EFS.
- Creation of roles and policies to access EFS for the EKS cluster.
- Port opening from the cluster to EFS.
- Creation of mount targets in the subnets of each node of the EKS cluster so that it can mount the EFS FS.
Documentation: https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html
Once all these tasks were done, several changes were made within the wazuh/wazuh-kubernetes repository, on the 506-deploy-with-manager-and-filebeat-sidecar branch. They include creating a new PV and PVC, and a new Filebeat sidecar, which shares the /var/ossec/logs directory mount with the Wazuh Manager.
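A rough sketch of the new PV/PVC pair backing the shared /var/ossec/logs mount (names, size and file system ID are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: wazuh-manager-logs
spec:
  capacity:
    storage: 10Gi                        # placeholder size (EFS is elastic)
  accessModes:
    - ReadWriteMany                      # EFS allows multiple mount points
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-0123456789abcdef    # placeholder EFS file system ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wazuh-manager-logs
  namespace: wazuh
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  volumeName: wazuh-manager-logs         # bind explicitly to the PV above
  resources:
    requests:
      storage: 10Gi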
With these modifications, the entire Wazuh stack could be started so that the Manager and Filebeat run in separate containers, created within the same pod by the StatefulSet:
$ kubectl get pod -n wazuh
NAME READY STATUS RESTARTS AGE
wazuh-elasticsearch-0 1/1 Running 0 58m
wazuh-elasticsearch-1 1/1 Running 0 58m
wazuh-elasticsearch-2 1/1 Running 0 58m
wazuh-kibana-7c5d6664f8-22xzg 1/1 Running 0 58m
wazuh-manager-master-0 2/2 Running 0 58m
wazuh-manager-worker-0 2/2 Running 0 58m
wazuh-manager-worker-1 2/2 Running 0 58m
$ kubectl get statefulset -n wazuh -o wide
NAME READY AGE CONTAINERS IMAGES
wazuh-elasticsearch 3/3 58m wazuh-elasticsearch amazon/opendistro-for-elasticsearch:1.13.2
wazuh-manager-master 1/1 58m wazuh-manager,wazuh-filebeat merecu/wazuh-manager-odfe:4.2.5,merecu/filebeat-odfe:4.2.5
wazuh-manager-worker 2/2 58m wazuh-manager,wazuh-filebeat merecu/wazuh-manager-odfe:4.2.5,merecu/filebeat-odfe:4.2.5
$
All modifications are within the branch 506-deploy-with-manager-and-filebeat-sidecar.
After all the research, the initial idea should not be pursued: there is no reliable solution to this using Kubernetes. Instead, we could offer a single Wazuh manager image and let users decide what to do with it.