wazuh-kubernetes
Wazuh pod CrashLoopBackOff v4.3.6
I am using version v4.3.6 and I get the errors below: the wazuh-master and wazuh-worker pods can't run. These are my events. How should I solve this problem?
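The output below was collected with kubectl get pods and kubectl describe. For anyone debugging the same symptom, a minimal sketch of those commands (plus kubectl logs --previous, which usually shows why the last container attempt died) looks like this, assuming the wazuh namespace:

# sketch: collect CrashLoopBackOff diagnostics
kubectl get pods -n wazuh -o wide
kubectl describe pod wazuh-manager-master-0 -n wazuh
kubectl logs wazuh-manager-master-0 -n wazuh --previous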
[master@master wazuh-kubernetes-4.3.6]$ kubectl get pod -n=wazuh -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
wazuh-dashboard-866866f498-4fh2k 1/1 Running 0 4m1s 172.16.189.67 worker2 <none> <none>
wazuh-indexer-0 1/1 Running 0 4m1s 172.16.235.135 worker1 <none> <none>
wazuh-manager-master-0 0/1 CrashLoopBackOff 4 (47s ago) 4m1s 172.16.235.157 worker1 <none> <none>
wazuh-manager-worker-0 0/1 CrashLoopBackOff 4 (<invalid> ago) 4m1s 172.16.189.109 worker2 <none> <none>
Name: wazuh-manager-master-0
Namespace: wazuh
Priority: 0
Node: worker1/192.168.0.101
Start Time: Thu, 04 Aug 2022 14:15:57 +0800
Labels: app=wazuh-manager
controller-revision-hash=wazuh-manager-master-5968bdf64d
node-type=master
statefulset.kubernetes.io/pod-name=wazuh-manager-master-0
Annotations: cni.projectcalico.org/containerID: 200ab1f81e811bc224bdddfa244eceab41ef8815b664397ca834cf0f20f1fea6
cni.projectcalico.org/podIP: 172.16.235.157/32
cni.projectcalico.org/podIPs: 172.16.235.157/32
Status: Running
IP: 172.16.235.157
IPs:
IP: 172.16.235.157
Controlled By: StatefulSet/wazuh-manager-master
Containers:
wazuh-manager:
Container ID: cri-o://1dc3f5e17d15b8291fe2ee0783022208b00701128fb8ec1820628e98274346d6
Image: wazuh/wazuh-manager:4.3.6
Image ID: docker.io/wazuh/wazuh-manager@sha256:9318fcaa843aee593e8b2e7acd70fee1e3e51e5acffc029f1db8015713f9fcdd
Ports: 1515/TCP, 1516/TCP, 55000/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 04 Aug 2022 14:20:40 +0800
Finished: Thu, 04 Aug 2022 14:20:54 +0800
Ready: False
Restart Count: 5
Limits:
cpu: 400m
memory: 512Mi
Requests:
cpu: 400m
memory: 512Mi
Environment:
INDEXER_URL: https://wazuh-indexer-0.wazuh-indexer:9200
INDEXER_USERNAME: <set to the key 'username' in secret 'indexer-cred'> Optional: false
INDEXER_PASSWORD: <set to the key 'password' in secret 'indexer-cred'> Optional: false
FILEBEAT_SSL_VERIFICATION_MODE: full
SSL_CERTIFICATE_AUTHORITIES: /etc/ssl/root-ca.pem
SSL_CERTIFICATE: /etc/ssl/filebeat.pem
SSL_KEY: /etc/ssl/filebeat.key
API_USERNAME: <set to the key 'username' in secret 'wazuh-api-cred'> Optional: false
API_PASSWORD: <set to the key 'password' in secret 'wazuh-api-cred'> Optional: false
WAZUH_CLUSTER_KEY: <set to the key 'key' in secret 'wazuh-cluster-key'> Optional: false
Mounts:
/etc/filebeat from wazuh-manager-master (rw,path="filebeat/etc/filebeat")
/etc/ssl/filebeat.key from filebeat-certs (ro,path="filebeat-key.pem")
/etc/ssl/filebeat.pem from filebeat-certs (ro,path="filebeat.pem")
/etc/ssl/root-ca.pem from filebeat-certs (ro,path="root-ca.pem")
/var/lib/filebeat from wazuh-manager-master (rw,path="filebeat/var/lib/filebeat")
/var/ossec/active-response/bin from wazuh-manager-master (rw,path="wazuh/var/ossec/active-response/bin")
/var/ossec/agentless from wazuh-manager-master (rw,path="wazuh/var/ossec/agentless")
/var/ossec/api/configuration from wazuh-manager-master (rw,path="wazuh/var/ossec/api/configuration")
/var/ossec/etc from wazuh-manager-master (rw,path="wazuh/var/ossec/etc")
/var/ossec/integrations from wazuh-manager-master (rw,path="wazuh/var/ossec/integrations")
/var/ossec/logs from wazuh-manager-master (rw,path="wazuh/var/ossec/logs")
/var/ossec/queue from wazuh-manager-master (rw,path="wazuh/var/ossec/queue")
/var/ossec/var/multigroups from wazuh-manager-master (rw,path="wazuh/var/ossec/var/multigroups")
/var/ossec/wodles from wazuh-manager-master (rw,path="wazuh/var/ossec/wodles")
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wrzcr (ro)
/wazuh-config-mount/etc/authd.pass from wazuh-authd-pass (ro,path="authd.pass")
/wazuh-config-mount/etc/ossec.conf from config (ro,path="master.conf")
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
wazuh-manager-master:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: wazuh-manager-master-wazuh-manager-master-0
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: wazuh-conf-9hf9g2fgk8
Optional: false
filebeat-certs:
Type: Secret (a volume populated by a Secret)
SecretName: indexer-certs-5mhg7mhbfh
Optional: false
wazuh-authd-pass:
Type: Secret (a volume populated by a Secret)
SecretName: wazuh-authd-pass
Optional: false
kube-api-access-wrzcr:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Guaranteed
Node-Selectors: tul=wazuh.master
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 6m58s default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
Normal Scheduled 6m57s default-scheduler Successfully assigned wazuh/wazuh-manager-master-0 to worker1
Normal Pulling 6m54s kubelet Pulling image "wazuh/wazuh-manager:4.3.6"
Normal Pulled 6m34s kubelet Successfully pulled image "wazuh/wazuh-manager:4.3.6" in 20.484118242s
Normal Created 3m58s (x5 over 6m34s) kubelet Created container wazuh-manager
Normal Started 3m58s (x5 over 6m34s) kubelet Started container wazuh-manager
Normal Pulled 3m58s (x4 over 6m14s) kubelet Container image "wazuh/wazuh-manager:4.3.6" already present on machine
Warning BackOff 105s (x15 over 5m53s) kubelet Back-off restarting failed container
Name: wazuh-manager-worker-0
Namespace: wazuh
Priority: 0
Node: worker2/192.168.0.102
Start Time: Thu, 04 Aug 2022 22:02:32 +0800
Labels: app=wazuh-manager
controller-revision-hash=wazuh-manager-worker-597dc7fbc9
node-type=worker
statefulset.kubernetes.io/pod-name=wazuh-manager-worker-0
Annotations: cni.projectcalico.org/containerID: ddd94d4bea70d0f0bebd79f57d94a5b755abd4896dc450693c0e9ff5eda36ecb
cni.projectcalico.org/podIP: 172.16.189.109/32
cni.projectcalico.org/podIPs: 172.16.189.109/32
Status: Running
IP: 172.16.189.109
IPs:
IP: 172.16.189.109
Controlled By: StatefulSet/wazuh-manager-worker
Containers:
wazuh-manager:
Container ID: cri-o://9a3b71a2e22da904ba5752020f18b86516956d39dfec2a8067380affbc025f56
Image: wazuh/wazuh-manager:4.3.6
Image ID: docker.io/wazuh/wazuh-manager@sha256:9318fcaa843aee593e8b2e7acd70fee1e3e51e5acffc029f1db8015713f9fcdd
Ports: 1514/TCP, 1516/TCP
Host Ports: 0/TCP, 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 04 Aug 2022 22:07:03 +0800
Finished: Thu, 04 Aug 2022 22:07:09 +0800
Ready: False
Restart Count: 5
Limits:
cpu: 400m
memory: 512Mi
Requests:
cpu: 400m
memory: 512Mi
Environment:
INDEXER_URL: https://wazuh-indexer-0.wazuh-indexer:9200
INDEXER_USERNAME: <set to the key 'username' in secret 'indexer-cred'> Optional: false
INDEXER_PASSWORD: <set to the key 'password' in secret 'indexer-cred'> Optional: false
FILEBEAT_SSL_VERIFICATION_MODE: full
SSL_CERTIFICATE_AUTHORITIES: /etc/ssl/root-ca.pem
SSL_CERTIFICATE: /etc/ssl/filebeat.pem
SSL_KEY: /etc/ssl/filebeat.key
WAZUH_CLUSTER_KEY: <set to the key 'key' in secret 'wazuh-cluster-key'> Optional: false
Mounts:
/etc/filebeat from wazuh-manager-worker (rw,path="filebeat/etc/filebeat")
/etc/ssl/filebeat.key from filebeat-certs (ro,path="filebeat-key.pem")
/etc/ssl/filebeat.pem from filebeat-certs (ro,path="filebeat.pem")
/etc/ssl/root-ca.pem from filebeat-certs (ro,path="root-ca.pem")
/var/lib/filebeat from wazuh-manager-worker (rw,path="filebeat/var/lib/filebeat")
/var/ossec/active-response/bin from wazuh-manager-worker (rw,path="wazuh/var/ossec/active-response/bin")
/var/ossec/agentless from wazuh-manager-worker (rw,path="wazuh/var/ossec/agentless")
/var/ossec/api/configuration from wazuh-manager-worker (rw,path="wazuh/var/ossec/api/configuration")
/var/ossec/etc from wazuh-manager-worker (rw,path="wazuh/var/ossec/etc")
/var/ossec/integrations from wazuh-manager-worker (rw,path="wazuh/var/ossec/integrations")
/var/ossec/logs from wazuh-manager-worker (rw,path="wazuh/var/ossec/logs")
/var/ossec/queue from wazuh-manager-worker (rw,path="wazuh/var/ossec/queue")
/var/ossec/var/multigroups from wazuh-manager-worker (rw,path="wazuh/var/ossec/var/multigroups")
/var/ossec/wodles from wazuh-manager-worker (rw,path="wazuh/var/ossec/wodles")
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ql6hr (ro)
/wazuh-config-mount/etc/ossec.conf from config (ro,path="worker.conf")
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
wazuh-manager-worker:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: wazuh-manager-worker-wazuh-manager-worker-0
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: wazuh-conf-9hf9g2fgk8
Optional: false
filebeat-certs:
Type: Secret (a volume populated by a Secret)
SecretName: indexer-certs-5mhg7mhbfh
Optional: false
kube-api-access-ql6hr:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Guaranteed
Node-Selectors: tul=wazuh.worker
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 6m58s default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
Normal Scheduled 6m56s default-scheduler Successfully assigned wazuh/wazuh-manager-worker-0 to worker2
Normal Pulling <invalid> kubelet Pulling image "wazuh/wazuh-manager:4.3.6"
Normal Pulled <invalid> kubelet Successfully pulled image "wazuh/wazuh-manager:4.3.6" in 49.898310271s
Normal Created <invalid> (x5 over <invalid>) kubelet Created container wazuh-manager
Normal Started <invalid> (x5 over <invalid>) kubelet Started container wazuh-manager
Normal Pulled <invalid> (x4 over <invalid>) kubelet Container image "wazuh/wazuh-manager:4.3.6" already present on machine
Warning BackOff <invalid> (x16 over <invalid>) kubelet Back-off restarting failed container
@nuu9323226
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 6m58s default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
I think you did not deploy k8s on AWS, so your master/worker pods will not bind to the right PersistentVolume.
You should create your own StorageClass, such as nfs-provisioner.
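A quick way to confirm that the claims are the problem is to check whether the PVCs ever bind and whether a matching StorageClass exists (namespace and claim name taken from the describe output above):

# sketch: check PVC binding and available StorageClasses
kubectl get pvc -n wazuh
kubectl get storageclass
kubectl describe pvc wazuh-manager-master-wazuh-manager-master-0 -n wazuh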
@LubinLew @Wazuh
I have followed the steps exactly as described in the wazuh-kubernetes repository, generated the certificates, and changed the StorageClass to nfs-provisioner, but the Wazuh pods are still in a CrashLoopBackOff state. Before changing the StorageClass to nfs-provisioner I deployed nfs-subdir-external-provisioner in Kubernetes.
@chasegame-alpha
If you want to use nfs-provisioner, you need an NFS server first.
# example on centos7
yum install -y nfs-utils
mkdir -p /opt/k8s
echo "/opt/k8s *(rw,async,insecure,no_subtree_check,no_root_squash)" > /etc/exports
systemctl enable nfs
systemctl start nfs
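To verify the export is actually visible before pointing the provisioner at it (a sketch; the address is the example server used below):

# on the NFS server, or any host with nfs-utils installed
showmount -e 10.3.243.101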
- Configure the StorageClass name to wazuh-storage in class.yaml.
- Configure the NFS server (hostname/path) in deployment.yaml.
env:
  - name: PROVISIONER_NAME
    value: k8s-sigs.io/nfs-subdir-external-provisioner
  - name: NFS_SERVER
    value: 10.3.243.101
  - name: NFS_PATH
    value: /opt/k8s
volumes:
  - name: nfs-client-root
    nfs:
      server: 10.3.243.101
      path: /opt/k8s
@LubinLew I already have an NFS server running. When deploying nfs-subdir-external-provisioner through Helm, I passed the NFS server IP and the export path to be used for dynamic provisioning at install time. Below are the files I changed to use the nfs-client StorageClass.
StorageClass.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME
#parameters:
#  archiveOnDelete: "false"
volumeClaimTemplates (in the manager StatefulSet):
volumeClaimTemplates:
  - metadata:
      name: wazuh-manager-master
      namespace: wazuh
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: nfs-client
      resources:
        requests:
          storage: 500Mi
But the Wazuh pods are still in CrashLoopBackOff with these logs:
Logs of the wazuh-manager-master pod:
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 0-wazuh-init: executing...
/var/ossec/data_tmp/permanent/var/ossec/api/configuration/
The path /var/ossec/api/configuration is already mounted
/var/ossec/data_tmp/permanent/var/ossec/etc/
The path /var/ossec/etc is already mounted
/var/ossec/data_tmp/permanent/var/ossec/logs/
The path /var/ossec/logs is already mounted
/var/ossec/data_tmp/permanent/var/ossec/queue/
The path /var/ossec/queue is already mounted
/var/ossec/data_tmp/permanent/var/ossec/agentless/
The path /var/ossec/agentless is empty, skiped
/var/ossec/data_tmp/permanent/var/ossec/var/multigroups/
The path /var/ossec/var/multigroups is empty, skiped
/var/ossec/data_tmp/permanent/var/ossec/integrations/
The path /var/ossec/integrations is empty, skiped
/var/ossec/data_tmp/permanent/var/ossec/active-response/bin/
The path /var/ossec/active-response/bin is empty, skiped
/var/ossec/data_tmp/permanent/var/ossec/wodles/
The path /var/ossec/wodles is already mounted
/var/ossec/data_tmp/permanent/etc/filebeat/
The path /etc/filebeat is already mounted
Updating /var/ossec/etc/internal_options.conf
Error executing command: 'cp -p /var/ossec/data_tmp/exclusion//var/ossec/etc/internal_options.conf /var/ossec/etc/internal_options.conf'.
Exiting.
[cont-init.d] 0-wazuh-init: exited 1.
[cont-init.d] 1-config-filebeat: executing...
Customize Elasticsearch ouput IP
Configuring username.
Configuring password.
Configuring SSL verification mode.
Configuring Certificate Authorities.
Configuring SSL Certificate.
Configuring SSL Key.
chown: changing ownership of '/etc/filebeat/filebeat.yml': Operation not permitted
[cont-init.d] 1-config-filebeat: exited 0.
[cont-init.d] 2-manager: executing...
Traceback (most recent call last):
File "/var/ossec/framework/scripts/create_user.py", line 72, in
These are the logs of the wazuh-manager-worker pod:
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 0-wazuh-init: executing...
/var/ossec/data_tmp/permanent/var/ossec/api/configuration/
The path /var/ossec/api/configuration is already mounted
/var/ossec/data_tmp/permanent/var/ossec/etc/
The path /var/ossec/etc is already mounted
/var/ossec/data_tmp/permanent/var/ossec/logs/
The path /var/ossec/logs is already mounted
/var/ossec/data_tmp/permanent/var/ossec/queue/
The path /var/ossec/queue is already mounted
/var/ossec/data_tmp/permanent/var/ossec/agentless/
The path /var/ossec/agentless is empty, skiped
/var/ossec/data_tmp/permanent/var/ossec/var/multigroups/
The path /var/ossec/var/multigroups is empty, skiped
/var/ossec/data_tmp/permanent/var/ossec/integrations/
The path /var/ossec/integrations is empty, skiped
/var/ossec/data_tmp/permanent/var/ossec/active-response/bin/
The path /var/ossec/active-response/bin is empty, skiped
/var/ossec/data_tmp/permanent/var/ossec/wodles/
The path /var/ossec/wodles is already mounted
/var/ossec/data_tmp/permanent/etc/filebeat/
The path /etc/filebeat is already mounted
Updating /var/ossec/etc/internal_options.conf
Error executing command: 'cp -p /var/ossec/data_tmp/exclusion//var/ossec/etc/internal_options.conf /var/ossec/etc/internal_options.conf'.
Exiting.
[cont-init.d] 0-wazuh-init: exited 1.
[cont-init.d] 1-config-filebeat: executing...
Customize Elasticsearch ouput IP
Configuring username.
Configuring password.
Configuring SSL verification mode.
Configuring Certificate Authorities.
Configuring SSL Certificate.
Configuring SSL Key.
chown: changing ownership of '/etc/filebeat/filebeat.yml': Operation not permitted
[cont-init.d] 1-config-filebeat: exited 0.
[cont-init.d] 2-manager: executing...
2023/02/13 11:14:20 wazuh-analysisd: ERROR: Could not change the group to 'wazuh': 1
2023/02/13 11:14:20 wazuh-analysisd: CRITICAL: (1202): Configuration error at 'etc/ossec.conf'.
wazuh-analysisd: Configuration error. Exiting
[cont-init.d] 2-manager: exited 1.
[cont-init.d] done.
[services.d] starting services
2023/02/13 10:42:25 wazuh-integratord: ERROR: (1103): Could not open file 'etc/internal_options.conf' due to [(2)-(No such file or directory)].
2023/02/13 10:42:25 wazuh-integratord: CRITICAL: (2301): Definition not found for: 'integrator.debug'.
2023/02/13 10:44:03 wazuh-analysisd: ERROR: Could not change the group to 'wazuh': 1
2023/02/13 10:44:03 wazuh-analysisd: CRITICAL: (1202): Configuration error at 'etc/ossec.conf'.
2023/02/13 10:46:28 wazuh-analysisd: ERROR: Could not change the group to 'wazuh': 1
2023/02/13 10:46:28 wazuh-analysisd: CRITICAL: (1202): Configuration error at 'etc/ossec.conf'.
2023/02/13 11:14:04 wazuh-analysisd: ERROR: Could not change the group to 'wazuh': 1
2023/02/13 11:14:04 wazuh-analysisd: CRITICAL: (1202): Configuration error at 'etc/ossec.conf'.
2023/02/13 11:14:20 wazuh-analysisd: ERROR: Could not change the group to 'wazuh': 1
2023/02/13 11:14:20 wazuh-analysisd: CRITICAL: (1202): Configuration error at 'etc/ossec.conf'.
[services.d] done.
starting Filebeat
Exiting: error loading config file: config file ("/etc/filebeat/filebeat.yml") must be owned by the user identifier (uid=0) or root
Filebeat exited. code=1
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] waiting for services.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.
Any help regarding this? Thanks in advance.
@chasegame-alpha
All dirs are mounted, so the StorageClass works. Maybe it is just a permissions issue? (See the export check sketched below.)
Everything I did was replace wazuh-storage with nfs-provisioner.
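If the chown/ownership errors above come from the NFS side, the usual culprit is root squashing on the export. A sketch of the check, mirroring the export options from my earlier example (/opt/k8s is the example path):

# on the NFS server
cat /etc/exports
# the export used by the provisioner should include no_root_squash, e.g.:
# /opt/k8s *(rw,async,insecure,no_subtree_check,no_root_squash)
exportfs -ra    # re-export after editing /etc/exports
exportfs -v     # confirm the active options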
$ kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default nfs-client-provisioner-745cc5684-jwssj 1/1 Running 0 51m
kube-system coredns-5bbd96d687-cwjpf 1/1 Running 0 51m
kube-system coredns-5bbd96d687-dtn48 1/1 Running 0 51m
kube-system etcd-k8s 1/1 Running 7 52m
kube-system kube-apiserver-k8s 1/1 Running 1 52m
kube-system kube-controller-manager-k8s 1/1 Running 1 52m
kube-system kube-proxy-q768x 1/1 Running 0 51m
kube-system kube-scheduler-k8s 1/1 Running 7 52m
wazuh wazuh-dashboard-6755c6b9f8-85pj4 1/1 Running 0 50m
wazuh wazuh-indexer-0 1/1 Running 0 50m
wazuh wazuh-indexer-1 1/1 Running 0 36m
wazuh wazuh-indexer-2 1/1 Running 0 35m
wazuh wazuh-manager-master-0 1/1 Running 0 50m
wazuh wazuh-manager-worker-0 1/1 Running 0 50m
wazuh wazuh-manager-worker-1 1/1 Running 0 50m
$ kubectl logs -f wazuh-manager-master-0 -n wazuh
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 0-wazuh-init: executing...
/var/ossec/data_tmp/permanent/var/ossec/api/configuration/
Installing /var/ossec/api/configuration
/var/ossec/data_tmp/permanent/var/ossec/etc/
Installing /var/ossec/etc
/var/ossec/data_tmp/permanent/var/ossec/logs/
Installing /var/ossec/logs
/var/ossec/data_tmp/permanent/var/ossec/queue/
Installing /var/ossec/queue
/var/ossec/data_tmp/permanent/var/ossec/agentless/
The path /var/ossec/agentless is empty, skiped
/var/ossec/data_tmp/permanent/var/ossec/var/multigroups/
The path /var/ossec/var/multigroups is empty, skiped
/var/ossec/data_tmp/permanent/var/ossec/integrations/
The path /var/ossec/integrations is empty, skiped
/var/ossec/data_tmp/permanent/var/ossec/active-response/bin/
The path /var/ossec/active-response/bin is empty, skiped
/var/ossec/data_tmp/permanent/var/ossec/wodles/
Installing /var/ossec/wodles
/var/ossec/data_tmp/permanent/etc/filebeat/
Installing /etc/filebeat
Updating /var/ossec/etc/internal_options.conf
Updating /var/ossec/integrations/pagerduty
Updating /var/ossec/integrations/slack
Updating /var/ossec/integrations/slack.py
Updating /var/ossec/integrations/virustotal
Updating /var/ossec/integrations/virustotal.py
Updating /var/ossec/active-response/bin/default-firewall-drop
Updating /var/ossec/active-response/bin/disable-account
Updating /var/ossec/active-response/bin/firewalld-drop
Updating /var/ossec/active-response/bin/firewall-drop
Updating /var/ossec/active-response/bin/host-deny
Updating /var/ossec/active-response/bin/ip-customblock
Updating /var/ossec/active-response/bin/ipfw
Updating /var/ossec/active-response/bin/kaspersky.py
Updating /var/ossec/active-response/bin/kaspersky
Updating /var/ossec/active-response/bin/npf
Updating /var/ossec/active-response/bin/wazuh-slack
Updating /var/ossec/active-response/bin/pf
Updating /var/ossec/active-response/bin/restart-wazuh
Updating /var/ossec/active-response/bin/restart.sh
Updating /var/ossec/active-response/bin/route-null
Updating /var/ossec/agentless/sshlogin.exp
Updating /var/ossec/agentless/ssh_pixconfig_diff
Updating /var/ossec/agentless/ssh_asa-fwsmconfig_diff
Updating /var/ossec/agentless/ssh_integrity_check_bsd
Updating /var/ossec/agentless/main.exp
Updating /var/ossec/agentless/su.exp
Updating /var/ossec/agentless/ssh_integrity_check_linux
Updating /var/ossec/agentless/register_host.sh
Updating /var/ossec/agentless/ssh_generic_diff
Updating /var/ossec/agentless/ssh_foundry_diff
Updating /var/ossec/agentless/ssh_nopass.exp
Updating /var/ossec/agentless/ssh.exp
Updating /var/ossec/wodles/utils.py
Updating /var/ossec/wodles/aws/aws-s3
Updating /var/ossec/wodles/aws/aws-s3.py
Updating /var/ossec/wodles/azure/azure-logs
Updating /var/ossec/wodles/azure/azure-logs.py
Updating /var/ossec/wodles/docker/DockerListener
Updating /var/ossec/wodles/docker/DockerListener.py
Updating /var/ossec/wodles/gcloud/gcloud
Updating /var/ossec/wodles/gcloud/gcloud.py
Updating /var/ossec/wodles/gcloud/integration.py
Updating /var/ossec/wodles/gcloud/tools.py
find: '/proc/336/task/336/fd/5': No such file or directory
find: '/proc/336/task/336/fdinfo/5': No such file or directory
find: '/proc/336/fd/6': No such file or directory
find: '/proc/336/fdinfo/6': No such file or directory
find: '/proc/337/task/337/fd/5': No such file or directory
find: '/proc/337/task/337/fdinfo/5': No such file or directory
find: '/proc/337/fd/6': No such file or directory
find: '/proc/337/fdinfo/6': No such file or directory
Identified Wazuh configuration files to mount...
'/wazuh-config-mount/etc/ossec.conf' -> '/var/ossec/etc/ossec.conf'
'/wazuh-config-mount/etc/authd.pass' -> '/var/ossec/etc/authd.pass'
[cont-init.d] 0-wazuh-init: exited 0.
[cont-init.d] 1-config-filebeat: executing...
Customize Elasticsearch ouput IP
Configuring username.
Configuring password.
Configuring SSL verification mode.
Configuring Certificate Authorities.
Configuring SSL Certificate.
Configuring SSL Key.
[cont-init.d] 1-config-filebeat: exited 0.
[cont-init.d] 2-manager: executing...
2023/02/16 06:58:56 wazuh-modulesd: WARNING: The <ignore_time> tag at module 'vulnerability-detector' is deprecated for version newer than 4.3.
Starting Wazuh v4.3.10...
Started wazuh-apid...
Started wazuh-csyslogd...
Started wazuh-dbd...
2023/02/16 06:59:13 wazuh-integratord: INFO: Remote integrations not configured. Clean exit.
Started wazuh-integratord...
Started wazuh-agentlessd...
Started wazuh-authd...
Started wazuh-db...
Started wazuh-execd...
Started wazuh-analysisd...
2023/02/16 06:59:18 wazuh-syscheckd: WARNING: The check_unixaudit option is deprecated in favor of the SCA module.
Started wazuh-syscheckd...
Started wazuh-remoted...
Started wazuh-logcollector...
Started wazuh-monitord...
2023/02/16 06:59:22 wazuh-modulesd: WARNING: The <ignore_time> tag at module 'vulnerability-detector' is deprecated for version newer than 4.3.
Started wazuh-modulesd...
Started wazuh-clusterd...
Completed.
[cont-init.d] 2-manager: exited 0.
[cont-init.d] done.
[services.d] starting services
2023/02/16 06:59:23 wazuh-modulesd:database: INFO: Module started.
2023/02/16 06:59:23 wazuh-modulesd:control: INFO: Starting control thread.
2023/02/16 06:59:23 wazuh-modulesd:task-manager: INFO: (8200): Module Task Manager started.
2023/02/16 06:59:24 wazuh-remoted: INFO: (1410): Reading authentication keys file.
2023/02/16 06:59:25 wazuh-modulesd:syscollector: INFO: Module started.
2023/02/16 06:59:26 wazuh-modulesd:syscollector: INFO: Starting evaluation.
2023/02/16 06:59:28 wazuh-syscheckd: INFO: (6009): File integrity monitoring scan ended.
2023/02/16 06:59:28 wazuh-analysisd: INFO: Total rules enabled: '6327'
2023/02/16 06:59:28 wazuh-analysisd: INFO: The option <queue_size> is deprecated and won't apply. Set up each queue size in the internal_options file.
2023/02/16 06:59:29 wazuh-analysisd: INFO: Started (pid: 524).
starting Filebeat
[services.d] done.
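Once everything is Running, a rough way to double-check that the manager cluster actually formed (assuming cluster_control is available in the manager image, as in the official one) is:

# sketch: list the Wazuh cluster nodes from inside the master pod
kubectl exec -n wazuh wazuh-manager-master-0 -- /var/ossec/bin/cluster_control -l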