
[Bug] An agent deployed in another cluster does not connect to Wazuh

Open kinseii opened this issue 1 month ago • 3 comments

The Wazuh Manager LoadBalancer service is not working correctly: a Wazuh Agent in another cluster loses its connection after registration. If the connection is routed to the worker nodes, everything works fine; if it is routed to the master node, the agent disconnects.

If the Master and Workers are separated onto different LoadBalancers, everything works fine:

wazuh:
  loadBalancer:
    enabled: false

  master:
    service:
      type: LoadBalancer
      ports:
        - name: registration
          port: 1515
          targetPort: 1515
        - name: api
          port: 55000
          targetPort: 55000
    networkPolicy:
      enabled: true
      extraIngresses:
        - ports:
            - port: 1515
              protocol: TCP
            - port: 55000
              protocol: TCP
          from: []

  worker:
    service:
      type: LoadBalancer
      ports:
        - name: events
          port: 1514
          targetPort: 1514

The manager needs to be configured so that it can handle requests from both services.
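
For reference, those values should render two separate LoadBalancer Services roughly like the sketch below. This is a minimal sketch only: the names (wazuh-master, wazuh-worker) and the selector labels (app: wazuh-manager, node-type) are placeholders, and the chart's actual templates determine the real ones.

apiVersion: v1
kind: Service
metadata:
  name: wazuh-master           # placeholder; the chart templates set the real name
spec:
  type: LoadBalancer
  selector:
    app: wazuh-manager         # placeholder labels; check the chart's rendered selectors
    node-type: master
  ports:
    - name: registration
      port: 1515
      targetPort: 1515
    - name: api
      port: 55000
      targetPort: 55000
---
apiVersion: v1
kind: Service
metadata:
  name: wazuh-worker           # placeholder; the chart templates set the real name
spec:
  type: LoadBalancer
  selector:
    app: wazuh-manager
    node-type: worker
  ports:
    - name: events
      port: 1514
      targetPort: 1514

With two Services, each published port can only ever land on the pod role that actually serves it.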

kinseii avatar Nov 24 '25 13:11 kinseii

You can do that with Network Policies. Example attached:

pvy-security-wazuh-manager-master.yaml
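
The attached file isn't reproduced here; a NetworkPolicy along the lines of the sketch below would open the master's registration and API ports to any source. The name and pod selector are placeholders, so the actual attachment may differ:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: wazuh-manager-master   # placeholder name
spec:
  podSelector:
    matchLabels:
      app: wazuh-manager       # placeholder; match the labels the chart puts on master pods
      node-type: master
  policyTypes:
    - Ingress
  ingress:
    - from: []                 # an empty/omitted "from" matches traffic from any source
      ports:
        - port: 1515
          protocol: TCP
        - port: 55000
          protocol: TCP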

pvyswiss avatar Nov 28 '25 07:11 pvyswiss

But honestly, I still have the same issue, but it is working with Newt Proxy.

pvyswiss avatar Nov 28 '25 07:11 pvyswiss

> But honestly, I still have the same issue, but it is working with Newt Proxy.

I apologize, I don't understand. So, if both the master and the worker go through a single LoadBalancer for the manager, does everything work with your Network Policy configuration?

The thing is, I checked the Network Policy in this configuration:

wazuh:
  loadBalancer:
    enabled: true

  master:
    service:
      type: ClusterIP
      ports:
        - name: registration
          port: 1515
          targetPort: 1515
        - name: api
          port: 55000
          targetPort: 55000
    networkPolicy:
      enabled: true
      extraIngresses:
        - ports:
            - port: 1515
              protocol: TCP
            - port: 55000
              protocol: TCP
          from: []

  worker:
    service:
      type: ClusterIP
      ports:
        - name: events
          port: 1514
          targetPort: 1514

Through the manager's LoadBalancer, all ports are reachable from the agent on the other cluster; I checked with netcat. But the agents do not connect. As I understand it, requests that should only be accepted by the worker sometimes go to the master, which prevents the agent from connecting.
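
That behaviour would be consistent with a single manager LoadBalancer whose selector matches both pod roles. A sketch of such a Service (placeholder name and label, not the chart's actual template) illustrates the problem: Kubernetes balances every listed port across all selected pods per connection, so an events connection on 1514 can be handed to the master:

apiVersion: v1
kind: Service
metadata:
  name: wazuh-manager          # placeholder for the chart's combined LoadBalancer
spec:
  type: LoadBalancer
  selector:
    app: wazuh-manager         # placeholder; matches master AND worker pods alike
  ports:
    - name: registration
      port: 1515
      targetPort: 1515
    - name: events
      port: 1514
      targetPort: 1514
    - name: api
      port: 55000
      targetPort: 55000

Nothing in the Service steers a given port toward the pod role that should serve it, which matches the split-LoadBalancer workaround below.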

If I create separate LoadBalancers for the master and the worker, disable the manager's LoadBalancer (as I wrote in my first message), and specify different IPs/hosts for the master and the worker in the agent, then everything works: the agents connect and transmit data.

kinseii avatar Dec 01 '25 06:12 kinseii