
Harden the agent to be reliable against node reboot, crash, shutdown etc.

Open ghost opened this issue 4 years ago • 6 comments

Currently the agent is not reliable in certain situations and sometimes needs to be force updated, or removed and re-deployed: for example, when a node is rebooted, crashes, is drained for maintenance, or is under heavy load. This can also occur when the Docker daemon is restarted.

When the above issues occur, the endpoint may show as down, or you might see errors when browsing different views in Portainer, such as "Failure: could not retrieve images".

The agent should be made more resilient so that it handles these situations gracefully.

Additional info: the symptoms of this are discussed extensively on the Portainer repo, but I have moved the discussion here as a feature request.
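As a partial mitigation (a sketch, not a confirmed fix), a Swarm restart policy on the agent service can at least get agent tasks restarted automatically after crashes or daemon restarts; the keys below are standard compose v3 deploy options:

    deploy:
      mode: global
      restart_policy:
        condition: any   # restart agent tasks regardless of exit status
        delay: 5s        # back off briefly between restart attempts
        window: 30s      # how long to wait before deciding a restart succeeded

This does not address stale state inside a running agent (the node-role and IP issues discussed below), only tasks that have actually exited.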

ghost avatar Apr 20 '20 11:04 ghost

I was able to break the Portainer agent, with the UI showing the swarm in a "down" state, failing to load, and erroring on every page, by quickly draining 2 of 3 nodes in a 3-node, 3-manager swarm. One of the drained nodes was the leader. The problem went away after restarting the agent on the remaining node.

nivekuil avatar Dec 04 '20 07:12 nivekuil

Hi there, it seems to me that demoted managers are still treated as managers, and vice versa. The UI gets errors such as "cannot retrieve tasks, services, etc." in a setup with 1 manager + 1 worker. I suppose the agents behind the scenes are not behaving appropriately.

keywinf avatar Oct 26 '21 09:10 keywinf

Are you running the Agent globally?

    deploy:
      mode: global
      placement:
        constraints: [node.platform.os == linux]

Which is different from Portainer, which will be running on the manager node:

    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]

Unless there's something going wrong with your overlay network...
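For context, the snippets above fit into a service definition roughly like the following (image tag, network name, and volume mounts are assumed from the standard Portainer agent stack; verify against your own deployment):

    version: "3.2"
    services:
      agent:
        image: portainer/agent:latest
        volumes:
          # The agent needs the Docker socket and volume data on every node
          - /var/run/docker.sock:/var/run/docker.sock
          - /var/lib/docker/volumes:/var/lib/docker/volumes
        networks:
          - agent_network
        deploy:
          mode: global
          placement:
            constraints: [node.platform.os == linux]
    networks:
      agent_network:
        driver: overlay
        attachable: true

The overlay network is what the Portainer server uses to reach the agents, so problems there would produce similar symptoms.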

huib-portainer avatar Oct 26 '21 21:10 huib-portainer

Yeah

keywinf avatar Oct 26 '21 22:10 keywinf

And I do not encounter this problem with two manager nodes; it only happens with 1 manager and 1 worker. If I leave auto-refresh on in the UI, it alternates between showing the list and flashing a red error, over and over.

keywinf avatar Oct 26 '21 22:10 keywinf

This continues to be an issue for us with Docker Swarm mode. The agent keeps the old IP when a node is rebooted, whether worker or manager. We run 3 managers and 3 workers. Running docker service update --force portainer_agent fixes it, as long as all nodes then stay up and do not reboot.

The issue is exacerbated by running in a cloud environment (AWS) with ephemeral private IPs. It would likely never surface if the nodes had statically assigned IPs.

yorickdowne avatar Jun 20 '22 14:06 yorickdowne