
OpAMP Agent heartbeats

Open jaronoff97 opened this issue 1 year ago • 5 comments

Currently, if an agent is functioning successfully and without any changes to its health or component status, no messages are sent after the initial handshake. For WebSockets, because no messages flow from the agent to the server, most of these connections will eventually idle out and be shut down, leaving a broken socket. I propose that we add support for a heartbeat interval as part of the specification. I have already implemented this capability in the opamp-bridge, where a user can configure a heartbeat interval that periodically sets the agent's health message. HTTP theoretically avoids this problem via its polling interval.

This heartbeat implementation is important for the bridge, where many Kubernetes events happen asynchronously. The bridge is not informed of them directly and therefore must poll state to send this message. The collector's OpAMP extension, however, watches for some changes but is currently not busy. Without a heartbeat, the extension's connection will idle after some time and prevent the server from being able to send any more messages to the extension.

I think a heartbeat interval could be optional; however, it must be communicated as part of the initial AgentToServer message. This would allow the server to know when to mark the agent as unhealthy.

Open Questions

  • Should this functionality only exist for the socket transport?
    • I believe it would be valuable for this to exist regardless of the transport and that the HTTP poll interval could be deprecated in favor of this. This would allow the server to make decisions about the liveness of the agent regardless of the transport
  • What should the default interval be? 30s?
  • Should the heartbeat interval be negotiable from the server?

I'm happy to write the spec change for this issue, but would love everyone's thoughts here.

jaronoff97 avatar Apr 05 '24 18:04 jaronoff97

I think it is reasonable for the agent to actively report a healthy heartbeat; this helps the OpAMP server keep its information up to date.

But there's a question: what's the difference from https://github.com/open-telemetry/opamp-spec/issues/28?

JaredTan95 avatar Apr 08 '24 05:04 JaredTan95

I think the main difference is that my proposal relates to heartbeats as a means of solving #28, allowing clients to automatically report health on an interval. A goal of this is definitely to keep the connection alive, but it also means that the client has a responsibility to report its health on an interval. I can understand closing this issue in favor of that one, though...

jaronoff97 avatar Apr 08 '24 16:04 jaronoff97

Want to give a +1 to this. We recently encountered an issue with a misbehaving proxy that kept the OpAMP connection alive even though the agent pod no longer existed, and I believe that could have been detected with a heartbeat mechanism like this.

I do like the idea of having it be independent of transport, since it seems to be very similar conceptually to the poll interval.

There's also this PR that's been open for a while proposing a heartbeat mechanism, which seems similar to what's proposed here: https://github.com/open-telemetry/opamp-spec/pull/176

BinaryFissionGames avatar Apr 09 '24 12:04 BinaryFissionGames

A crashed or misbehaving client may cause connection/goroutine leaks in the OpAMP server (open-telemetry/opamp-go#271). A heartbeat mechanism can help the server detect unresponsive peers and also defend against intermediaries (LBs, proxies, network equipment) that may time out and terminate idle connections (#28).

haoqixu avatar Apr 18 '24 06:04 haoqixu

I also wanted to give a big +1 to this. The regular status reports from the opamp-bridge have been really useful in

  • providing a regular update on status
  • acting as a heartbeat
  • making sure the opamp agent checks in (especially if it's connecting over HTTP) so we can send any ServerToAgent message it needs

I agree with @BinaryFissionGames that for the sake of the OpAMP protocol, this behavior should be transport agnostic (i.e. should work for both HTTP and Websockets given both are valid transport protocols for OpAMP).

gdfast avatar May 07 '24 18:05 gdfast

Just chiming in that this does sound like a worthwhile addition and that it should be the same for both transports.

My 2c would be that it is indeed useful for clients to either respect any interval negotiated from the server (or e.g. a Retry-After header), or at least set a reasonable maximum polling frequency to avoid overloading the server by accident.

tpaschalis avatar Jul 03 '24 12:07 tpaschalis

@tpaschalis I have the PR up here if you can take a look!

jaronoff97 avatar Jul 03 '24 14:07 jaronoff97