WebSocket PING/PONG being sent out every 5 minutes
Hi @lws-team,
I am working on a client application where I have set the following retry policies using the API:
```c
static const lws_retry_bo_t retry = {
	.secs_since_valid_ping   = 12, /* force PINGs after 12 secs idle */
	.secs_since_valid_hangup = 14, /* hang up after 14 secs idle */
};

pCcInfo->retry_and_idle_policy = &retry;
lws_client_connect_via_info(pCcInfo);
```

Additionally, I am resetting these retry timers every 10 seconds whenever I find a valid probe. However, I still see PING messages being sent by libwebsockets every 5 minutes.
I am using the `lws_validity_confirmed()` API to reset this retry timer.
Could you please suggest anything I might have missed here?
Hi @patrickshuff! I saw this was raised internally, and it looks like you may have received a resolution already, but I wanted to respond here and double-check – just in case anyone else runs into the same issue in the future. 🙂
As far as the thread I'm reading goes, it seems like the metadata was being returned as expected as seen in the payload screenshot provided above, but it just wasn't visible when inspecting the desktop client (as the client doesn't include metadata).
Is that correct? If I've gotten that wrong, please let me know; otherwise, we'll go ahead and close this out as resolved.
I'll try to summarize the underlying issue, some potential options for working around it, and then my opinions on what the Slack engineering team should do:
The Root Problem
By default, with both RTM-based and API-based message retrieval, any metadata `EventPayload` that is sent without the corresponding event metadata structure added to the application manifest is stripped out. However, both endpoints still pass the `EventType` through to the caller.
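To make the failure mode concrete, here is a minimal sketch of the sending side using only the Python standard library (the helper name, token, and channel are illustrative, not from this thread). The `metadata` body follows the documented `chat.postMessage` shape with `event_type`/`event_payload` — it is the `event_payload` half that gets stripped on retrieval when the event type isn't registered in the manifest:

```python
import json
import urllib.request

def build_post_message_request(token, channel, text, event_type, event_payload):
    """Build a chat.postMessage request carrying message metadata.

    If event_type is not registered in the app manifest, the server
    currently echoes event_type back on retrieval but silently strips
    event_payload.
    """
    body = json.dumps({
        "channel": channel,
        "text": text,
        "metadata": {
            "event_type": event_type,
            "event_payload": event_payload,
        },
    }).encode("utf-8")
    return urllib.request.Request(
        "https://slack.com/api/chat.postMessage",
        data=body,
        headers={
            "Content-Type": "application/json; charset=utf-8",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )
```

On the wire, both halves of the metadata are present; the stripping happens server-side, on retrieval.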
Recommended Solution (not verified)
The recommended fix from Slack is to add metadata for the underlying event to the Slack app manifest, as defined here: https://api.slack.com/automation/metadata-events
I have not verified this fix, as I'm unable to do so at this time.
Potential workaround
For the API-based `conversations.history` method, there is a flag called `include_all_metadata` (available thanks to PR-1139) that you can pass to have the server return all of the metadata. I have confirmed this with the Python and Go clients.
API Docs: https://api.slack.com/methods/conversations.history#arg_include_all_metadata
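A minimal sketch of passing that flag with plain `urllib` (the helper name and the placeholder token/channel are mine, not from the SDKs mentioned above):

```python
import urllib.parse
import urllib.request

def build_history_request(token, channel):
    """Build a conversations.history request asking the server to return
    all message metadata, not just manifest-registered event types."""
    params = urllib.parse.urlencode({
        "channel": channel,
        "include_all_metadata": "true",  # the flag discussed above
        "limit": "100",
    })
    return urllib.request.Request(
        f"https://slack.com/api/conversations.history?{params}",
        headers={"Authorization": f"Bearer {token}"},
    )

req = build_history_request("xoxb-placeholder", "C0123456789")
```

The official Python and Go clients expose the same flag as a keyword argument / option on their `conversations.history` wrappers.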
For RTM-based message retrieval, which is what I am using for my use case, you are out of luck: there is no option to pass a flag like this.
My opinions on what Slack should do
This is a super confusing and painful developer experience: passing back the `EventType` but stripping out the `EventPayload`. I spent an inordinate amount of time trying multiple SDKs on both the sending and receiving side before concluding that the server must be stripping out the payload. The behavior I'd rather see, in order of preference:
- Pass the entire `EventType` and `EventPayload` with messages as they're retrieved from both the API and RTM-based endpoints, regardless of a valid definition. The metadata was added to the API call for a reason.
- If you decide that a metadata event definition is required, it would be great to have some way to give the developer feedback that they're trying to send metadata with an `EventType` that is not registered, and to encourage them to register it, instead of silently swallowing / discarding the payload. E.g. on the server side, send back an HTTP 400 with an error message that the `EventType` is not configured.
- If you choose not to do no. 1 or no. 2, at least be consistent and strip both `EventType` and `EventPayload` so no metadata is returned, so your developers aren't spending a ton of time trying to figure out why they're sending the `EventPayload` incorrectly.
Lastly, you should make sure that your HTTP API and RTM-based APIs are equivalent. I was elated to learn there was a workaround with `include_all_metadata`, until I went back to the code and realized we were using the RTM interface to ingest messages to the bot.
Thanks for listening. I'll go ahead and close this out.