Bug Report - Lack of Graceful Shutdown in Message Updates Handling
The current implementation has no graceful-shutdown path for message updates: the offset of messages processed during shutdown is never stored.
If cancellation is requested, the pending message updates are still processed, but messageOffset does not advance, because the offset is only committed by sending the next GetUpdatesRequest. Consequently, after a shutdown, all messages from the last batch are reprocessed!
Steps to reproduce
- Configure graceful shutdown: Ensure the application doesn't close for 10 seconds after a termination request (triggered by pressing CTRL+C).
- Send some messages to the bot instance.
- Terminate the app (by pressing CTRL+C) while it is processing a batch of messages.
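For the first step, in a .NET generic-host application the 10-second grace period can be configured through HostOptions.ShutdownTimeout (a sketch; the actual app may wire this up differently):

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var builder = Host.CreateApplicationBuilder(args);

// Give hosted services up to 10 seconds to finish after CTRL+C
// before the host forcibly stops them.
builder.Services.Configure<HostOptions>(options =>
    options.ShutdownTimeout = TimeSpan.FromSeconds(10));

var host = builder.Build();
await host.RunAsync();
```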
Expected behavior
After the app termination request, the offset of the processed messages is advanced, so that these messages are not processed again after the app restarts.
Actual behavior
After the app termination request, all messages in the last batch are handled, but the offset is not advanced for them. As a result, upon restarting the application, this batch is processed again!
Environment
NuGet Package Version: 19.0.0
I wouldn't call this a bug; it's a specific behaviour that some people want. I could implement an additional option for it, but it won't be 100% reliable and has its own can of worms. For example, how do you think we should handle cancellation of the final commit, given that the original token is already cancelled?
I see three options for committing the last offset:
- Pass two cancellation tokens: one to cancel the core loop and another to cancel the offset-commit operation
- Use a timeout for the commit operation; but what if I want to cancel it manually rather than wait out a timeout I have no API to control?
- Add a method to every poller interface that commits the latest offset from the previously saved value
To me the first two are horrible since they introduce non-obvious lifetimes and cancellation mechanisms; only the third seems sane. In the end, there is no easy solution that is ideal and satisfies everyone.
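The third option could look roughly like this; the interface name and the method CommitOffsetAsync are hypothetical, purely to illustrate where such an API would live:

```csharp
// Hypothetical extension of a poller interface; NOT actual Telegram.Bot API.
public interface IUpdateReceiver
{
    // The existing receive loop, cancelled by the host's shutdown token.
    Task ReceiveAsync(IUpdateHandler updateHandler, CancellationToken cancellationToken);

    // Commits the last offset saved during the receive loop.
    // Takes its own token so it can run after the loop's token is cancelled.
    Task CommitOffsetAsync(CancellationToken cancellationToken = default);
}
```

The caller would invoke CommitOffsetAsync from its shutdown path with a fresh (or timed) token, keeping the commit's lifetime independent of the receive loop's cancellation.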
@tuscen, I suggest the following solution:
First, we could check the cancellation token on each iteration of the foreach loop and, if cancellation is requested, break out of the loop so that the remaining updates are not handled.
Second, upon cancellation, send a GetUpdatesRequest with the offset of the last successfully handled update, so that the offset is committed even during a graceful shutdown.
Suggested changes in code:
foreach (var update in updates)
{
    try
    {
        await updateHandler.HandleUpdateAsync(
            botClient: _botClient,
            update: update,
            cancellationToken: cancellationToken
        ).ConfigureAwait(false);
    }
    catch (OperationCanceledException)
    {
        // ignored
    }

    if (cancellationToken.IsCancellationRequested)
    {
        // Commit the last successful offset.
        var request = new GetUpdatesRequest
        {
            Limit = 1,
            Offset = messageOffset,
        };
        await _botClient.MakeRequestAsync(request);

        // Don't handle the rest of the updates.
        break;
    }

    messageOffset = update.Id + 1;
}
@Palindromer That's what I wrote earlier - your suggested code does not handle cancellation at all
@tuscen Could you please explain in more detail why you think cancellation is not adequately handled by the suggested code?
See also #1108
@Palindromer If I understand your bug report, what you're asking can be easily achieved this way:
- your app saves the last update.Id it has correctly handled somewhere
- your app proceeds to a graceful shutdown
- next time your app is launched, you call StartReceiving/ReceiveAsync giving a ReceiverOptions argument with Offset = lastUpdateId + 1, so you skip already processed updates.
Telegram's way of committing the "read" update is certainly weird, but it's the way it is. Our polling methods already have the necessary parameters to do what you're asking.
Moreover, sending an extra HTTP GetUpdatesRequest after cancellation has been requested seems like weird behaviour.
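The suggested workaround, sketched with Telegram.Bot's polling API (exact handler parameter names may differ by version; LoadLastUpdateId and the handler methods are hypothetical app code, and persisting the last handled update.Id on shutdown is left to the app):

```csharp
using Telegram.Bot;
using Telegram.Bot.Polling;

var botClient = new TelegramBotClient("<token>");
using var cts = new CancellationTokenSource();

// Hypothetical helper: reads the last update.Id persisted before the
// previous shutdown, or null on a fresh start.
int? lastUpdateId = LoadLastUpdateId();

var receiverOptions = new ReceiverOptions
{
    // Skip updates that were already handled before the previous shutdown.
    Offset = lastUpdateId is int id ? id + 1 : null,
};

botClient.StartReceiving(
    updateHandler: HandleUpdateAsync,   // hypothetical app handlers
    pollingErrorHandler: HandleErrorAsync,
    receiverOptions: receiverOptions,
    cancellationToken: cts.Token);
```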
Feel free to comment/reopen if you think my answers did not address your issue.