Destroying an RpcClient with outstanding streaming calls triggers an "eventLoop != nullptr" assertion
This issue was introduced in "Fix premature cancellation of streaming calls when capability dropped." (db470e67d284a1f345d2997b7dcd12e43f6c87d8)
I have a server that can stream data to a client. When I destroy the server, I see the following happening:
- `AsyncIoContext` is destroyed, which leads to...
- `LowLevelAsyncIoProviderImpl` being destroyed.
- `LowLevelAsyncIoProviderImpl` owns an `EventLoop` and a `WaitScope`. The `WaitScope` is destroyed first, which resets the thread-local event loop.
- After the `WaitScope` destructor, the `EventLoop` destructor is called. The `EventLoop` destructor processes the `daemons`.
- Through a path that I could not reconstruct, one of the daemons calls the `RpcClient` destructor. My application never uses `detach`, so I'm not sure how this happens. Perhaps it goes through the `detach` call at `capability.c++:852`, or the one at `rpc.c++:2465`.
- The `RpcClient` destructor creates a `Task` in a `TaskSet`, which is only possible when a current event loop exists. No such loop exists anymore, because the `WaitScope` was already destroyed (see the sketch after this list).
- capnp crashes, complaining that there is no event loop.
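To make the failing step concrete, here is a minimal sketch of the underlying constraint, not the actual RPC teardown path (the `LoggingErrorHandler` class and the dummy tasks are made up for illustration): adding a task to a `kj::TaskSet` requires a current event loop on the thread, which is no longer the case once the `WaitScope` destructor has run.

```c++
#include <kj/async.h>
#include <kj/debug.h>

// Illustrative error handler, just to satisfy the TaskSet constructor.
class LoggingErrorHandler final: public kj::TaskSet::ErrorHandler {
public:
  void taskFailed(kj::Exception&& exception) override {
    KJ_LOG(ERROR, exception);
  }
};

int main() {
  LoggingErrorHandler errorHandler;
  kj::TaskSet tasks(errorHandler);

  {
    kj::EventLoop loop;
    kj::WaitScope waitScope(loop);
    // While the WaitScope exists, `loop` is the thread-local event loop,
    // so adding tasks works.
    tasks.add(kj::Promise<void>(kj::READY_NOW));
    waitScope.poll();  // let the ready task complete and be removed
  }

  // The WaitScope (and EventLoop) are gone, so there is no current event
  // loop on this thread. Creating another Task now fails in the same way
  // the RpcClient destructor does when it runs from a daemon during
  // ~EventLoop.
  tasks.add(kj::Promise<void>(kj::READY_NOW));  // asserts: no event loop
  return 0;
}
```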
The problem can be circumvented by calling `cancelAllDetached()` before the `WaitScope` is destroyed.
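For reference, a minimal sketch of where that workaround call goes, assuming the usual `kj::setupAsyncIo()` setup (the RPC and streaming setup is elided; only the placement of `cancelAllDetached()` relative to the `AsyncIoContext` destruction matters):

```c++
#include <kj/async-io.h>

int main() {
  auto io = kj::setupAsyncIo();  // sets up the EventLoop and WaitScope for this thread

  // ... set up the RPC system and issue streaming calls (elided) ...

  // Workaround: cancel anything that was detached onto the event loop while
  // the WaitScope is still alive, so nothing needs to create a Task later
  // during ~EventLoop.
  io.waitScope.cancelAllDetached();

  return 0;
  // `io` is destroyed here: the WaitScope first, then the EventLoop, which is
  // the ordering that otherwise triggers the "eventLoop != nullptr" assertion.
}
```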
I guess it might make sense to add a call to `cancelAllDetached()` to the destructor of `LowLevelAsyncIoProviderImpl`?