gtk-rs-core
Add closure variant of `SignalBuilder::class_handler()`, use `Value` traits for `accumulator()`
`accumulator()` should probably also be based on `std::ops::ControlFlow` at this point.
This one is not so easy because of the `SignalClassHandlerToken`, which we need to safely allow chaining up. The class handler is `Fn(&SignalClassHandlerToken, &[Value]) -> Option<Value> + Send + Sync + 'static`. If we make this a `GClosure`, then somehow the token has to be passed in there.
@jf2048 Do you maybe have an idea? Or @bilelmoussaoui @GuillaumeGomez? :)
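For context, this is roughly why the token doesn't fit naturally into a `GClosure`: a `Closure` callback only receives the argument `Value`s, so there is no slot for the token. A minimal sketch using `glib::Closure::new` (the exact bounds may differ between gtk-rs-core versions):

```rust
use glib::{Closure, Value};

fn main() {
    // A Closure is invoked with only the signal arguments as Values;
    // a &SignalClassHandlerToken has no obvious place in this signature,
    // so it would have to be smuggled in some other way.
    let closure = Closure::new(|values: &[Value]| -> Option<Value> {
        let _ = values;
        None
    });
    let _ = closure;
}
```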
The accumulator is `Fn(&SignalInvocationHint, &mut Value, &Value) -> bool + Send + Sync + 'static`. This has no `GClosure` API, but that's fine; we can directly make this a `Fn(&SignalInvocationHint, T) -> ControlFlow<Value, T> + Send + Sync + 'static` with `T: FromValue + ToValue`.
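To illustrate the semantics being proposed, here is a small glib-free sketch (all names made up for illustration) of how a `ControlFlow`-based accumulator would replace the `bool` + `&mut Value` contract:

```rust
use std::ops::ControlFlow;

// Continue(acc) keeps emitting with `acc` as the new accumulated value;
// Break(acc) stops emission and uses `acc` as the final return value.
fn run_accumulator<T>(
    handler_returns: impl IntoIterator<Item = T>,
    mut acc: T,
    accumulate: impl Fn(T, T) -> ControlFlow<T, T>,
) -> T {
    for ret in handler_returns {
        match accumulate(acc, ret) {
            ControlFlow::Continue(next) => acc = next,
            ControlFlow::Break(final_value) => return final_value,
        }
    }
    acc
}

fn main() {
    // Example: a "first handler returning true wins" accumulator.
    let result = run_accumulator([false, true, false], false, |_acc, ret| {
        if ret {
            ControlFlow::Break(true)
        } else {
            ControlFlow::Continue(false)
        }
    });
    assert!(result);
}
```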
Sorry, I missed this before. I can't think of a good way to do this without creating a second `Closure` to wrap the one passed in. If we want to avoid that, maybe we have to accept that using a `Closure` for the handler and chaining up is unsafe. The easier way to do this and still get the value unwrapping is to have a real signals macro, which I have implemented and which works the way you would expect. Plus it has the added benefit of not needing to type the argument types twice. The accumulator changes look good to me, although I think that should be `ControlFlow<T, T>`.
BTW, it seems it may be possible to remove `SignalClassHandlerToken` and replace it with a thread local that stores a stack of the current signal emissions, similar to what gobject is actually doing under the hood. Although that would still not get rid of the need to wrap the `Closure`, because it also has to be set up before calling the user's closure (just like the token).
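A rough sketch of what that thread local could look like (all names hypothetical; a drop guard for panic safety is omitted for brevity):

```rust
use std::cell::RefCell;

// Hypothetical per-thread stack of in-progress signal emissions,
// replacing the explicit SignalClassHandlerToken.
struct EmissionState {
    signal_id: u32, // whatever is needed to chain up later
}

thread_local! {
    static SIGNAL_EMISSIONS: RefCell<Vec<EmissionState>> = RefCell::new(Vec::new());
}

// The wrapper Closure would push an entry before calling the user's
// class handler and pop it afterwards; this is why the Closure still
// needs to be wrapped, just like with the token.
fn with_emission<R>(state: EmissionState, f: impl FnOnce() -> R) -> R {
    SIGNAL_EMISSIONS.with(|s| s.borrow_mut().push(state));
    let ret = f();
    SIGNAL_EMISSIONS.with(|s| {
        s.borrow_mut().pop();
    });
    ret
}
```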
Also, the way this is implemented right now makes no sense to me: the top-level class closure (registered with `g_signal_newv`) definitely doesn't need the token, so it can be removed from `class_handler`.
You need that token to pass along when chaining up, as otherwise you could chain up from anywhere, and that's not allowed.
I don't think that is any worse than being allowed to call any of the `parent_*` functions from anywhere, which you can technically do right now. With a thread local tracking it, we can at least still detect when that's called outside of a class handler and panic.
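A self-contained sketch of that runtime check (names hypothetical): a per-thread depth counter is enough to detect "not inside a class handler" and panic, mirroring the compile-time guarantee the token gives.

```rust
use std::cell::Cell;

thread_local! {
    static CLASS_HANDLER_DEPTH: Cell<usize> = Cell::new(0);
}

// The wrapper around the user's class handler bumps the counter.
fn enter_class_handler<R>(f: impl FnOnce() -> R) -> R {
    CLASS_HANDLER_DEPTH.with(|d| d.set(d.get() + 1));
    let ret = f();
    CLASS_HANDLER_DEPTH.with(|d| d.set(d.get() - 1));
    ret
}

// Chaining up would call this first and panic anywhere else.
fn assert_in_class_handler() {
    CLASS_HANDLER_DEPTH.with(|d| {
        assert!(
            d.get() > 0,
            "chaining up is only allowed from within a class handler"
        );
    });
}
```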
Thread local tracking has a runtime performance impact, though.
I don't think it should have any more performance impact than what is happening now; it's essentially just taking the token and putting it in a thread local. It should still be better than the code inside gobject, which always needs to lock the global mutex on all signal emissions :frowning_face:
What I mean, though, is that you never chain up from the class handler, only from an overridden class handler. It's not needed in `Signal::class_handler`, only in `ObjectClassSubclassExt::override_signal_class_handler`. I think it is safe to just make a version of `class_handler` that takes a closure and drop the token from there, if that is good enough.
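Such a tokenless variant could look roughly like this (the builder type is stubbed out here; the method name and bounds are assumptions, not the final API):

```rust
use glib::Value;

struct SignalBuilder; // stand-in for the real builder type

impl SignalBuilder {
    // Closure variant without the token: chaining up stays exclusive to
    // override_signal_class_handler, so plain class handlers don't need it.
    fn class_handler<F>(self, f: F) -> Self
    where
        F: Fn(&[Value]) -> Option<Value> + Send + Sync + 'static,
    {
        // Internally this would wrap `f` into the token-taking closure,
        // e.g. self.class_handler_with_token(move |_token, values| f(values))
        let _ = f;
        self
    }
}
```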
My main issue with getting rid of the token is that you lose some additional compile-time checks that would only turn into runtime panics. If we stick with the token, there are some changes that should probably be made, e.g.:

- The token should probably be pinned and get consumed by calling `signal_chain_from_overridden`; right now you can call that multiple times on the same token, which is bad (see the sketch after this list).
- There should probably be some way to change the arguments when chaining up, because this is desirable in some cases, but that requires type checking the `Value`s at run time to be type safe. Eventually, in a signals macro, I'd like to autogenerate a `chain_*` function for overridden handlers, and that won't need type checking.
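A sketch of the first point: making `signal_chain_from_overridden` consume the token by value turns a double chain-up into a compile error (the type is stubbed out; how pinning would fit in is left out):

```rust
use glib::Value;

// Stand-in for the real token; holds whatever emission state is needed.
pub struct SignalClassHandlerToken(/* emission state */);

impl SignalClassHandlerToken {
    // Taking `self` by value means the token is consumed: calling this
    // twice on the same token no longer compiles.
    pub fn signal_chain_from_overridden(self, values: &[Value]) -> Option<Value> {
        // ... forward to g_signal_chain_from_overridden here ...
        let _ = values;
        None
    }
}
```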
> I don't think it should have any more performance impact than what is happening now
Yeah you're probably right. I'm not opposed to changing this in any case if it solves some actual problem :)
> I think it is safe to just make a version of `class_handler` that takes a closure and drop the token from there, if that is good enough.
I think you're right, that must've been some confusion that caused me to end up with the current API :)