E.merge is slow
I'm trying to merge a large number of events that are mostly passive (only one or two trigger on each step). The complexity of E.merge makes this prohibitive, since it iterates over the full list to discover which events triggered: it's O(len all) rather than O(len triggered). [And it doesn't remove stopped events.]
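For concreteness, a minimal setup exhibiting that pattern (the sizes and names are illustrative):

open React

(* 10_000 mostly-passive events behind a single E.merge. *)
let events, sends = List.split (List.init 10_000 (fun _ -> E.create ()))
let merged =
  E.trace (fun occs -> Printf.printf "%d triggered\n" (List.length occs))
    (E.merge (fun acc x -> x :: acc) [] events)

(* Sending on a single event still folds over all 10_000 inputs in the step. *)
let () = (List.nth sends 0) 1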
As a user, I can call E.merge manually in a binary fashion to recover O(log (len all) + len triggered); a sketch of this workaround follows the pseudocode below. It's better, but it's not great: it adds a lot of overhead, and it doesn't perform well when a large number of events do trigger at the same time. An alternative would be to provide an unordered merge in React: the result would be accumulated in a reference that is updated by each triggered event. Finally, the "merged" event would trigger and reset the reference for the next update step. In pseudocode:
(* Triggers () whenever any event in [xs] triggers; discards the values. *)
let observe xs = E.merge (fun _ _ -> ()) () xs

(* Folds [f] over this step's occurrences, in no particular order.
   [observe]'s node depends on every mapped event, so it updates after
   all of them in the step; the fmap then reads the accumulated value. *)
let unordered_merge f z xs =
  let ok = ref false in
  let acc = ref z in
  E.fmap
    (fun () ->
       if !ok
       then (let v = !acc in ok := false; acc := z; Some v)
       else None)
    (observe
       (List.map (E.l1 (fun x -> ok := true; acc := f !acc x)) xs))
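For reference, here is what the manual binary merge mentioned above could look like. This is only a sketch: binary_merge is my name for it, and it assumes f is associative with identity z, otherwise the nesting changes the result compared to a flat E.merge.

let rec binary_merge f z = function
| [] -> E.never
| [e] -> e
| es ->
    (* Split the list in two halves and merge each half recursively, so
       an occurrence only walks a log-depth spine of merge nodes. *)
    let rec split ls rs = function
    | [] -> ls, rs
    | [x] -> x :: ls, rs
    | x :: y :: tl -> split (x :: ls) (y :: rs) tl
    in
    let ls, rs = split [] [] es in
    E.merge f z [binary_merge f z ls; binary_merge f z rs]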
The ordered merge could be recovered by accumulating (index, value) pairs, sorting them by index and then reducing:
(* Recovers E.merge's list-order semantics on top of [unordered_merge]
   by tagging each occurrence with its event's index. *)
let merge f z xs =
  E.l1
    (fun occs ->
       let occs = List.sort (fun (i, _) (j, _) -> compare i j) occs in
       List.fold_left (fun z (_, x) -> f z x) z occs)
    (unordered_merge (fun es e -> e :: es) []
       (List.mapi (fun i -> E.l1 (fun x -> (i, x))) xs))
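For illustration, the sketch could be exercised like this, using React's steps to trigger two primitives simultaneously (the event names are mine):

let e0, send0 = E.create ()
let e1, send1 = E.create ()
let e2, send2 = E.create ()

(* Collect each step's occurrences and print them in event-list order. *)
let all = merge (fun acc x -> x :: acc) [] [e0; e1; e2]
let all = E.trace (fun l -> List.iter print_endline (List.rev l)) all

(* Trigger e2 and e0 in the same step: prints "a" then "c". *)
let () =
  let step = Step.create () in
  send2 ~step "c"; send0 ~step "a";
  Step.execute step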
Now, if all the events generally trigger, then the current implementation is of course better, since it doesn't have to sort. In my experience this is a rare use case: in fact, I only care about the unordered merge, since the reducing function is generally agnostic to the list order.
If the manual binary merge version is the preferred solution to this problem, then it would be nice to warn about the complexity of E.merge in the docs!
> An alternative would be to provide an unordered merge in React: the result would be accumulated in a reference that is updated by each triggered event.
Just a note: such a combinator would introduce (len all) + 1 nodes in the dependency graph, so if your number of events is really large, that could be another kind of overhead.
I'm a little bit curious about your use case. Sometimes the solution to these problems is to simply shift more work into the primitives before turning them into events. I don't necessarily see this as a bad thing: I tend to see FRP as a place where you can solve complex logical and temporal problems in a clean semantical framework, and less as a place where you solve processing problems.
It was a continuation of the collection-patching thingy, where type 'a t ~= 'a data * 'a patch event. In this case, 'a t was roughly equivalent to 'a list signal, except that it's more precise about the elements' insertion/removal times. Anyway, I needed to write concat : 'a t t -> 'a t, which requires merging the inner patch events together.
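To make the shape of it concrete, a hypothetical rendering of that type (the patch constructors and the data representation are made up):

type 'a data = 'a list                               (* assumed representation *)
type 'a patch = Insert of int * 'a | Remove of int   (* assumed constructors *)
type 'a t = 'a data * 'a patch React.event

(* concat : 'a t t -> 'a t has to merge the patch events of all the inner
   collections: exactly the large, mostly-passive merge described above. *)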