web-audio-api-rs
Free audio processor resources when the node has stopped/ended and cannot be started again
Currently, we drop the AudioProcessor when both:
- the control thread has dropped the corresponding AudioNode
- the processor has no inputs and tail-time = false
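For reference, the check boils down to something like this (a rough sketch with made-up names, not the actual fields of the render graph):

```rust
// Sketch only: all names here are made up for illustration.
struct ProcessorSlot {
    node_dropped_by_control_thread: bool, // control thread dropped its AudioNode handle
    has_inputs: bool,                     // still receives audio from upstream nodes
    tail_time: bool,                      // processor reports it may still produce output
}

impl ProcessorSlot {
    /// Current rule: the processor is freed only when all three conditions hold.
    fn can_be_dropped(&self) -> bool {
        self.node_dropped_by_control_thread && !self.has_inputs && !self.tail_time
    }
}
```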
But we can release the resources earlier, e.g. when
- an AudioBufferSourceNode has ended and does not loop
- a ConstantSourceNode has stopped
Take care: since the control thread still holds the AudioNode, connect/disconnect/etc. can still be called on it. This means we should not recycle the AudioNodeId immediately.
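For instance (a hedged sketch following the crate's example style; the exact method names may differ):

```rust
use web_audio_api::context::{AudioContext, BaseAudioContext};
use web_audio_api::node::{AudioNode, AudioScheduledSourceNode};

fn main() {
    let context = AudioContext::default();

    // A source that is scheduled to stop: once the stop time has passed it can
    // never be started again, so its processor could in principle be freed early.
    let mut src = context.create_constant_source();
    src.connect(&context.destination());
    src.start();
    src.stop_at(1.0);

    // ...but the control thread still owns `src`, so calls like these remain
    // perfectly legal, which is why the AudioNodeId must not be recycled yet.
    std::thread::sleep(std::time::Duration::from_secs(2));
    src.disconnect();
    drop(src);
}
```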
Cool to re-open this discussion, I really think we have a lot of room for perf improvements in handling the audio graph more smartly (...but that's quite/really tricky for sure...)
> an AudioBufferSourceNode has ended and does not loop
I don't really get that: if the AudioBufferSourceNode is in a loop, it's not ended and the tail time should remain true unless stop is called explicitly (and if that's not the case, I would consider it an issue ...and my fault actually :)
> a ConstantSourceNode has stopped
This is true for all AudioScheduledSourceNodes, I think
Related note regarding #396 and AudioBufferSourceNode, because I'm not really sure what we do in this case:
- We have a buffer that is 10 minutes long
- We start the source
- After 1 second, we `disconnect()` the node and drop it, but without calling `stop()` explicitly...
Does the renderer keep computing things that cannot be heard anymore? (I'm actually afraid this is the case, and maybe that could explain the issue)
...And we could imagine the exact same setup with an OscillatorNode or a ConstantSourceNode, which would actually be worse because they would render forever (as tail_time == true unless stop is explicitly called on these nodes...)
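To make the scenario concrete (a hedged sketch following the crate's README style; the file path is made up):

```rust
use std::fs::File;
use web_audio_api::context::{AudioContext, BaseAudioContext};
use web_audio_api::node::{AudioNode, AudioScheduledSourceNode};

fn main() {
    let context = AudioContext::default();

    // Hypothetical 10-minute long file
    let file = File::open("very-long-sample.wav").unwrap();
    let buffer = context.decode_audio_data_sync(file).unwrap();

    let mut src = context.create_buffer_source();
    src.set_buffer(buffer);
    src.connect(&context.destination());
    src.start();

    // One second later: disconnect and drop the handle, but never call stop()
    std::thread::sleep(std::time::Duration::from_secs(1));
    src.disconnect();
    drop(src);

    // Question: does the render thread keep pulling the remaining ~9 minutes of
    // samples into a node that is no longer connected to anything? The same
    // setup with an OscillatorNode or ConstantSourceNode would render forever,
    // since their tail_time stays true until stop() is called.
}
```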
If I'm not mistaken in the first place, maybe a new rule for AudioScheduledSourceNodes could be: "if disconnected and dropped on the main thread (so no possibility to reconnect anymore), tail time is false whatever the renderer pretends"
And then (needs review) maybe this could be generalized to:
We would also drop the AudioProcessor when both:
- the control thread has dropped the corresponding AudioNode
- the processor has no output
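In the same spirit, a sketch of how the two rules would combine (again, made-up names, not the actual implementation):

```rust
// Made-up state for illustration, not the crate's actual graph bookkeeping.
struct ProcessorState {
    control_handle_dropped: bool, // the AudioNode was dropped on the control thread
    has_inputs: bool,
    has_outputs: bool,
    tail_time: bool,
}

fn should_drop(p: &ProcessorState) -> bool {
    // current rule: nothing upstream and nothing left to emit
    let current = p.control_handle_dropped && !p.has_inputs && !p.tail_time;
    // proposed generalization: nobody can reconnect it and it feeds nothing downstream
    let generalized = p.control_handle_dropped && !p.has_outputs;
    current || generalized
}
```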
Yes, all valid points.
> Does the renderer keep computing things that cannot be heard anymore?
Reading the spec literally, yes, it should continue processing regardless of whether the outputs are used. https://webaudio.github.io/web-audio-api/#rendering-loop - Step 4 requires considering all nodes
Now common sense would make you think we don't need to render pieces of the audio graph that are not connected to the destination node, but this is only true when none of these nodes have observable side effects
Nodes with observable side effects include
- destination node, obviously
- analyser node
- media stream destination node
- user-provided worklet nodes (they could ship audio over the network, for example)
- delay node (e.g. buffer signals for a long time, and then connect the node to the destination)
- ...
As you can see, this becomes quite hairy easily. But maybe the browser implementations don't care and only traverse backwards from the destination node?
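For what it's worth, a "traverse backwards from the roots" pass could look roughly like this, treating the destination and every node with observable side effects as roots (made-up types, not the crate's actual graph code):

```rust
use std::collections::{HashMap, HashSet, VecDeque};

type NodeId = usize;

// Made-up per-node info; the real graph representation is different.
struct NodeInfo {
    inputs: Vec<NodeId>,    // nodes feeding into this one
    has_side_effects: bool, // destination, worklet shipping data out, etc.
}

/// Mark every node that must still be rendered: anything that can reach the
/// destination or another node with observable side effects.
fn live_nodes(graph: &HashMap<NodeId, NodeInfo>, destination: NodeId) -> HashSet<NodeId> {
    let mut live = HashSet::new();
    let mut queue = VecDeque::new();

    // Roots: the destination plus all side-effecting nodes.
    for (id, node) in graph {
        if *id == destination || node.has_side_effects {
            queue.push_back(*id);
        }
    }

    // Walk the graph backwards, from consumers to producers.
    while let Some(id) = queue.pop_front() {
        if live.insert(id) {
            if let Some(node) = graph.get(&id) {
                for input in &node.inputs {
                    queue.push_back(*input);
                }
            }
        }
    }

    live
}
```

Everything not in the returned set could then be skipped or dropped, which covers the orphaned source nodes as a special case.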
That said we could easily drop orphaned source nodes, it would be a nice start.
Hehe, ok, but can you actually observe any side effect if you don't have any handle to the node anymore, and therefore no way to read anything back from it?
Ah true, disregard delay node and analyser node
[edited to remove noise :)]
> As you can see, this becomes quite hairy easily.
Yes, I agree, this is kind of a nightmare to think about
> That said we could easily drop orphaned source nodes, it would be a nice start.
Yup sure!
I can already think of another problematic case, but it's probably best to clearly define what "observable side effects" means beforehand