Stream audio
It works, but there are a few problems:
1- it spins the CPU a lot; this needs investigation and is the main problem
2- it uses the wavy crate; the cpal crate would have been better for Android and Windows support, but I can't even test cpal because it doesn't work on my PC. wavy has Android and Windows support planned, though.
3- a couple of minor things: it doesn't handle multiple streamers correctly, and this feature should also be gated behind a feature flag (a quick sketch of the gating is below).
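
A minimal sketch of what the gating in point 3 could look like, assuming a hypothetical `audio-stream` Cargo feature (the feature name and function are illustrative, not termchat's actual structure):

```rust
// Hypothetical `audio-stream` Cargo feature; compile the streaming code
// only when it is enabled.
#[cfg(feature = "audio-stream")]
pub fn start_audio_stream() {
    // wavy-based capture and sending would live here.
}

// Fallback so the rest of the code still compiles without the feature.
#[cfg(not(feature = "audio-stream"))]
pub fn start_audio_stream() {
    eprintln!("termchat was built without the audio-stream feature");
}

fn main() {
    start_audio_stream();
}
```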
Regarding point 1: from message-io's side, when the CPU goes up it could be because of the active waiting in the sending methods. message-io enters this "active waiting" when the receiver has filled its input buffer and cannot receive more data until it processes what it already has. Usually, running in release mode is enough for the receiver to read packets faster than a sender can send them on "localhost" (and therefore in any scenario). I will investigate why the receiver fills its buffer, since the issue seems related to that.
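
Not message-io's actual code, but a minimal sketch of the kind of "active waiting" described above, using a bounded std channel to stand in for the receiver's input buffer:

```rust
use std::sync::mpsc::{sync_channel, TrySendError};
use std::thread;
use std::time::Duration;

fn main() {
    // Small bounded buffer standing in for the receiver's input buffer.
    let (tx, rx) = sync_channel::<Vec<u8>>(8);

    // Slow receiver: it needs time to process each chunk.
    let receiver = thread::spawn(move || {
        for chunk in rx {
            thread::sleep(Duration::from_millis(5)); // "processing"
            let _ = chunk;
        }
    });

    // Fast sender: when the buffer is full, try_send fails and the loop
    // spins, burning CPU -- the "active waiting" situation.
    for _ in 0..1_000 {
        let mut msg = vec![0u8; 960];
        loop {
            match tx.try_send(msg) {
                Ok(()) => break,
                Err(TrySendError::Full(returned)) => {
                    msg = returned; // busy-wait until there is room
                }
                Err(TrySendError::Disconnected(_)) => return,
            }
        }
    }
    drop(tx);
    receiver.join().unwrap();
}
```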
Regarding point 1: the receiver is doing a lot of work here https://github.com/lemunozm/termchat/pull/39/files#diff-87619baf236177a7a9fc38150961151b7583b6f2f082ade75915f3ac4ca49c41R303 because it is constantly receiving data, since the sender sends without any rate limiting. I think the problem comes from https://github.com/lemunozm/termchat/pull/39/files#diff-35d8224790c042925dbb9a36a948deddd5111e352bde21ec476b78d3876a8576R70, which in most cases returns 0, and that wakes up the executor on the receiver side.
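
Again, not the actual PR code, but a rough illustration of why a capture call that usually returns 0 samples keeps the loop spinning (the `record_chunk` function here is hypothetical):

```rust
use std::time::{Duration, Instant};

// Hypothetical stand-in for the capture call linked above: most
// invocations find no new samples and return an empty chunk.
fn record_chunk() -> Vec<i16> {
    Vec::new() // usually 0 samples -> the caller is woken up for nothing
}

fn main() {
    let start = Instant::now();
    let mut wakeups = 0u64;

    // The loop never blocks: every empty chunk still costs a full
    // iteration, so CPU usage stays near 100% on this thread.
    while start.elapsed() < Duration::from_millis(100) {
        let chunk = record_chunk();
        wakeups += 1;
        if !chunk.is_empty() {
            // send the chunk over the network ...
        }
    }
    println!("{} wakeups in 100 ms, almost all of them empty", wakeups);
}
```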
In other words: the sender never stops sending audio (100% CPU in the sender), and the receiver collapses under this amount of messages (100% CPU in the receiver).
I think that offering some kind of delay in the Processing enum, e.g. Processing::Partial(Duration::from_millis(1)), in order to send the audio in larger chunks, would reduce the CPU usage at the sender and avoid saturating the receiver. I will open a PR with this architectural feature.
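
A rough sketch of what that could look like, using a simplified `Processing` enum (this is not termchat's actual type, just an illustration of the proposed delay variant):

```rust
use std::thread;
use std::time::Duration;

// Simplified stand-in for the Processing enum mentioned above; the
// proposed Partial(Duration) variant lets an action ask to be
// re-scheduled after a delay instead of immediately.
enum Processing {
    Completed,
    Partial(Duration),
}

fn send_next_audio_chunk() -> Processing {
    // ... capture and send a chunk here ...
    // Ask to be called again in 1 ms, giving the receiver time to drain.
    Processing::Partial(Duration::from_millis(1))
}

fn main() {
    for _ in 0..10 {
        match send_next_audio_chunk() {
            Processing::Completed => break,
            // The executor sleeps instead of re-polling immediately, which
            // lowers sender CPU and batches audio into larger chunks.
            Processing::Partial(delay) => thread::sleep(delay),
        }
    }
}
```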
PR updated.
Now it seems that the performance is acceptable (especially in release mode).
I think this should be closed now; hopefully someone will come up with a better implementation.
Hi @sigmaSd, I haven't had the opportunity to test it, but if you consider the feature to be working, I'm OK with adding it.