No input in `feedback` and `feedback_interleaved`
Summary
So the feedback examples from 0.12.0 are not working - there's no audio data captured (the mic device is opened and there's an orange dot indicator in the macOS status bar):
- Create a new project with `cargo new --bin`
- Add `coreaudio-rs = "0.12.0"` to the dependencies in Cargo.toml
- Add https://github.com/RustAudio/coreaudio-rs/blob/b671130aca0e3eef25e194e2bfa66a8ad58a8829/examples/feedback.rs or https://github.com/RustAudio/coreaudio-rs/blob/b671130aca0e3eef25e194e2bfa66a8ad58a8829/examples/feedback_interleaved.rs to `src/main.rs`
- You'll get all the `output cb {} frames` logs and no `input cb {} frames` logs
Info
- macOS: 14.5 (23F79)
- coreaudio-rs: 0.12.0
Hi! Thanks for the report! While I am kinda the active maintainer of this repo, I'll admit that I don't know a ton about the feedback examples. Looking at the git history, @HEnquist (sorry for the tag, feel free to tell me it's unwanted) authored those examples 3 years ago. 3 years is a bit of time but maybe Henrik has got some ideas.
My intuition is that macOS has changed since #83. Similarly, iOS 17 has AVSession issues in certain cases. If you were to debug this more and submit a PR, it'd be very welcome.
I'll take a look. I'm not aware of any changes in macOS that cause trouble and I have not needed to make any changes to my own projects to run on the latest macOS versions. I have no idea about iOS.
It's quite common to struggle with permissions. Have you allowed the terminal app to access the microphone?
I just tried the feedback example and it works fine here on Sonoma.
This example is perhaps a bit too simple: it runs at 44.1 kHz but does not try to switch the capture device to that sample rate. If the default capture device (likely the built-in microphone) is set to another rate, the input callback won't get called.
So, open Audio MIDI Setup and make sure that both the default capture and playback devices are set to 44.1 kHz. Or use another value, and adjust the `SAMPLE_RATE` constant in the example accordingly.
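One way the example could fail loudly instead of silently would be to compare the device's configured rate against the expected one before starting the units. A minimal, hypothetical sketch - `check_rate`, its message, and `device_rate` (standing in for a value queried from the device) are mine, not part of coreaudio-rs:

```rust
const SAMPLE_RATE: f64 = 44100.0;

/// Hypothetical guard: return a readable error when the device's
/// configured rate doesn't match what the example assumes.
fn check_rate(device_rate: f64) -> Result<(), String> {
    if (device_rate - SAMPLE_RATE).abs() > f64::EPSILON {
        Err(format!(
            "device runs at {} Hz but the example assumes {} Hz; \
             change it in Audio MIDI Setup or adjust SAMPLE_RATE",
            device_rate, SAMPLE_RATE
        ))
    } else {
        Ok(())
    }
}
```

Printing that error is a lot friendlier than an input callback that simply never fires.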
Seems like the `SAMPLE_RATE` const could be replaced with querying the input device's sample rate at runtime:
```rust
let sample_rate: f64 = input_audio_unit.get_property::<f64>(
    kAudioUnitProperty_SampleRate,
    Scope::Input,
    Element::Input,
)?;
```
then using it in the `in_stream_format` and `out_stream_format` structs, or setting `kAudioUnitProperty_SampleRate` manually before starting the output audio unit:
```rust
output_audio_unit.set_property(
    kAudioUnitProperty_SampleRate,
    Scope::Input,
    Element::Output,
    Some(&sample_rate),
)?;
```
The biggest problem with these examples is that they're not "just works" examples. Whether they work depends on luck (your default input device happening to be set to `44100.0`), or on knowing just enough to fix it yourself - this should probably be changed, or at least pointed out directly in the examples' comments?
Anyway, thanks - it works now! Also, fyi @simlay, a bit of UX feedback:
1. `AudioUnit::set_sample_rate()` is a bit misleading - it uses `Scope::Input` and `Element::Output` by default, which prevents setting sample rates on `Element::Input` (a default mic), for example. I know that you can't change `Scope::Input` on `Element::Input` (hardware mic) or `Scope::Output` on `Element::Output` (hardware speakers), and it probably has its reasons, but calling `set_sample_rate()` on a mic device, seeing via logging that the change went through, and then getting no actual difference in audio playback and behavior is misleading.
2. Are there any plans/work done on `coreaudio-rs` to support any Audio Graph API? I can't figure out how to convert the sample rate on my hardware - is it really possible without using the `coreaudio-sys` Graph API directly? I mean, I'd need `AudioUnit::new(FormatConverterType::AUConverter)` and then:
   - either pass data from `input_audio_unit.set_input_callback` to `convert_audio_unit.set_render_callback`, and from `convert_audio_unit.set_input_callback` to `output_audio_unit.set_render_callback` (this is not possible - there's no way to create `convert_audio_unit.set_input_callback`)
   - or make an AUGraph, chain the units `input_audio_unit -> convert_audio_unit -> output_audio_unit`, and get it done entirely on hardware (could be wrong? idk, may I ask your opinion, @HEnquist?)
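For intuition about what that converter step amounts to, here is a sketch of my own (not coreaudio-rs API): a naive linear-interpolation resampler standing in for what a converter unit between the input and output units would do. A real AUConverter uses proper filtering; this only illustrates the data flow.

```rust
/// Naive linear-interpolation resampler: converts a mono buffer from
/// `from_rate` to `to_rate`. Illustrative only - no anti-aliasing filter.
fn resample_linear(input: &[f32], from_rate: f64, to_rate: f64) -> Vec<f32> {
    if input.is_empty() {
        return Vec::new();
    }
    // Number of output frames covering the same duration at the new rate.
    let out_len = (input.len() as f64 * to_rate / from_rate).floor() as usize;
    // Input frames advanced per output frame.
    let ratio = from_rate / to_rate;
    let mut out = Vec::with_capacity(out_len);
    for i in 0..out_len {
        let pos = i as f64 * ratio;
        let idx = pos.floor() as usize;
        let frac = (pos - idx as f64) as f32;
        let next = (idx + 1).min(input.len() - 1);
        // Interpolate between the two neighbouring input samples.
        out.push(input[idx] + (input[next] - input[idx]) * frac);
    }
    out
}
```

For example, 441 frames at 44.1 kHz become 480 frames at 48 kHz. In a callback-driven chain this is the work the middle node would perform per buffer.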
> The biggest problem with these examples is that they're not "just works" examples. It depends on luck (if your default input device sample rate is set to `44100.0`), or you know just enough to make a fix yourself - this probably should be changed or directly pointed out in comments of these examples?
Yes, there is certainly room for improvement. There is a challenge in that we can't know what devices have been set as the defaults and how they are configured. I think it would be good to mention this somewhere, and recommend that the examples are run with the built-in devices chosen as defaults. But it would be good to at least try switching the sample rate, to show how it's done.
> 2. Are there any plans/work done on `coreaudio-rs` to support any Audio Graph API?
I can only answer for myself here. I haven't needed this so I haven't looked at it. But IMO it would make sense to include support for this.
> - or make an AUGraph, chain the units `input_audio_unit -> convert_audio_unit -> output_audio_unit`, and get it done entirely on hardware (could be wrong? idk, may I ask your opinion, @HEnquist?)
I think this second option is the correct way to do this, but I have no experience with actually doing it. I usually switch the hardware sample rate to the value I want, and avoid the need for conversions.