Feature request: isNonRealtime function in std::timeline
This would return a simple bool indicating whether the audio is currently being bounced (a non-realtime render). It would allow different quality options, where offline renders can run heavier workloads since dropouts are no longer a problem.
Stuff like offline oversampling, heavier filters and reverbs, etc.
Really neat for immersive formats, where you can be working with 12-16 channels or even more: you can offer the user a cheaper version of your algorithm for realtime use and run the full-quality version when bouncing.
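To illustrate, something along these lines is what I'm imagining. This is purely a hypothetical sketch: `std::timeline::isNonRealtime()` is the function being proposed and doesn't exist today, and the processor and helper names are made up for the example.

```
// Hypothetical sketch only: std::timeline::isNonRealtime() is the proposed
// function from this feature request, not part of the current Cmajor library.
processor QualitySwitchingEffect
{
    input  stream float in;
    output stream float out;

    void main()
    {
        loop
        {
            if (std::timeline::isNonRealtime())   // proposed: true while bouncing offline
                out <- fullQualityProcess (in);   // heavier, offline-only path
            else
                out <- realtimeProcess (in);      // cheaper realtime path

            advance();
        }
    }

    // Placeholder functions standing in for the real DSP
    float fullQualityProcess (float x)  { return x; }
    float realtimeProcess (float x)     { return x; }
}
```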
I'm unsure about this approach. The problem is that we don't really know what the performance of the host machine is, so it's entirely possible that the same DSP will run on a low-power embedded processor with limited floating-point capability as well as on a monster desktop processor with plenty of grunt. It would also be unfortunate if some lovely DSP behaviour were locked behind a realtime/offline rendering flag and couldn't be accessed even when, in future, machines are 10x faster and perfectly capable of rendering it in realtime.
My preferred option would be to parameterise the Cmajor patch and use an external to specify the mode the DSP runs in, so that the same patch can operate in both modes. You could see this as a quality vs performance trade-off, with the different modes enabled depending on the context in which the patch is run.
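As a rough sketch of the kind of thing I mean (the names here are purely illustrative, and the external's value would be supplied when the patch is loaded, e.g. by the host or the patch's externals):

```
// Sketch: parameterise the DSP mode instead of querying the transport.
// "qualityMode" is an illustrative name; its value is provided externally
// when the patch is loaded, so the same patch works in either mode.
processor SwitchableProcessor
{
    input  stream float in;
    output stream float out;

    external int qualityMode;   // e.g. 0 = economy, 1 = full quality

    void main()
    {
        loop
        {
            if (qualityMode == 0)
                out <- economyProcess (in);
            else
                out <- fullQualityProcess (in);

            advance();
        }
    }

    float economyProcess (float x)      { return x; }   // placeholder
    float fullQualityProcess (float x)  { return x; }   // placeholder
}
```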
I do agree, though, that there are some offline rendering situations which we don't currently have a good out-of-the-box solution for. The simplest example I can think of is processing which is by its very nature two-pass, for example normalising a block of audio. We could write this as two Cmajor processors: one which finds the largest sample value, and a second which applies a gain to the data. However, we don't have a solution for how to get the largest value out of the first pass, or how to pass it to the (probably JIT-compiled) second pass.
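Sketching those two passes (names purely illustrative), the gap becomes clear: there's no mechanism for the host to run the first processor over the whole file, read the peak back out, and hand it to the second one.

```
// Illustrative sketch of the two passes; the missing piece is the host-side
// plumbing between them, not the processors themselves.

processor FindPeak
{
    input  stream float in;
    output value  float peak;   // largest absolute sample seen so far

    float maxValue;

    void main()
    {
        loop
        {
            maxValue = max (maxValue, abs (in));
            peak <- maxValue;
            advance();
        }
    }
}

processor ApplyGain
{
    input  stream float in;
    input  value  float gain;   // e.g. 1 / peak, taken from the first pass
    output stream float out;

    void main()
    {
        loop
        {
            out <- in * gain;
            advance();
        }
    }
}
```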
So I agree there is room to think through offline rendering scenarios, but it's a broader problem than just heavy processing offline vs. light processing for realtime use.
Oh, I would never fully lock it, especially because in 7-10 years computers will be able to just run it all in the "Ultra" mode in realtime.
I'm talking about having a UI feature like this
I'm writing an immersive-format space simulation for orchestra scenarios, and although Cmajor is more performant than the other options I explored, if you set everything to max, it eats up almost an entire M3 Max in realtime for a full orchestra.
So I have to do smart optimizations, run ancillary mics at lower quality, etc. Of course I want to offer the user the option to set the quality preset, but if they have to switch it manually every time they render, I'm afraid they'll just forget or be too lazy to do it (I know I would). In that case, looking at the plugin's output across the user base, it's either a performance hog or it doesn't sound as good as it could. The third option is for it to be "inconvenient, but the best of both worlds, just slower on the render" <- and it's the inconvenient part I'm trying to skip here: the user having to open the plugin interface (every instance) and flick the switch every time they want to bounce audio, then flick it back when the render finishes so they can continue working.
For reference, this is the kind of host-side facility I have in mind, from JUCE's AudioProcessor: https://docs.juce.com/master/classAudioProcessor.html#ac61949900870c9f6dca63d53ee68f7a0