Can I use multiple speakers to improve each other if they are the same voice in different settings / accents?
For example, I'm trying to voice clone a character who sometimes speaks clearly but is sometimes heard over a radio (no static), so their voice is distorted. If I train on both sets of recordings but assign them separate speaker IDs in the same model, will or could they benefit from each other, given how similar they are? I only know a little about how this works internally, so the answer probably depends on exactly how the model is structured, but if the speaker ID is just one integer conditioning an otherwise shared model, I figured it might.
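To make my mental model concrete, here's a rough sketch of what I mean by "one speaker ID conditioning a shared model". This is not Piper's actual code; all the layer names and sizes are illustrative assumptions.

```python
# Conceptual sketch (NOT Piper's actual architecture): a multi-speaker TTS
# model where the speaker ID selects a learned embedding and everything
# else is shared across speakers.
import torch
import torch.nn as nn

class MultiSpeakerToyModel(nn.Module):
    def __init__(self, num_speakers: int, text_dim: int = 256, spk_dim: int = 64):
        super().__init__()
        # One learned vector per speaker ID; this is the only per-speaker part.
        self.speaker_embedding = nn.Embedding(num_speakers, spk_dim)
        # The rest of the network is shared by all speakers.
        self.shared = nn.Sequential(
            nn.Linear(text_dim + spk_dim, 512),
            nn.ReLU(),
            nn.Linear(512, 80),  # e.g. 80 mel bins per frame
        )

    def forward(self, text_features: torch.Tensor, speaker_id: torch.Tensor):
        spk = self.speaker_embedding(speaker_id)                    # (batch, spk_dim)
        spk = spk.unsqueeze(1).expand(-1, text_features.size(1), -1)
        return self.shared(torch.cat([text_features, spk], dim=-1))

# Both the "clean" speaker (ID 0) and the "radio" speaker (ID 1) update the
# shared weights during training, which is why I'd expect data from one to
# help the other.
model = MultiSpeakerToyModel(num_speakers=2)
frames = model(torch.randn(1, 50, 256), torch.tensor([1]))
print(frames.shape)  # torch.Size([1, 50, 80])
```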
I don't think Piper TTS supports that, but Kokoro TTS does support voice blending and voice switching.
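A minimal sketch of what blending looks like with Kokoro, assuming the `KPipeline` API from the `kokoro` Python package. Kokoro voices are style tensors, so two of them can be mixed by a weighted average; the voice file paths and the mixing weights here are just examples, and passing a raw tensor in place of a voice name is my assumption about the API, so treat this as a sketch rather than a verified recipe.

```python
# Sketch of Kokoro voice blending; voice .pt paths and weights are examples.
import torch
import soundfile as sf
from kokoro import KPipeline

pipeline = KPipeline(lang_code='a')  # 'a' = American English

# Load two voice style tensors (downloaded from the Kokoro voice pack)
# and blend them with a weighted average.
voice_a = torch.load('af_heart.pt', weights_only=True)
voice_b = torch.load('af_bella.pt', weights_only=True)
blended = 0.7 * voice_a + 0.3 * voice_b

# Pass the blended tensor where a voice name would normally go.
for i, (graphemes, phonemes, audio) in enumerate(
    pipeline("Hello from a blended voice.", voice=blended)
):
    sf.write(f'blend_{i}.wav', audio, 24000)  # Kokoro outputs 24 kHz audio
```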