Thomas Murphy

Results: 7 comments by Thomas Murphy

I agree global will be easier; it should just be a one-hot vector representing the speaker. Am I thinking about this wrong, or does local conditioning require us to train on...
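For concreteness, here is a minimal numpy sketch of how a one-hot global conditioning vector could feed into a WaveNet-style gated activation. The names and shapes (num_speakers, residual_channels, V_f, V_g) are illustrative assumptions, not this repo's actual API.

```python
import numpy as np

num_speakers = 4
residual_channels = 32
time_steps = 100

# One-hot vector identifying the speaker (the global conditioning input h).
speaker_id = 2
h = np.eye(num_speakers)[speaker_id]          # shape: (num_speakers,)

# Hypothetical per-layer projections of h into the filter and gate paths.
V_f = np.random.randn(num_speakers, residual_channels) * 0.1
V_g = np.random.randn(num_speakers, residual_channels) * 0.1

# Stand-ins for the dilated causal convolution outputs of one layer
# (filter path and gate path), shape (time_steps, residual_channels).
conv_filter = np.random.randn(time_steps, residual_channels)
conv_gate = np.random.randn(time_steps, residual_channels)

# Gated activation: z = tanh(W_f*x + V_f^T h) * sigmoid(W_g*x + V_g^T h).
# The speaker term is constant over time, so it broadcasts across all steps.
filter_bias = h @ V_f                         # (residual_channels,)
gate_bias = h @ V_g                           # (residual_channels,)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
z = np.tanh(conv_filter + filter_bias) * sigmoid(conv_gate + gate_bias)

print(z.shape)  # (100, 32)
```

Because the speaker vector is global, the same bias is added at every time step, which is what makes it cheaper than local conditioning.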

I mean, let's give it a shot and see what happens. Google Research has a bunch of papers over on their page about HMM-ing characters to phonemes, so we could...
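As a toy illustration of that HMM idea (not taken from any of the Google papers), here is a tiny Viterbi decode where the hidden states are phonemes and the observations are characters. All states, symbols, and probabilities below are made up for demonstration; a real grapheme-to-phoneme model would be trained on an aligned lexicon.

```python
import numpy as np

phonemes = ["K", "AE", "T"]                    # hidden states (hypothetical)
chars = {"c": 0, "a": 1, "t": 2}               # observation alphabet

start = np.log(np.array([0.8, 0.1, 0.1]))
trans = np.log(np.array([[0.1, 0.8, 0.1],      # K  -> K/AE/T
                         [0.1, 0.1, 0.8],      # AE -> K/AE/T
                         [0.4, 0.3, 0.3]]))    # T  -> K/AE/T
emit = np.log(np.array([[0.9, 0.05, 0.05],     # P(char | K)
                        [0.05, 0.9, 0.05],     # P(char | AE)
                        [0.05, 0.05, 0.9]]))   # P(char | T)

def viterbi(word):
    """Most likely phoneme sequence for a spelling, under the toy HMM."""
    obs = [chars[c] for c in word]
    score = start + emit[:, obs[0]]
    back = []
    for o in obs[1:]:
        cand = score[:, None] + trans          # (prev_state, next_state)
        back.append(cand.argmax(axis=0))
        score = cand.max(axis=0) + emit[:, o]
    # Trace back the best path.
    path = [int(score.argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return [phonemes[s] for s in reversed(path)]

print(viterbi("cat"))  # expected: ['K', 'AE', 'T']
```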

Yeah, I will tomorrow when I'm in the office; they're on a box I have there. On Sat, Oct 8, 2016 at 10:38 PM, Nako Sung wrote: > @thomasmurphycodes https://github.com/thomasmurphycodes Could...

I think that's the case for sure. They explicitly mention the convolution up-sampling (zero-padding) in the paper. On Mon, Oct 10, 2016 at 7:30 AM, Igor Babuschkin wrote: >...
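A rough sketch of that up-sampling step, assuming the local conditioning features arrive at a lower frame rate than the audio: insert zeros between frames and convolve with a kernel, i.e. a 1-D transposed convolution. The function name, stride, and kernel choice here are hypothetical, not what the paper or this repo uses.

```python
import numpy as np

def upsample_transposed_conv1d(cond, stride, kernel):
    """Zero-stuff the conditioning sequence, then convolve each channel,
    bringing frame-rate features up toward the audio sample rate."""
    t, channels = cond.shape
    # Insert (stride - 1) zeros between consecutive frames
    # (the zero-padding up-sampling referred to above).
    stuffed = np.zeros((t * stride, channels))
    stuffed[::stride] = cond
    # Convolve each channel with the (here: fixed toy) kernel.
    return np.stack(
        [np.convolve(stuffed[:, c], kernel, mode="same") for c in range(channels)],
        axis=1,
    )

# Toy example: 10 linguistic-feature frames upsampled 80x.
frames = np.random.randn(10, 3)      # (frames, feature_channels), hypothetical
kernel = np.hanning(160)             # stand-in for a learned kernel
audio_rate_cond = upsample_transposed_conv1d(frames, stride=80, kernel=kernel)
print(audio_rate_cond.shape)         # (800, 3)
```

In a trained network the kernel would be learned rather than fixed, but the zero-stuffing structure is the same.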

Possibly a memory overhead issue? Or is it converging? On Oct 14, 2016, at 11:24, Alex Beloi wrote: > @Zeta36, @ibab Apologies for...

That's a great idea, Chris. I wonder if we could create an expanded multi-speaker set on the VCTK text within this project. On Mon, Oct 17, 2016 at 2:59 AM,...

@andrenatal That implementation doesn't do STT though, right? I believe it's an implementation of the generative material described in the whitepaper.