Add F5 TTS pipeline
What does this PR do?
Add F5 TTS #10043
Okay, got all the code that is needed into two files, and used existing diffusers primitives in the obvious places. Next I'll work on integrating it into the diffusers class structure
Attention!
Seems like we can use the diffusers Attention class directly, but we need to add a new Processor to support RoPE embeddings on selected heads, as in F5
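For reference, a rough sketch of what such a processor could look like — not the final implementation, just modeled on the usual AttnProcessor2_0 call pattern, with a hypothetical `rope_heads` argument and a precomputed `rotary_emb` (cos, sin) pair assumed to be passed in by the caller:

```python
import torch
import torch.nn.functional as F


def _rotate(x, cos, sin):
    # Standard rotary embedding on interleaved channel pairs; cos/sin broadcast over (seq_len, head_dim).
    x1, x2 = x[..., ::2], x[..., 1::2]
    rotated = torch.stack((-x2, x1), dim=-1).reshape_as(x)
    return x * cos + rotated * sin


class F5RoPEAttnProcessor:
    """Sketch: apply RoPE only to the first `rope_heads` heads (hypothetical argument)."""

    def __init__(self, rope_heads=None):
        self.rope_heads = rope_heads  # None means apply RoPE to all heads

    def __call__(self, attn, hidden_states, encoder_hidden_states=None, attention_mask=None, rotary_emb=None):
        batch_size = hidden_states.shape[0]
        if encoder_hidden_states is None:
            encoder_hidden_states = hidden_states

        query = attn.to_q(hidden_states)
        key = attn.to_k(encoder_hidden_states)
        value = attn.to_v(encoder_hidden_states)

        head_dim = query.shape[-1] // attn.heads
        query = query.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
        key = key.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
        value = value.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)

        if rotary_emb is not None:
            cos, sin = rotary_emb  # assumed precomputed, shaped (seq_len, head_dim)
            n = attn.heads if self.rope_heads is None else self.rope_heads
            query = torch.cat([_rotate(query[:, :n], cos, sin), query[:, n:]], dim=1)
            key = torch.cat([_rotate(key[:, :n], cos, sin), key[:, n:]], dim=1)

        out = F.scaled_dot_product_attention(query, key, value, attn_mask=attention_mask)
        out = out.transpose(1, 2).reshape(batch_size, -1, attn.heads * head_dim)
        out = attn.to_out[0](out)  # output projection
        return attn.to_out[1](out)  # dropout
```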
Tokenization
F5 uses a character-level tokenizer for the text, so we might want to write a simple tokenizer class for it.
It might be fine to keep it as a simple function for now, since it's very straightforward.
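A minimal sketch of the function-based option (hypothetical helper names; the real F5 vocab file format may differ):

```python
def build_char_vocab(texts, pad_token="<pad>", unk_token="<unk>"):
    """Build a character -> index lookup from a list of strings (hypothetical helper)."""
    vocab = {pad_token: 0, unk_token: 1}
    for text in texts:
        for ch in text:
            if ch not in vocab:
                vocab[ch] = len(vocab)
    return vocab


def tokenize_chars(text, vocab, unk_token="<unk>"):
    """Map every character of `text` to its index, falling back to the unk id."""
    unk_id = vocab[unk_token]
    return [vocab.get(ch, unk_id) for ch in text]


# usage
vocab = build_char_vocab(["hello world"])
print(tokenize_chars("hello", vocab))  # e.g. [2, 3, 4, 4, 5]
```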
Tests
The basic structure looks good now; let's add some tests and then make it more diffusers-friendly! Adding tests will also force me to follow the expected structure more closely and ensure that the code is not buggy
Flow matching/Schedulers
We will also need to use one of the schedulers from Diffusers. I think F5 only uses the simple Euler method, but the sway sampling step needs to be accounted for somehow; since it's just a change in the discretisation schedule, it should be straightforward
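A rough sketch of what the warped discretisation could look like (the formula is my reading of F5-TTS's sway sampling, so worth double-checking against the OG repo):

```python
import torch


def sway_sampled_timesteps(num_steps, sway_coef=-1.0):
    """Uniform t in [0, 1] warped by sway sampling: t <- t + s * (cos(pi/2 * t) - 1 + t).
    A negative coefficient concentrates steps near the noisy end of the trajectory."""
    t = torch.linspace(0, 1, num_steps + 1)
    return t + sway_coef * (torch.cos(torch.pi / 2 * t) - 1 + t)
```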
Future work
- Support streaming (already there in the OG F5 repo), although this is really more like chunk-based inference. The current model is non-causal, so only chunk-based streaming makes sense anyway
- Triton server inference, again already there in the F5 repo
Current status
- [x] Pipeline forward pass working
- [x] Checkpoint converted to HF format
- [x] Same forward passes from OG F5 and pipeline
- [x] Scheduler
To do
- [ ] Tests
Got the same forward passes as the OG F5! Next up is writing some tests.
Scheduler done! FlowMatchEulerDiscreteScheduler is what we want to use, with slight modifications for sway sampling.
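A rough sketch of how the warped schedule could be handed to it, assuming set_timesteps accepts custom sigmas (recent diffusers versions do) and that the scheduler's sigma corresponds to 1 - t in the F5 convention — both points worth verifying in the final pipeline. `sway_sampled_timesteps` is the hypothetical helper from my earlier comment:

```python
from diffusers import FlowMatchEulerDiscreteScheduler

scheduler = FlowMatchEulerDiscreteScheduler()

num_steps = 32
t = sway_sampled_timesteps(num_steps, sway_coef=-1.0)  # increasing, 0 -> 1
sigmas = (1.0 - t)[:-1]  # decreasing, 1 -> ~0; the scheduler is expected to append the terminal 0 itself
scheduler.set_timesteps(sigmas=sigmas.tolist(), device="cpu")
print(scheduler.timesteps)
```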
@asomoza I was writing some tests for this and was confused about why, in the common test _test_attention_slicing_forward_pass, the generator_device is set to cpu while the torch_device can be anything. This is currently breaking things for me when my device is cuda, or mps in the case of a Mac.
Ref: https://github.com/ayushtues/diffusers/blob/cde02b061b6f13012dfefe76bc8abf5e6ec6d3f3/tests/pipelines/test_pipelines_common.py#L1551
The same is true for some other tests that also set the generator_device to cpu.
Also, any suggestions on how to add the character-level tokenization of F5? It's just a simple character-to-index lookup, but I'm not sure whether to make a new tokenizer class for it or just save it as a dict and load it somehow.
Sorry I missed this, thanks a lot. Cc'ing @sayakpaul for the testing questions.
@ayushtues that is so that the inputs remain the same across devices.
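To illustrate the idea: noise drawn from a CPU-seeded generator is identical regardless of which accelerator the model itself runs on, so the test inputs stay comparable across cpu/cuda/mps. A minimal sketch with purely illustrative shapes:

```python
import torch

# Draw the random inputs with a CPU generator so they are the same on every device,
# then move them to whatever torch_device the test actually runs on.
generator = torch.Generator("cpu").manual_seed(0)
latents = torch.randn(1, 4, 8, 8, generator=generator)  # shape is just illustrative
latents = latents.to("cuda" if torch.cuda.is_available() else "cpu")
```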
@ayushtues do we want to revive this PR? 👀
Hi @sayakpaul, I'm currently on a trip. The PR was mostly done and only the tests were remaining. Happy to have someone else finish it, or I will pick it up myself in December.
As a side note, F5 is widely regarded as one of the best TTS models, so it's definitely worth integrating.
Cool cool. Let us know whenever ready
Starting this back up again! I need to merge the changes from the boilerplate removal in diffusers first. EDIT: The main merge worked out of the box.