Connor Henderson
Curious whether, by adding `return_tensors` to `get_prompt_ids`, you're setting up to effectively do `condition_on_previous_text` by cleverly feeding batches / prompts into `model.generate()` calls (i.e. the first chunk of a second model.generate...
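Something like this rough sketch is what I have in mind, assuming the `return_tensors="pt"` support discussed here and the existing `prompt_ids` argument to `generate()` (`audio_chunks` is a hypothetical list of 16 kHz audio arrays, and exact decoding/prompt-stripping behavior may differ):

```python
# Rough sketch: condition each chunk on the previous chunk's text by chaining
# prompts across generate() calls (not a definitive implementation).
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

previous_text = ""
transcript = []
for chunk in audio_chunks:  # hypothetical iterable of raw audio arrays
    inputs = processor(chunk, sampling_rate=16000, return_tensors="pt")
    prompt_ids = (
        processor.get_prompt_ids(previous_text, return_tensors="pt")
        if previous_text
        else None
    )
    pred_ids = model.generate(inputs.input_features, prompt_ids=prompt_ids)
    # decode the new chunk and use it as the prompt for the next one
    previous_text = processor.batch_decode(pred_ids, skip_special_tokens=True)[0]
    transcript.append(previous_text)
```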
Rebased to include a tolerance increase for an unrelated flaky PT-FLAX whisper test
@AvivSham thanks for sharing. I looked at this and I think it may just be that prompting can be finicky. I believe the model perceives the prompt as previous context,...
@dgram0 thanks for sharing, I was able to repro this. As far as its relation to prompting goes, I think this is another case of prompt sensitivity as opposed to a...
Thanks for the review @hollance! Addressed the comments above; the only part that might need follow-up discussion is making the `labels` compatible with the `Trainer`.

> Re labels, FastSpeech2 is somewhat...
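Concretely, the contract I have in mind is just that when `labels` is passed, the forward returns a loss the `Trainer` can pick up from the outputs; an illustrative-only sketch (names are made up, not the final FastSpeech2 signature):

```python
# Illustrative sketch of the Trainer contract: if `labels` is provided, the
# model output exposes a loss (Trainer reads outputs["loss"] / outputs[0]).
from torch import nn

class ToySpectrogramModel(nn.Module):
    def __init__(self, num_mel_bins=80):
        super().__init__()
        self.proj = nn.Linear(num_mel_bins, num_mel_bins)

    def forward(self, input_values, labels=None):
        spectrogram = self.proj(input_values)
        outputs = {"spectrogram": spectrogram}
        if labels is not None:
            # e.g. an L1 reconstruction loss against the target spectrogram
            outputs = {"loss": nn.functional.l1_loss(spectrogram, labels), **outputs}
        return outputs
```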
Appreciate the comments @hollance! @ArthurZucker @sanchit-gandhi this should be ready for your review now
Thank you for the review @sanchit-gandhi, comments should be addressed now. Centralizing a note on passing the config instead of args here since there were a few comments on that...
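For illustration, the config-instead-of-args pattern being asked for is along these lines (hypothetical class and attribute names, not the actual module):

```python
# Hypothetical example: submodules take the whole `config` rather than a long
# list of individual constructor arguments.
from torch import nn

class ExampleEncoderLayer(nn.Module):
    def __init__(self, config):
        super().__init__()
        # hyperparameters are read off the config instead of being passed one by one
        self.attention = nn.MultiheadAttention(config.hidden_size, config.num_attention_heads)
        self.feed_forward = nn.Linear(config.hidden_size, config.intermediate_size)
```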
Thanks @ArthurZucker for the review, your comments should be addressed now
@ArthurZucker comments are addressed; I left two follow-up questions: the first on whether the changes requested on `# Copied from …` code in speecht5 hifigan should in fact be made, the second...
Thanks for clarifying @sanchit-gandhi, addressed that. I believe the only lingering question is the one from this comment https://github.com/huggingface/transformers/pull/23439#discussion_r1302119798 around whether I should remove the tokenizer from the docstring examples...