audio-transformers-course
The Hugging Face Course on Transformers for Audio
Hi! I just finished training a model for the hands-on exercise of Unit 4. Since even DistilHuBERT can take hours to train, I used the PEFT library with LoRA to fine-tune...
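For reference, a minimal sketch of wrapping DistilHuBERT with LoRA via PEFT is shown below. The target module names, rank, and other hyperparameters are assumptions for illustration, not the poster's exact recipe or the course's official solution:

```python
# Minimal sketch: LoRA fine-tuning of DistilHuBERT with PEFT.
# Hyperparameters and target module names below are assumptions, not a verified recipe.
from transformers import AutoModelForAudioClassification
from peft import LoraConfig, get_peft_model

model = AutoModelForAudioClassification.from_pretrained(
    "ntu-spml/distilhubert", num_labels=10  # 10 genres, as in the Unit 4 GTZAN exercise
)

# LoRA injects small trainable low-rank matrices into the chosen attention
# projections while the base model weights stay frozen.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed attention projection names in the HuBERT blocks
    modules_to_save=["classifier"],       # keep the classification head fully trainable
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable
```

The wrapped model can then be passed to the `Trainer` exactly as in the course; only the LoRA adapters and the classification head receive gradient updates, which is what makes training feasible without hours of compute.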
I need to log in to access the "speechcolab/gigaspeech" dataset. This information is essential; it should be provided in a footnote. (File: chapters/en/chapter1/streaming.mdx)
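A short sketch of what such a footnote could point to, assuming the usual gated-dataset flow (accept the dataset's terms on the Hub, then authenticate before streaming):

```python
# Sketch: "speechcolab/gigaspeech" is gated, so authenticate before loading it.
# The token prompt shown here is one possible approach, not necessarily what the chapter uses.
from huggingface_hub import login
from datasets import load_dataset

login()  # prompts for a Hugging Face access token

gigaspeech = load_dataset(
    "speechcolab/gigaspeech", "xs", split="train", streaming=True
)
print(next(iter(gigaspeech)))
```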
In the section about [preprocessing](https://huggingface.co/learn/audio-course/chapter1/preprocessing#preprocessing-audio-data), it would be useful to add type/shape information for the data produced after preprocessing. Specifically, https://github.com/huggingface/audio-transformers-course/blob/ac81306fb8822fa8c4e2a43748be8ba31d8bb043/chapters/en/chapter1/preprocessing.mdx#L186 here it would be very useful to add...
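Something along the lines of the hedged sketch below could illustrate the requested shape/type information. The checkpoint and dataset are the ones used in the preprocessing chapter; the exact variable names are illustrative:

```python
# Sketch: inspecting the type and shape of the Whisper feature extractor output.
import numpy as np
from datasets import Audio, load_dataset
from transformers import WhisperFeatureExtractor

feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small")

minds = load_dataset("PolyAI/minds14", name="en-AU", split="train")
minds = minds.cast_column("audio", Audio(sampling_rate=16_000))

example = minds[0]
features = feature_extractor(
    example["audio"]["array"], sampling_rate=example["audio"]["sampling_rate"]
)

# Whisper pads/truncates every input to 30 s and returns a log-mel spectrogram,
# so each example becomes a fixed-size float array.
print(type(features["input_features"]))
print(np.array(features["input_features"]).shape)  # e.g. (1, 80, 3000) for whisper-small
```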
Hi there 👋 Let's translate the course to Russian so that the whole community can benefit from this resource 🌎! Below are the chapters and files that need translating -...
In Unit 5 of the audio course, the following code is used:

```python
class DataCollatorSpeechSeq2SeqWithPadding:
    processor: Any

    def __call__(
        self, features: List[Dict[str, Union[List[int], torch.Tensor]]]
    ) -> Dict[str, torch.Tensor]:
        # ...
```
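For context, a collator of this shape is commonly written as in the sketch below, following the usual Whisper fine-tuning recipe (including the `@dataclass` decorator the fields rely on); this is not necessarily a verbatim copy of the course's version:

```python
# Sketch of a seq2seq speech data collator; padding details are the standard
# Whisper fine-tuning approach, not guaranteed to match the course text exactly.
from dataclasses import dataclass
from typing import Any, Dict, List, Union

import torch


@dataclass
class DataCollatorSpeechSeq2SeqWithPadding:
    processor: Any

    def __call__(
        self, features: List[Dict[str, Union[List[int], torch.Tensor]]]
    ) -> Dict[str, torch.Tensor]:
        # Audio inputs and text labels have different lengths, so they are
        # padded separately with the matching component of the processor.
        input_features = [{"input_features": f["input_features"][0]} for f in features]
        batch = self.processor.feature_extractor.pad(input_features, return_tensors="pt")

        label_features = [{"input_ids": f["labels"]} for f in features]
        labels_batch = self.processor.tokenizer.pad(label_features, return_tensors="pt")

        # Replace padding token ids with -100 so they are ignored by the loss.
        labels = labels_batch["input_ids"].masked_fill(
            labels_batch.attention_mask.ne(1), -100
        )

        # If a BOS token was prepended during tokenization, strip it here;
        # the model re-adds it during training.
        if (labels[:, 0] == self.processor.tokenizer.bos_token_id).all().cpu().item():
            labels = labels[:, 1:]

        batch["labels"] = labels
        return batch
```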
Hi 👋, I think the output should be:

```
DatasetDict({
    train: Dataset({
        features: ['genre', 'input_values', 'attention_mask'],
        num_rows: 899
    })
    test: Dataset({
        features: ['genre', 'input_values', 'attention_mask'],
        num_rows: 100
    })
})
```

Instead...
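A hedged sketch of the preprocessing step that would yield the columns shown above (based on the Unit 4 hands-on flow; the exact arguments and column names are assumptions, not a verbatim copy of the course code):

```python
# Sketch: GTZAN preprocessing that keeps 'genre' and adds 'input_values' / 'attention_mask'.
from datasets import Audio, load_dataset
from transformers import AutoFeatureExtractor

gtzan = load_dataset("marsyas/gtzan", "all")
gtzan = gtzan["train"].train_test_split(seed=42, shuffle=True, test_size=0.1)

feature_extractor = AutoFeatureExtractor.from_pretrained(
    "ntu-spml/distilhubert", do_normalize=True, return_attention_mask=True
)
gtzan = gtzan.cast_column("audio", Audio(sampling_rate=feature_extractor.sampling_rate))


def preprocess_function(examples):
    audio_arrays = [x["array"] for x in examples["audio"]]
    return feature_extractor(
        audio_arrays,
        sampling_rate=feature_extractor.sampling_rate,
        max_length=int(feature_extractor.sampling_rate * 30.0),
        truncation=True,
        return_attention_mask=True,
    )


gtzan_encoded = gtzan.map(
    preprocess_function,
    remove_columns=["audio", "file"],  # drop raw audio; 'genre' stays, giving the columns above
    batched=True,
)
print(gtzan_encoded)
```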
Hi there 👋 Let's translate the course to `Spanish` so that the whole community can benefit from this resource 🌎! Below are the chapters and files that need translating -...
Check-my-progress space is not working.
Hi there, a huge thank you to the authors for creating this excellent course. I’ve really enjoyed it. I did run into an issue while trying to redeem my certificate,...