Karandeep singh
Any updates on this?
Any update on this?
List speaker IDs - `tts --model_path "{modelDir}{checkpointName}" --config_path "{modelDir}config.json" --list_language_idxs --list_speaker_idxs`
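The same information should also be retrievable from Python; a minimal sketch, assuming Coqui TTS's `TTS.api` wrapper, with placeholder paths (the CLI flags above remain the canonical way):

```python
from TTS.api import TTS

# Placeholder paths; substitute your own checkpoint and config.
tts = TTS(model_path="/path/to/checkpoint.pth",
          config_path="/path/to/config.json")
print(tts.speakers)   # speaker IDs, if the model is multi-speaker
print(tts.languages)  # language IDs, if the model is multilingual
```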
Hi @StephennFernandes, I noticed you are actively exploring a lot of Indic models across modalities. Would love to connect with you on LinkedIn: https://linkedin.com/in/kdcyberdude
@chxy95 When can we expect the training settings to be released? Any rough estimate?
@netagl, is your `audio_encoder_per_device_batch_size` set to 1?
Hi @bminixhofer, do I need to update `max_position_embeddings` to 128 while initializing the `roberta-base` model in zett/model/__init__.py? I tried without using the pretrained hypernet model as well; it's still giving OOM. And...
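For reference, a minimal sketch of shrinking the position embeddings when building the model from a config (this is generic `transformers` usage, not the ZeTT code itself; the 2-slot offset is a RoBERTa-specific assumption):

```python
from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained("roberta-base")
# RoBERTa reserves 2 position slots for the padding offset, so 128 usable
# positions correspond to max_position_embeddings = 130.
config.max_position_embeddings = 130
model = AutoModel.from_config(config)  # randomly initialized at the new size
```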
Despite adding the **``** token and loading the model and tokenizer using **unsloth**, I am still getting very high loss. Single-sequence example -

```python
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/gemma-7b-it-bnb-4bit",
    ...
```
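For context, a complete loader call typically looks roughly like this (a sketch with assumed values for the truncated arguments, not the exact settings from the original comment):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/gemma-7b-it-bnb-4bit",
    max_seq_length = 2048,   # assumed context length
    dtype = None,            # auto-detect (bf16 on Ampere+, fp16 otherwise)
    load_in_4bit = True,     # 4-bit quantized loading
)
```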
Hi @danielhanchen, Isn't it better to use **[group_by_length](https://huggingface.co/docs/transformers/en/main_classes/trainer#transformers.Seq2SeqTrainingArguments.group_by_length)** to group short sequences if **packing** is disabled?
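A minimal sketch of what enabling that flag looks like (generic `transformers` training arguments; the surrounding values are assumptions):

```python
from transformers import TrainingArguments

# group_by_length buckets samples of similar length into the same batch,
# reducing padding waste when packing is disabled.
args = TrainingArguments(
    output_dir = "outputs",
    per_device_train_batch_size = 2,
    group_by_length = True,  # the flag under discussion
)
```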
Got it @danielhanchen, it does reduce the loss, but it's still `15.05`, way more than it's supposed to be.