blog
Public repo for HF blog posts
I have already used XLSR-53 successfully, fine-tuning it following https://huggingface.co/blog/fine-tune-xlsr-wav2vec2. However, I have a lot of unlabeled data. I understand that building a Wav2Vec2 model from scratch does not...
Hi, I am facing an error while retraining. Code:

```python
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./roberta-retrained",
    overwrite_output_dir=True,
    num_train_epochs=1,
    per_gpu_train_batch_size=64,
    save_steps=10_000,
    save_total_limit=2,
    prediction_loss_only=True,
)
trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    ...
```
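One likely culprit in the snippet above: `per_gpu_train_batch_size` was deprecated and later removed from `transformers` in favor of `per_device_train_batch_size`, so passing it can raise a `TypeError` on recent versions. A minimal sketch of the updated arguments (the `Trainer` call is commented out because `model`, `data_collator`, and `train_dataset` are assumed to be defined earlier in the thread):

```python
from transformers import Trainer, TrainingArguments

# Sketch only: per_gpu_train_batch_size no longer exists in newer
# transformers releases; per_device_train_batch_size is the current name.
training_args = TrainingArguments(
    output_dir="./roberta-retrained",
    overwrite_output_dir=True,
    num_train_epochs=1,
    per_device_train_batch_size=64,  # renamed from per_gpu_train_batch_size
    save_steps=10_000,
    save_total_limit=2,
    prediction_loss_only=True,
)

# trainer = Trainer(
#     model=model,                  # assumed: defined earlier in the thread
#     args=training_args,
#     data_collator=data_collator,  # assumed: defined earlier in the thread
#     train_dataset=train_dataset,  # assumed: a tokenized dataset
# )
# trainer.train()
```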
Hello!

## Pull Request overview
* Add "Blazing Fast SetFit Inference on Intel Xeon" blog post

## Details
This blog post follows the [notebook](https://github.com/huggingface/setfit/pull/489) contributed by Intel, showcasing the performance optimizations possible...
Change "input_dict" to "sample" since "input_dict" is not defined above.
I'm attempting to fine-tune Whisper using the excellent Hugging Face tutorial: https://huggingface.co/blog/fine-tune-whisper. The delta between the tutorial's case and mine is that I am using English, which has 1M...
I keep referring to things like these and have to search for them every time :)
Add a new blog post: How to use HF endpoints to run Concrete-ML privacy-preserving ML models. -- Hello Hugging Face people. This is a proposed blog post to explain how...
I was recently deploying Hugging Face models on the Triton Inference Server, which helped me increase my GPU utilization and serve multiple models from a single GPU. Was not...
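For readers curious about the setup mentioned above: Triton serves each model from a model-repository directory containing a `config.pbtxt`. A hedged sketch of what such a config might look like for an ONNX-exported Hugging Face text model — all names, shapes, and the instance count here are illustrative assumptions, not details from the post:

```
name: "distilbert_onnx"          # hypothetical model directory name
platform: "onnxruntime_onnx"     # backend for an ONNX-exported model
max_batch_size: 8
input [
  {
    name: "input_ids"            # assumed export-time input name
    data_type: TYPE_INT64
    dims: [ -1 ]                 # variable sequence length
  },
  {
    name: "attention_mask"
    data_type: TYPE_INT64
    dims: [ -1 ]
  }
]
output [
  {
    name: "logits"               # assumed export-time output name
    data_type: TYPE_FP32
    dims: [ -1 ]
  }
]
instance_group [
  { count: 2, kind: KIND_GPU }   # two model instances sharing one GPU
]
```

The `instance_group` block is what enables the GPU-sharing behavior described: multiple instances (of the same or different models) can be scheduled onto a single device.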