
Public repo for HF blog posts

Results: 236 blog issues, sorted by recently updated

Whenever I send an HTTP request to a text2image model or use an Inference Endpoint, I get a Gateway Timeout error: ![image](https://github.com/huggingface/blog/assets/128567031/3df5be9e-7062-4a3d-8cad-917108540c6e)
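Gateway timeouts from hosted inference are often transient (e.g. a model cold start or a long generation), so a common workaround is to retry with exponential backoff. A minimal sketch; `with_retries` and `flaky_request` are hypothetical names, with the real HTTP call stubbed out:

```python
import time

def with_retries(call, max_attempts=4, base_delay=1.0, retryable=(TimeoutError,)):
    """Retry `call` with exponential backoff on retryable errors."""
    for attempt in range(max_attempts):
        try:
            return call()
        except retryable:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))

# Stand-in for the real request to the endpoint: fails twice, then succeeds.
attempts = {"n": 0}
def flaky_request():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError("504 Gateway Timeout")
    return "image bytes"

result = with_retries(flaky_request, base_delay=0.01)
```

If the timeout persists across retries, the request itself is likely too slow for the gateway limit rather than transiently failing.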

Hi, I am trying to fine-tune Whisper following the blog post [here](https://huggingface.co/blog/fine-tune-whisper). The fine-tuning works great in a single-GPU scenario; however, it fails on multi-GPU instances. While executing...

Dear authors, I have read your blog at https://huggingface.co/blog/autoformer; it does a great job of explaining why the Transformer is better than DLinear. However, I am wondering how to train my own Autoformer...

Hi, thanks to the authors for the work. I am trying to achieve image-text matching with BLIP-2, but I didn't find any examples of that. Can you give me some...
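Conceptually, image-text matching scores how well a caption fits an image; one common reduction is cosine similarity between image and text embeddings in a shared space. A toy sketch, not the BLIP-2 API — the embeddings below are made-up stand-ins for encoder outputs:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical pre-computed embeddings (in practice, outputs of the vision
# and text encoders projected into a shared space).
image_emb = [0.9, 0.1, 0.0]
captions = {
    "a photo of a cat": [0.8, 0.2, 0.1],
    "a stock market chart": [0.0, 0.1, 0.9],
}

# Pick the caption whose embedding best matches the image.
best = max(captions, key=lambda c: cosine(image_emb, captions[c]))
```

BLIP-2 additionally has a dedicated matching head that scores image-text pairs directly, but the ranking-by-similarity idea is the same.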

Hello, this is an amazing package, thanks so much. We have datasets with more than one time series that we'd like to build models from. For example, instead of 1D...
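The multivariate setup the issue is asking about usually means stacking the aligned 1-D series along a feature dimension, giving one series of shape (time, features). A minimal sketch with hypothetical data:

```python
# Two aligned 1-D series (hypothetical daily values over the same dates).
sales = [10.0, 12.0, 11.0, 13.0]
visits = [100.0, 120.0, 115.0, 130.0]

# Stack them into one multivariate series of shape (time, features),
# which is the layout multivariate forecasting models typically expect.
multivariate = [list(step) for step in zip(sales, visits)]
```

Each row is now one timestep with one value per series; the model then forecasts all features jointly instead of each 1-D series independently.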

I added `!pip install accelerate` to the tutorial. I did this because I encountered an error with `Seq2SeqTrainingArguments` that required Accelerate.

I use the pretrained checkpoint `facebook/detr-resnet-50`. How can I use mAP as the evaluation metric?

```python
checkpoint = "facebook/detr-resnet-50"
model = AutoModelForObjectDetection.from_pretrained(
    checkpoint,
    ...,
    ignore_mismatched_sizes=True,
)
metric = evaluate.load('repllabs/mean_average_precision')

def compute_metrics(eval_pred):
    logits,...
```
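The Hub metric's API is cut off in the excerpt, but the quantity it computes reduces, per class, to something like this toy average precision: rank predictions by score, match each to an unmatched ground-truth box by IoU, and average precision over the recall points. A self-contained sketch (`iou` and `average_precision` are illustrative names, single class, single IoU threshold):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def average_precision(preds, gts, thr=0.5):
    """preds: list of (score, box); gts: list of boxes; one class."""
    preds = sorted(preds, key=lambda p: -p[0])  # rank by confidence
    matched, tps, precisions = set(), 0, []
    for i, (score, box) in enumerate(preds, start=1):
        hit = next((j for j, g in enumerate(gts)
                    if j not in matched and iou(box, g) >= thr), None)
        if hit is not None:
            matched.add(hit)
            tps += 1
            precisions.append(tps / i)  # precision at each new recall point
    return sum(precisions) / len(gts) if gts else 0.0

# Both predictions match their ground-truth boxes, so AP is 1.0.
ap = average_precision(
    preds=[(0.9, (0, 0, 10, 10)), (0.7, (20, 20, 30, 30))],
    gts=[(0, 0, 10, 10), (20, 20, 30, 30)],
)
```

Real mAP implementations average this over classes and a sweep of IoU thresholds; the Hub metric presumably handles that bookkeeping for you.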

Hi, I'm trying to reproduce the section "[How do I run it locally?](https://huggingface.co/blog/personal-copilot#how-do-i-run-it-locally)" from this blog post: [Personal Copilot: Train Your Own Coding Assistant (huggingface.co)](https://huggingface.co/blog/personal-copilot) When I execute step number...

It’s important to avoid making claims that depend on thousands of variables changing each day, such as, “With a context length of over 8,000 tokens, the StarCoder models...

@patrickvonplaten I was recently tinkering with the generation strategies for a decoder, and came across this [blog post](https://huggingface.co/blog/how-to-generate) on decoding methods. It was quite useful for clarifying how generation works...
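Two of the decoding strategies that post compares can be illustrated without a model, operating on a single step of toy logits. A sketch under that assumption — the logits below are made up, and the function names are illustrative:

```python
import math
import random

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def greedy(logits):
    """Greedy search: always pick the argmax token."""
    return max(range(len(logits)), key=lambda i: logits[i])

def top_k_sample(logits, k, rng):
    """Top-k sampling: renormalize over the k best tokens, then draw one."""
    top = sorted(range(len(logits)), key=lambda i: -logits[i])[:k]
    probs = softmax([logits[i] for i in top])
    return rng.choices(top, weights=probs, k=1)[0]

# One decoding step over a 4-token vocabulary.
logits = [2.0, 1.0, 0.5, -1.0]
rng = random.Random(0)
g = greedy(logits)                      # always token 0
s = top_k_sample(logits, k=2, rng=rng)  # token 0 or 1, weighted by probability
```

Greedy is deterministic and tends to repeat itself; top-k keeps some randomness while excluding the low-probability tail, which is the trade-off the blog post walks through.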