blog
Public repo for HF blog posts
Following the great guide https://huggingface.co/blog/Llama2-for-non-engineers, I've noticed that for a first-time user who wants to follow the blog steps exactly with the recommended 'A10G Large' backend, the...
Hi, I have tried to fine-tune the Whisper model using your code and it worked perfectly, but there is a problem: as the common_voice DatasetDict grows in size, my RAM fills up and...
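
Not part of the original question, but one way to keep RAM flat is to stream Common Voice instead of materializing the whole DatasetDict in memory; a minimal sketch, assuming the mozilla-foundation/common_voice_11_0 dataset and the Hindi config used in the blog post:

```python
from datasets import load_dataset, Audio

# Stream the dataset so examples are read lazily instead of being
# loaded into RAM all at once (language config is a placeholder).
common_voice = load_dataset(
    "mozilla-foundation/common_voice_11_0",
    "hi",
    split="train",
    streaming=True,
)
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16_000))

# With streaming=True this is an IterableDataset; any .map() preprocessing
# is applied on the fly, so the features never sit in memory together.
```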
I am facing some issues when using DeepSpeed for fine-tuning the StarCoder model. I am following exactly the steps mentioned in this article [Creating a Coding Assistant with StarCoder](https://huggingface.co/blog/starchat-alpha)...
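
Not from the StarChat article itself, but for anyone debugging the integration: the Hugging Face Trainer accepts a DeepSpeed config either as a path to a JSON file or as a plain Python dict. A minimal ZeRO stage-2 sketch under that assumption (output path and batch sizes are placeholders):

```python
from transformers import TrainingArguments

# Minimal ZeRO stage-2 config passed inline as a dict; "auto" values are
# filled in by the Trainer from its own arguments.
ds_config = {
    "bf16": {"enabled": "auto"},
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {"device": "cpu"},
    },
    "gradient_accumulation_steps": "auto",
    "train_micro_batch_size_per_gpu": "auto",
}

training_args = TrainingArguments(
    output_dir="./starcoder-finetune",  # hypothetical output path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    bf16=True,
    deepspeed=ds_config,  # a dict or a path to a JSON file both work
)
```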
Good day. I followed this blog post https://huggingface.co/blog/fine-tune-whisper and used it with a Hausa dataset, and the training completed successfully. My issue now is that any time I run this...
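
Unrelated to the root cause, but a minimal inference sketch for sanity-checking the fine-tuned checkpoint (the model id and audio path below are placeholders, not the poster's actual files):

```python
from transformers import pipeline

# Placeholder model id; substitute your own fine-tuned Whisper checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/whisper-small-ha",
)

# Transcribe a local audio file (path is illustrative).
result = asr("sample_hausa_clip.wav")
print(result["text"])
```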
https://huggingface.co/blog/simple_sdxl_optimizations Is there a typo in the memory usage reported for the case where the text embeddings are precomputed? Compared with the default (fp16 + SDPA) it uses slightly more memory and...
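
For context, the precomputed-embeddings variant being compared looks roughly like the sketch below; it assumes StableDiffusionXLPipeline.encode_prompt and a CUDA device, and the prompt is illustrative:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "Astronaut in a jungle, cold color palette, detailed, 8k"

# Precompute the text embeddings once, then reuse them for generation.
(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipe.encode_prompt(prompt=prompt)

image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
).images[0]
```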
```python
import torch
from huggingface_hub import model_info
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

model_path = "pcuenq/pokemon-lora"
info = model_info(model_path)
model_base = info.cardData["base_model"]

pipe = StableDiffusionPipeline.from_pretrained(model_base, torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.unet.load_attn_procs(model_path)
pipe.to("cuda")

image = pipe("Green pokemon with menacing face", num_inference_steps=25).images[0]
image.save("green_pokemon.png")
pipe("Green pokemon with...
```
Thank you very much
I would greatly appreciate your help with this error. Here is the [tutorial](https://huggingface.co/blog/fine-tune-whisper) that I followed. Thanks in advance.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v2-el",  # change to...
```
Hello guys, I have a question regarding [Fine-tuning XLS-R for Multi-Lingual ASR with 🤗 Transformers](https://github.com/huggingface/blog/blob/main/fine-tune-xlsr-wav2vec2.md#fine-tuning-xls-r-for-multi-lingual-asr-with--transformers). Do you have separate bash scripts for training and evaluation? Could you share them, please...
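
Not an official script, but a minimal evaluation sketch for computing word error rate with the evaluate library (the checkpoint id and test split below are placeholders, not taken from the blog post):

```python
import torch
import evaluate
from datasets import load_dataset, Audio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Placeholder checkpoint; substitute your fine-tuned XLS-R model.
model_id = "your-username/wav2vec2-xls-r-300m-finetuned"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id).eval()

wer = evaluate.load("wer")

# Placeholder test set; use the same split you held out during training.
test_ds = load_dataset("mozilla-foundation/common_voice_11_0", "tr", split="test")
test_ds = test_ds.cast_column("audio", Audio(sampling_rate=16_000))

predictions, references = [], []
for sample in test_ds:
    inputs = processor(sample["audio"]["array"], sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    pred_ids = torch.argmax(logits, dim=-1)
    predictions.append(processor.batch_decode(pred_ids)[0])
    references.append(sample["sentence"])

print("WER:", wer.compute(predictions=predictions, references=references))
```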
Trying to reproduce the [Graph Classification with Transformers](https://github.com/huggingface/blog/blob/main/notebooks/graphml-classification.ipynb) notebook, I am getting the following error even with all the required libraries installed.

```python
ImportError                               Traceback (most recent call last)
 in ()...
```