
Public repo for HF blog posts

Results: 236 issues, sorted by recently updated

The system prompt in the [llama2 blog post](https://huggingface.co/blog/llama2) contains an extra space and newline compared to the [original](https://github.com/facebookresearch/llama/blob/6c7fe276574e78057f917549435a2554000a876d/llama/generation.py#L213C25-L213C25) implementation.
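For context, a minimal sketch of how the reference implementation wraps the system prompt, using the delimiter constants from the linked `generation.py`; any extra space or newline inside these strings produces a different token sequence:

```python
# Delimiters from the reference generation.py
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def build_prompt(system_prompt: str, user_message: str) -> str:
    # The system prompt is folded into the first user turn; a stray space or
    # newline here changes the tokenized prompt.
    return f"{B_INST} {B_SYS}{system_prompt}{E_SYS}{user_message} {E_INST}"

print(repr(build_prompt("You are a helpful assistant.", "Hello!")))
```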

I followed the code in the [Graph Classification](https://github.com/huggingface/blog/blob/main/graphml-classification.md) post and tried to run it on an A100 80GB with an Intel(R) Xeon(R) Gold 5320 CPU and CUDA 11.1. ``` datasets 2.11.0 transformers 4.28.1...

The value ends up being the same, but this may be clearer for the reader: for mantissa bits 1010 we have `(2^-1 + 0 + 2^-3 + 0) = (0.5 + 0.125)...
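A minimal sketch of the arithmetic being discussed, assuming the usual convention that mantissa bit `i` (0-indexed from the left) contributes `2^-(i+1)` when set:

```python
def mantissa_fraction(bits: str) -> float:
    # Sum the contribution of each set mantissa bit.
    return sum(2 ** -(i + 1) for i, b in enumerate(bits) if b == "1")

print(mantissa_fraction("1010"))  # 0.5 + 0.125 = 0.625
```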

Draft of the generative AI enterprise post that @rajshah4 and I have been working on -- would love to hear @jeffboudier 's thoughts on this!

We recently added a new community pipeline that enables IPEX acceleration of Stable Diffusion in the latest Diffusers release ([v0.17.0](https://github.com/huggingface/diffusers/releases/tag/v0.17.0)): [Accelerate inference of stable diffusion by IPEX on CPU](https://github.com/huggingface/diffusers/pull/3105). Different...
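A minimal sketch of how a community pipeline like this is typically loaded in Diffusers; the pipeline name `stable_diffusion_ipex` and the model id are assumptions, and the IPEX-specific preparation steps are described in the linked PR:

```python
from diffusers import DiffusionPipeline

# Community pipelines are loaded by name via `custom_pipeline`;
# "stable_diffusion_ipex" is assumed to match the pipeline added in the linked PR.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="stable_diffusion_ipex",
)
# The IPEX-specific optimization step to run before inference is documented
# in the linked PR and the community pipelines README.
```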

Hi @sayakpaul @osanseviero, here is the outline proposal for the blog we discussed. Let me know what you think! 1. How the Hugging Face ecosystem (transformers, diffusers, etc.) helps access state-of-the-art models...

Hi, I trained a Falcon model and already set the push_to_hub parameter in the training arguments, but it is not working. ``` from transformers import TrainingArguments output_dir = "chatb_f" per_device_train_batch_size = 4 gradient_accumulation_steps =...
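For reference, a minimal sketch of how `push_to_hub` is usually wired up with `TrainingArguments` (the values mirror the truncated snippet and the Hub repo name is hypothetical); the model is only uploaded when a checkpoint is saved or when `trainer.push_to_hub()` is called explicitly:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="chatb_f",                  # value from the issue snippet
    per_device_train_batch_size=4,
    push_to_hub=True,                      # enables uploading to the Hub
    hub_model_id="your-username/chatb_f",  # hypothetical target repo
)

# ... build the Trainer as in the issue (model, dataset, etc.), then:
# trainer.train()
# trainer.push_to_hub()  # explicit upload of the final model if nothing was pushed during training
```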

Correct typos: change '质' (quality) to '秩' (rank).