
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.

Results: 2,036 transformers issues

# What does this PR do? @regisss @ydshieh ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if...

### System Info ``` - `transformers` version: 4.46.0 - Platform: Linux-5.15.0-136-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.29.2 - Safetensors version: 0.5.3 - Accelerate version: 1.4.0 - Accelerate config:...

Good First Issue
bug

# What does this PR do? Fixes behavior introduced in the 4.52 VLM refactor #37033, which renamed VLM model weights but forgot about adapters. (Theoretically, it seems the problem was just...

# What does this PR do? Fixes #30757 This PR solves the problem of passing already-tokenized input to the `TokenClassificationPipeline` by adding a tokenizer parameter called `is_split_into_words`. I...
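For context, a minimal sketch of how the proposed parameter might be used once the PR lands, assuming the pipeline forwards `is_split_into_words` to the underlying tokenizer; the checkpoint `dslim/bert-base-NER` and the exact call shape are illustrative assumptions, not taken from the PR:

```python
from transformers import pipeline

# Illustrative checkpoint; any token-classification model would do.
ner = pipeline("token-classification", model="dslim/bert-base-NER")

# Input that has already been split into words upstream.
words = ["My", "name", "is", "Wolfgang", "and", "I", "live", "in", "Berlin"]

# Hypothetical usage per the PR description: tell the pipeline (and thus the
# tokenizer) that the input is pre-split so it is not re-split into words.
entities = ner(words, is_split_into_words=True)
print(entities)
```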

# What does this PR do? Minor updates to `granite_speech` to enable fine-tuning it with HF trainers. - avoids a crash when trainers pass `padding=True` to the processor - ensures...

### System Info - `transformers` version: 4.51.3 - Platform: Linux-6.14.6-arch1-1-x86_64-with-glibc2.41 - Python version: 3.12.10 - Huggingface_hub version: 0.30.2 - Safetensors version: 0.5.3 - Accelerate version: not installed - Accelerate config:...

bug

# What does this PR do? This PR integrates `xLSTM` via the `xlstm` library, including certain optimizations (potentially using torch.compile and CUDA graphs for speed-ups). This enables using the `NX-AI/xLSTM-7b`...
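Assuming the integration exposes the checkpoint through the standard Auto classes (the PR excerpt does not show the final API), loading it would look like the usual causal-LM flow; the dtype and generation settings below are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NX-AI/xLSTM-7b"  # checkpoint named in the PR

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; pick what fits your hardware
    device_map="auto",
)

inputs = tokenizer("xLSTM is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```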

# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks...

# What does this PR do? Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if...

### Feature request Will transformers support a dynamic quantization config for bitsandbytes? Currently transformers supports HQQ dynamic quantization via ```python q4_config = {"nbits": 4, "group_size": 64} q8_config = {"nbits": 8, "group_size":...

Feature request
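The feature request above refers to the existing per-layer ("dynamic") HQQ configuration. A minimal sketch of that pattern, assuming `HqqConfig(dynamic_config=...)` as in the HQQ quantization docs; the q8 `group_size`, the layer-name tags, and the checkpoint are illustrative completions, since the excerpt is truncated:

```python
from transformers import AutoModelForCausalLM, HqqConfig

# Per-layer quantization settings, as in the excerpt above.
q4_config = {"nbits": 4, "group_size": 64}
q8_config = {"nbits": 8, "group_size": 64}  # group_size assumed; the excerpt is cut off

# Map layer-name tags to quant configs; the tags below are illustrative.
quant_config = HqqConfig(
    dynamic_config={
        "self_attn.q_proj": q4_config,
        "self_attn.k_proj": q4_config,
        "self_attn.v_proj": q4_config,
        "self_attn.o_proj": q4_config,
        "mlp.gate_proj": q8_config,
        "mlp.up_proj": q8_config,
        "mlp.down_proj": q8_config,
    }
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B",  # illustrative checkpoint, not from the issue
    device_map="auto",
    quantization_config=quant_config,
)
```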