fix(deps): update dependency transformers to v4.52.3
This PR contains the following updates:
| Package | Type | Update | Change |
|---|---|---|---|
| transformers | extras | minor | 4.51.3 -> 4.52.3 |

Generated Summary

No relevant changes.

This summary was generated with ❤️ by rigging
Release Notes
huggingface/transformers (transformers)
v4.52.3: Patch release v4.52.3
Patch release v4.52.3
We had to protect the imports again after a series of unfortunate events. Here are the two PRs for the patch:
- Fix tp error when torch distributed is already initialized (#38294) by @SunMarc
- Protect ParallelInterface (#38262) by @ArthurZucker and @LysandreJik
v4.52.2: Patch release v4.52.2
Patch release v4.52.2
We had to revert #37877 because of a missing flag that was overriding the device map. We re-introduced the changes because they allow native 3D parallel training in Transformers. Sorry everyone for the troubles! 🤗
- Clearer error on import failure (#38257) by @LysandreJik
- Verified tp plan should not be NONE (#38255) by @NouamaneTazi and @ArthurZucker
v4.52.1: Qwen2.5-Omni, SAM-HQ, GraniteMoeHybrid, D-FINE, CSM, BitNet, LlamaGuard, TimesFM, MLCD, Janus, InternVL
New models
Qwen2.5-Omni
The Qwen2.5-Omni model is a unified multimodal model proposed in the Qwen2.5-Omni Technical Report from the Qwen team, Alibaba Group.
The abstract from the technical report is the following:
We present Qwen2.5-Omni, an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. To enable the streaming of multimodal information inputs, both audio and visual encoders utilize a block-wise processing approach. This strategy effectively decouples the handling of long sequences of multimodal data, assigning the perceptual responsibilities to the multimodal encoder and entrusting the modeling of extended sequences to a large language model.
Such a division of labor enhances the fusion of different modalities via the shared attention mechanism. To synchronize the timestamps of video inputs with audio, we organized the audio and video sequentially in an interleaved manner and propose a novel position embedding approach, named TMRoPE (Time-aligned Multimodal RoPE). To concurrently generate text and speech while avoiding interference between the two modalities, we propose Thinker-Talker architecture.
In this framework, Thinker functions as a large language model tasked with text generation, while Talker is a dual-track autoregressive model that directly utilizes the hidden representations from the Thinker to produce audio tokens as output. Both the Thinker and Talker models are designed to be trained and inferred in an end-to-end manner. For decoding audio tokens in a streaming manner, we introduce a sliding-window DiT that restricts the receptive field, aiming to reduce the initial package delay. Qwen2.5-Omni outperforms the similarly sized Qwen2-VL and Qwen2-Audio in both image and audio capabilities. Furthermore, Qwen2.5-Omni achieves state-of-the-art performance on multimodal benchmarks like Omni-Bench.
Notably, Qwen2.5-Omni is the first open-source model to achieve a level of performance in end-to-end speech instruction following that is comparable to its capabilities with text inputs, as evidenced by benchmarks such as MMLU and GSM8K. As for speech generation, Qwen2.5-Omni’s streaming Talker outperforms most existing streaming and non-streaming alternatives in robustness and naturalness.
SAM-HQ
SAM-HQ (High-Quality Segment Anything Model) was proposed in Segment Anything in High Quality by Lei Ke, Mingqiao Ye, Martin Danelljan, Yifan Liu, Yu-Wing Tai, Chi-Keung Tang, Fisher Yu.
The model is an enhancement to the original SAM model that produces significantly higher quality segmentation masks while maintaining SAM's original promptable design, efficiency, and zero-shot generalizability.

SAM-HQ introduces several key improvements over the original SAM model:
- High-Quality Output Token: A learnable token injected into SAM's mask decoder for higher quality mask prediction
- Global-local Feature Fusion: Combines features from different stages of the model for improved mask details
- Training Data: Uses a carefully curated dataset of 44K high-quality masks instead of SA-1B
- Efficiency: Adds only 0.5% additional parameters while significantly improving mask quality
- Zero-shot Capability: Maintains SAM's strong zero-shot performance while improving accuracy
The abstract from the paper is the following:
The recent Segment Anything Model (SAM) represents a big leap in scaling up segmentation models, allowing for powerful zero-shot capabilities and flexible prompting. Despite being trained with 1.1 billion masks, SAM's mask prediction quality falls short in many cases, particularly when dealing with objects that have intricate structures. We propose HQ-SAM, equipping SAM with the ability to accurately segment any object, while maintaining SAM's original promptable design, efficiency, and zero-shot generalizability. Our careful design reuses and preserves the pre-trained model weights of SAM, while only introducing minimal additional parameters and computation. We design a learnable High-Quality Output Token, which is injected into SAM's mask decoder and is responsible for predicting the high-quality mask. Instead of only applying it on mask-decoder features, we first fuse them with early and final ViT features for improved mask details. To train our introduced learnable parameters, we compose a dataset of 44K fine-grained masks from several sources. HQ-SAM is only trained on the introduced dataset of 44k masks, which takes only 4 hours on 8 GPUs.
Tips:
- SAM-HQ produces higher quality masks than the original SAM model, particularly for objects with intricate structures and fine details
- The model predicts binary masks with more accurate boundaries and better handling of thin structures
- Like SAM, the model performs better with input 2D points and/or input bounding boxes
- You can prompt multiple points for the same image and predict a single high-quality mask
- The model maintains SAM's zero-shot generalization capabilities
- SAM-HQ only adds ~0.5% additional parameters compared to SAM
- Fine-tuning the model is not supported yet
GraniteMoeHybrid
The GraniteMoeHybrid model builds on top of GraniteMoeSharedModel and Bamba. Its decoding layers consist of state space layers or MoE attention layers with shared experts. By default, the attention layers do not use positional encoding.
D-FINE
The D-FINE model was proposed in D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement by Yansong Peng, Hebei Li, Peixi Wu, Yueyi Zhang, Xiaoyan Sun, Feng Wu
The abstract from the paper is the following:
We introduce D-FINE, a powerful real-time object detector that achieves outstanding localization precision by redefining the bounding box regression task in DETR models. D-FINE comprises two key components: Fine-grained Distribution Refinement (FDR) and Global Optimal Localization Self-Distillation (GO-LSD). FDR transforms the regression process from predicting fixed coordinates to iteratively refining probability distributions, providing a fine-grained intermediate representation that significantly enhances localization accuracy. GO-LSD is a bidirectional optimization strategy that transfers localization knowledge from refined distributions to shallower layers through self-distillation, while also simplifying the residual prediction tasks for deeper layers. Additionally, D-FINE incorporates lightweight optimizations in computationally intensive modules and operations, achieving a better balance between speed and accuracy. Specifically, D-FINE-L / X achieves 54.0% / 55.8% AP on the COCO dataset at 124 / 78 FPS on an NVIDIA T4 GPU. When pretrained on Objects365, D-FINE-L / X attains 57.1% / 59.3% AP, surpassing all existing real-time detectors. Furthermore, our method significantly enhances the performance of a wide range of DETR models by up to 5.3% AP with negligible extra parameters and training costs. Our code and pretrained models: this https URL.
CSM
The Conversational Speech Model (CSM) is the first open-source contextual text-to-speech model released by Sesame. It is designed to generate natural-sounding speech with or without conversational context. This context typically consists of multi-turn dialogue between speakers, represented as sequences of text and corresponding spoken audio.
Model Architecture: CSM is composed of two LLaMA-style auto-regressive transformer decoders: a backbone decoder that predicts the first codebook token and a depth decoder that generates the remaining tokens. It uses the pretrained codec model Mimi, introduced by Kyutai, to encode speech into discrete codebook tokens and decode them back into audio.
The original csm-1b checkpoint is available under the Sesame organization on Hugging Face.
BitNet
Trained on a corpus of 4 trillion tokens, this model demonstrates that native 1-bit LLMs can achieve performance comparable to leading open-weight, full-precision models of similar size, while offering substantial advantages in computational efficiency (memory, energy, latency).
LlamaGuard
Llama Guard 4 is a new multimodal model designed to detect inappropriate content in images and text, whether used as input or generated as output by the model. It’s a dense 12B model pruned from Llama 4 Scout model, and it can run on a single GPU (24 GBs of VRAM). It can evaluate both text-only and image+text inputs, making it suitable for filtering both inputs and outputs of large language models. This enables flexible moderation pipelines where prompts are analyzed before reaching the model, and generated responses are reviewed afterwards for safety. It can also understand multiple languages.
TimesFM
TimesFM (Time Series Foundation Model) is a pretrained time-series foundation model proposed in A decoder-only foundation model for time-series forecasting by Abhimanyu Das, Weihao Kong, Rajat Sen, and Yichen Zhou. It is a decoder-only model that takes non-overlapping patches of time-series data as input and autoregressively predicts output patches.
The abstract from the paper is the following:
Motivated by recent advances in large language models for Natural Language Processing (NLP), we design a time-series foundation model for forecasting whose out-of-the-box zero-shot performance on a variety of public datasets comes close to the accuracy of state-of-the-art supervised forecasting models for each individual dataset. Our model is based on pretraining a patched-decoder style attention model on a large time-series corpus, and can work well across different forecasting history lengths, prediction lengths and temporal granularities.
MLCD
The MLCD models were released by the DeepGlint-AI team in unicom, which focuses on building foundational visual models for large multimodal language models using large-scale datasets such as LAION400M and COYO700M, and employs sample-to-cluster contrastive learning to optimize performance. MLCD models are primarily used for multimodal visual large language models, such as LLaVA.
Janus
The Janus model was originally proposed in Janus: Decoupling Visual Encoding for Unified Multimodal Understanding and Generation by the DeepSeek AI team and later refined in Janus-Pro: Unified Multimodal Understanding and Generation with Data and Model Scaling. Janus is a vision-language model that can generate both image and text output; it can also take both images and text as input.
[!NOTE] The model doesn't generate both images and text in an interleaved format. The user has to pass a parameter indicating whether to generate text or image.
The abstract from the original paper is the following:
In this paper, we introduce Janus, an autoregressive framework that unifies multimodal understanding and generation. Prior research often relies on a single visual encoder for both tasks, such as Chameleon. However, due to the differing levels of information granularity required by multimodal understanding and generation, this approach can lead to suboptimal performance, particularly in multimodal understanding. To address this issue, we decouple visual encoding into separate pathways, while still leveraging a single, unified transformer architecture for processing. The decoupling not only alleviates the conflict between the visual encoder's roles in understanding and generation, but also enhances the framework's flexibility. For instance, both the multimodal understanding and generation components can independently select their most suitable encoding methods. Experiments show that Janus surpasses previous unified model and matches or exceeds the performance of task-specific models. The simplicity, high flexibility, and effectiveness of Janus make it a strong candidate for next-generation unified multimodal models.
The abstract from the aforementioned Janus-Pro paper, released afterwards, is the following:
In this work, we introduce Janus-Pro, an advanced version of the previous work Janus. Specifically, Janus-Pro incorporates (1) an optimized training strategy, (2) expanded training data, and (3) scaling to larger model size. With these improvements, Janus-Pro achieves significant advancements in both multimodal understanding and text-to-image instruction-following capabilities, while also enhancing the stability of text-to-image generation. We hope this work will inspire further exploration in the field. Code and models are publicly available.
InternVL
The InternVL3 family of Visual Language Models was introduced in InternVL3: Exploring Advanced Training and Test-Time Recipes for Open-Source Multimodal Models.
The abstract from the paper is the following:
We introduce InternVL3, a significant advancement in the InternVL series featuring a native multimodal pre-training paradigm. Rather than adapting a text-only large language model (LLM) into a multimodal large language model (MLLM) that supports visual inputs, InternVL3 jointly acquires multimodal and linguistic capabilities from both diverse multimodal data and pure-text corpora during a single pre-training stage. This unified training paradigm effectively addresses the complexities and alignment challenges commonly encountered in conventional post-hoc training pipelines for MLLMs. To further improve performance and scalability, InternVL3 incorporates variable visual position encoding (V2PE) to support extended multimodal contexts, employs advanced post-training techniques such as supervised fine-tuning (SFT) and mixed preference optimization (MPO), and adopts test-time scaling strategies alongside an optimized training infrastructure. Extensive empirical evaluations demonstrate that InternVL3 delivers superior performance across a wide range of multi-modal tasks. In particular, InternVL3-78B achieves a score of 72.2 on the MMMU benchmark, setting a new state-of-the-art among open-source MLLMs. Its capabilities remain highly competitive with leading proprietary models, including ChatGPT-4o, Claude 3.5 Sonnet, and Gemini 2.5 Pro, while also maintaining strong pure-language proficiency. In pursuit of open-science principles, we will publicly release both the training data and model weights to foster further research and development in next-generation MLLMs.
Overview of InternVL3 models architecture, which is the same as InternVL2.5. Taken from the original checkpoint.
Comparison of InternVL3 performance on OpenCompass against other SOTA VLLMs. Taken from the original checkpoint.
Kernel integration
We integrate some kernels in the transformers library via the kernels package: https://github.com/huggingface/kernels
We start with some kernels in the Llama model and will iterate to identify the best performance optimizations.
- Llama Kernel integration by @MekkCyber in #37092
- [kernels] use original forward at compile time by @gante in #37604
TP support
In the previous release, we added TP support in order to run distributed inference. However, this is not yet supported for all quantization methods; we are progressively adding support. Right now, only compressed-tensors, fp8, and fp8-fbgemm support it.
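As a rough sketch of what tensor-parallel inference looks like in practice (the checkpoint id is only an example, and the script is assumed to be launched with torchrun on a multi-GPU node; the same pattern would apply to a checkpoint pre-quantized with one of the supported methods above):

```python
# run_tp.py -- minimal tensor-parallel inference sketch.
# Launch with: torchrun --nproc-per-node 4 run_tp.py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # example checkpoint; swap in your own

# tp_plan="auto" asks transformers to shard supported layers across the GPUs
# in the torch.distributed world started by torchrun.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, tp_plan="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids.to(model.device)
outputs = model(input_ids)
print(outputs.logits.shape)
```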
- Attention Quantization with FBGemm & TP by @MekkCyber in #37384
- Restrict & Explain tp_plan for FBgemm by @MekkCyber in #37404
Quantization
AutoRound
From the AutoRound contributors:
AutoRound is an advanced quantization algorithm that delivers strong accuracy, even at 2-bit precision. It leverages sign gradient descent to fine-tune both rounding values and min-max clipping thresholds in just 200 steps ... More details here: https://github.com/intel/auto-round
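As a hedged sketch, a pre-quantized AutoRound checkpoint should load through the usual from_pretrained path once the auto-round package is installed; the repository id below is hypothetical and only illustrates the call:

```python
# Minimal sketch of loading an AutoRound-quantized checkpoint
# (assumes `pip install auto-round`; the repo id is hypothetical).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "my-org/Llama-3.1-8B-Instruct-AutoRound-int4"  # hypothetical repository id
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("AutoRound quantization", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```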
- Add AutoRound quantization support by @wenhuach21 in #37393
Quantization Documentation
We have added two new sections to better understand and get started with quantization:
- Add "selecting a quantization method" doc by @DerekLiu35 in #37159
- Update quantization docs by @DerekLiu35 in #37439
GGUF
We've added GGUF support to gemma3 family models.
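A minimal sketch of what loading such a checkpoint looks like; the repository id and filename below are illustrative placeholders rather than confirmed release assets, and the gguf package must be installed:

```python
# Minimal sketch of loading a Gemma3 GGUF checkpoint (requires `pip install gguf`).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "google/gemma-3-1b-it-qat-q4_0-gguf"  # illustrative repo id
filename = "gemma-3-1b-it-q4_0.gguf"            # illustrative GGUF filename

# The GGUF weights are dequantized into a regular transformers model in memory.
tokenizer = AutoTokenizer.from_pretrained(repo_id, gguf_file=filename)
model = AutoModelForCausalLM.from_pretrained(repo_id, gguf_file=filename)
```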
- Add GGUF support to Gemma3 Text backbone by @Isotr0py in #37424
- Support loading Gemma3 QAT GGUF models by @Isotr0py in #37649
Fast image processors
Most Vision Models and VLMs in Transformers can now benefit from fast image processors. By utilizing torch/torchvision functional transforms, these processors offer a substantial speedup when processing images compared to PIL/numpy functions, and support processing on both CPU and CUDA.
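As a small illustration, opting in is a single flag on from_pretrained; the checkpoint below is only an example, and torchvision needs to be installed:

```python
# Minimal sketch of using a fast image processor.
import torch
from transformers import AutoImageProcessor

processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50", use_fast=True)

# Fast processors work directly on torch tensors (as well as PIL/numpy images).
images = [torch.randint(0, 256, (3, 480, 640), dtype=torch.uint8)]
batch = processor(images=images, return_tensors="pt")
print(batch["pixel_values"].shape)
```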
- See the list of updated models: https://github.com/huggingface/transformers/issues/36978
- Learn more about fast image processors: Fast Image Processors
- Add Fast Image Processor for Perceiver by @rootonchair in #37176
- Add Fast Image Processor for Flava by @rootonchair in #37135
- Add Fast Image Processor for LayoutLMv2 by @rootonchair in #37203
- Add Fast Image Processor for LayoutLMv3 by @rootonchair in #37201
- Add Fast Image Processor for Donut by @rootonchair in #37081
- Add Fast LeViT Processor by @keetrap in #37154
- Add Fast Mobilenet-V2 Processor by @keetrap in #37113
- Add Fast owlvit Processor by @keetrap in #37164
- Add ImageProcessorFast to BiT processor by @Yann-CV in #37180
- Add Fast Yolos Processor by @keetrap in #37292
- Add Fast Chinese-CLIP Processor by @keetrap in #37012
- Add Fast Conditional-DETR Processor by @keetrap in #37071
- Fix broken add-fast-image-processor CLI by @yonigozlan in #37499
- Bridgetower fast image processor by @rootonchair in #37373
- Add Fast Grounding-Dino Processor by @keetrap in #37108
- Add Fast PVT Processor by @keetrap in #37204
- Add Fast Image Processor for PoolFormer by @rootonchair in #37182
- Add Fast Image Processor for MobileNetV1 by @dmdaksh in #37111
- Fast image processor for VitMatte added and bug in slow version fixed by @henrikm11 in #37616
- [Fast Processor] BEiT by @ariG23498 in #37005
- Add Swin2SR ImageProcessorFast by @thisisiron in #37169
- Add Fast Image Processor for vilt by @devxaitist in #37304
AutoDocstring
The new @auto_docstring decorator makes it easier to add proper documentation when contributing a model without bloating the modeling code:
- [AutoDocstring] Based on inspect parsing of the signature by @ArthurZucker and @yonigozlan in https://github.com/huggingface/transformers/pull/33771
- More info on how to use @auto_docstring: AutoDocstring
Custom generate
We now support custom generate methods loaded through model.generate. Custom generate methods can be stored on the Hub, enabling quick distribution of experiments with new caches, decoding methods, heuristics, ...
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# `generate` with `custom_generate` -> `generate` uses custom code
# note: calling the custom method prints "✨ using a custom generation method ✨"
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct", device_map="auto")
inputs = tokenizer(["The quick brown"], return_tensors="pt").to(model.device)
gen_out = model.generate(**inputs, custom_generate="transformers-community/custom_generate_example", trust_remote_code=True)
print(tokenizer.batch_decode(gen_out, skip_special_tokens=True))
```
You can find the docs here, and all custom generation methods by searching for the custom_generate tag.
Chat CLI
The transformers-cli command is updated to be simpler and cleaner, specifically for its chat variant.
The following is now possible and recommended:
transformers chat Qwen/Qwen2.5-3B-Instruct
Additionally, almost any generate flag, present and future, can now be passed as a positional argument, as opposed to being limited to a set of hardcoded flags, for example:
transformers chat Qwen/Qwen2.5-0.5B-Instruct do_sample=False max_new_tokens=10
- Transformers cli clean command by @LysandreJik in #37657
- [chat] clean code and add base help by @gante in #37892
- [chat] generate parameterization powered by `GenerationConfig` and UX-related changes by @gante in #38047
Breaking changes
- 🚨 rm already deprecated pad_to_max_length arg by @itazap in #37617
- 🚨🚨🚨 Fix forward of Dinov2ForImageClassification for models with registers by @psandovalsegura in #37836
- 🔴 [VLM] Add base model without head by @zucchini-nlp in #37033
- 🔴 Video processors as a separate class by @zucchini-nlp in #35206
- 🚨🚨 Allow saving and loading multiple "raw" chat template files by @Rocketknight1 in #36588
- 🔴 Update CLIP vision attention to new attention interface by @molbap in #37498
- 🚨🚨 Setup -> setupclass conversion by @Rocketknight1 in #37282
Deprecations
The agents folder is finally removed from transformers in favour of using smolagents.
We are moving away from torch 2.0, as it was released more than two years ago.
General bugfixes and improvements
- fix flex attn when optional args aren't passed by @winglian in #37327
- fix llama4 training by @hiyouga in #37319
- Fix deepspeed with quantization by @Cyrilvallez in #37324
- Fix `init empty weights` without accelerate by @Cyrilvallez in #37337
- Use Python 3.9 syntax in examples by @cyyever in #37279
- Fix torchao usage by @jiqing-feng in #37034
- enable 2 llama UT cases on xpu by @yao-matrix in #37126
- Avoid build crashes when torch.version.xpu doesn't exist and fix Llama4 processor tests by @Rocketknight1 in #37346
- fix derived berts `_init_weights` by @Cyrilvallez in #37341
- Update translation template by @stevhliu in #37294
- Remove HQQ from caching allocator warmup by @Cyrilvallez in #37347
- updated model card for Mistral by @NahieliV in #37156
- Update model-card for DINOv2 by @shubham0204 in #37104
- Update falcon mamba card by @ricalanis in #37253
- Update Model card for GPT2 by @ash-01xor in #37101
- Improvements in Gemma2 model card by @devesh-2002 in #37076
- Update Model Card for Jamba by @ParagEkbote in #37152
- Add bnb to the list of supported quantization methods for LLama4 by @MekkCyber in #37348
- Updated Model-card for donut by @Logeswaran7 in #37290
- Remove unnecessary attr assignment by @tugsbayasgalan in #36837
- more fixes for post-training llama4 by @winglian in #37329
- Fixing flex attention for torch=2.6.0 by @SalmanMohammadi in #37285
- Multiple llama4 fixes by @ArthurZucker in #37353
- Expose blip2qformer by @alex-jw-brooks in #37254
- convert float for yarn related arguments in rope_scaling by @bzantium in #37139
- Use Python 3.9 syntax in tests by @cyyever in #37343
- A bit of cleaning 🧹🧹 by @Cyrilvallez in #37215
- fix deepspeed job by @ydshieh in #37284
- Set vision config to None for Gemma 1B conversion by @RyanMullins in #37366
- [llama 4] dynamic rope decorator by @gante in #37365
- Skip non-selected experts for mixtral and qwen2_moe by @Coco58323 in #32429
- [core] remove `GenerationMixin` inheritance by default in `PreTrainedModel` by @gante in #37173
- prune LM Head for USD by @jmamou in #36695
- fix(qwen): fix shape error when using tp by @KimmiShi in #36947
- Preserve requires_grad in pre quantized model by @jerryzh168 in #37354
- Update composition flag usage by @zucchini-nlp in #36263
- fix: llama4 conversion script no_rope_layers by @jmkuebler in #37359
- update deepspeed docker by @SunMarc in #37371
- Fix warning message for PEFT models in text-generation pipeline #36783 by @falconlee236 in #36887
- Apply torchfix to replace deprecated functions: `_pytree._register_pytree_node` and `torch.cpu.amp.autocast` by @bzhong-solink in #37372
- Fix some failing AWQ tests by @DerekLiu35 in #37383
- the fix that did not get in by @ArthurZucker in #37370
- handle torch version edge cases by @winglian in #37399
- Add warning when failed to acquire other user's lock at model download by @manueldeprada in #37395
- Handle torch ver in flexattn by @Kh4L in #37400
- Fix Llama4 offset by @Cyrilvallez in #37414
- Offloaded hybrid cache for Llama4 by @Cyrilvallez in #37401
- mark llama4 as not supported with fa2 by @winglian in #37416
- update `kernels` to 0.4.3 by @ArthurZucker in #37419
- Send trainer/fsdp/deepspeed CI job reports to a single channel by @ydshieh in #37411
- from_pretrained should handle xpu case by @sywangyi in #37382
- Allow rocm systems to run these tests by @ivarflakstad in #37278
- use `rms_norm_eps` for the L2Norm for Llama4 by @ArthurZucker in #37418
- [chat-template] Unify tests and clean up 🧼 by @zucchini-nlp in #37275
- Fix new failure reports not including anything other than `tests/models/` by @ydshieh in #37415
- Quark Quantization gated repo by @MekkCyber in #37412
- Add image classifier donut & update loss calculation for all swins by @eljandoubi in #37224
- Correctly drop tokens in SwitchTransformer by @mario-aws in #37123
- Fix require_read_token by @MekkCyber in #37422
- fix: use mtime by default in Trainer._rotate_checkpoints with automatic fallback by @Jerry-Terrasse in #37260
- (Part 2) feat: allow for tp_size attr for tplizing the model by @kmehant in #37054
- Adding to self_comment_ci.yml by @MekkCyber in #37426
- [Feat] Support npu in modeling models by @duanjunwen in #37369
- Remove old code for PyTorch, Accelerator and tokenizers by @cyyever in #37234
- enhance require_deterministic_for_xpu by @yao-matrix in #37437
- Fixes: Corrects file path for CUDA kernels by @DonggeunYu in #37438
- Simplify soft dependencies and update the dummy-creation process by @LysandreJik in #36827
- Update-kernel-pin by @ArthurZucker in #37448
- Add moe kernels by @ArthurZucker in #37376
- Fix the test fetcher by @LysandreJik in #37452
- Remove triton mlp kernel, not compiling for some models by @MekkCyber in #37449
- [processor] clean up mulitmodal tests by @zucchini-nlp in #37362
- [Regression] Fix Quark quantized model loading after refactorization by @BowenBao in #37407
- prevent creating a view/leaf param for low rank optimizers w FSDP by @winglian in #37379
- Disable kernels for quantization by @MekkCyber in #37446
- Add weights_only=True to torch.load by @cyyever in #37062
- Add XPU case to is_torch_bf16_gpu_available by @cyyever in #37132
- nit: typing use Llama4TextConfig instead of Llama4Config by @kmehant in #37430
- Delete hubconf.py by @Rocketknight1 in #37455
- Fix typing issues with SigLip2 by @EricWiener in #37356
- fix: (llama4) fix no_split_modules to be picked up for fsdpv1 and v2 sharding by @kmehant in #37462
- make test_snowman_image_captioning pass on XPU, by sharing same atol w/ ROCM by @yao-matrix in #37480
- Remove `fsspec` dependency which isn't directly used by transformers by @cyyever in #37318
- Fix tests failed with gated repos by @ydshieh in #37484
- [ci] fix doc builder by @zucchini-nlp in #37489
- Fixed broken links by @cypherpepe in #37466
- Detect and fix most `_init_weights()` issues - make it work for composite models by @Cyrilvallez in #37070
- [bug] deprecated deta load_cuda_kernel, MultiScaleDeformableAttention by @chagmgang in #37443
- Fix mask handling for flex attention in llama/gemma2/mistral/qwen2 by @flukeskywalker in #37381
- Fix wrong argparse type in modular checker script by @seven-mile in #37472
- Fixing gated repo issues by @MekkCyber in #37463
- [qwen-omni] fix processor by @zucchini-nlp in #37493
- Remove deprecation warning for `num_logits_to_keep` by @Cyrilvallez in #37149
- Don't auto-assign reviewers when the author is in HF by @Rocketknight1 in #37500
- Detect and use device context manager or global device in `from_pretrained` by @Cyrilvallez in #37216
- Change default value of `attn_temperature_tuning` by @gmlwns2000 in #37501
- Llama4: remove redundant transpose of router_logits by @pbelevich in #37468
- fix: Restore explicit error surfacing for unexpected hub exceptions by @manueldeprada in #37525
- Fix missing return type for MLCD docs by @qubvel in #37527
- fix and enhance pipeline_webserver.md by @yao-matrix in #36992
- VDR task guide by @merveenoyan in #37485
- Update VITS model card by @princepride in #37335
- Refactor ColPali model documentation by @Soum-Soum in #37309
- enable 5 cases on XPU by @yao-matrix in #37507
- enable several cases on XPU by @yao-matrix in #37516
- enable `test_offloaded_cache_implementation` on XPU by @yao-matrix in #37514
- Fix BitsAndBytesConfig JSON serialization in TrainingArguments by @astefanutti in #37520
- enable 3 mpt test cases on XPU by @yao-matrix in #37546
- enable 6 rt_detr_v2 cases on xpu by @yao-matrix in #37548
- More appropriate cuda warmup in resource-constrained hardware by @Cyrilvallez in #37550
- Fixes hqq by following a new path for bias parameter in pre_quantized models by @MekkCyber in #37530
- convert scale and zero to cuda when using HQQ backend by @phymhan in #37425
- Keep Quark loading through meta device by @BowenBao in #37538
- Refactor torchao docs by @MekkCyber in #37490
- add FlashAttentionKwargs and seq_idx to flat collator by @garrett361 in #36456
- docs(typo): Update ISSUES.md, fix a small typo by @<NOT FOUND> in #37542
- Fix device issue for tapas (with `as_tensor`) by @ydshieh in #37551
- Make Ignored Columns ValueError More Informative by @wbuchanan in #33299
- Fix TimesFm doc issue by @Cyrilvallez in #37552
- Run `test_can_load_with_global_device_set` using a subprocess by @ydshieh in #37553
- Fix pixel attention mask padding in smolvlm by @ManuelFay in #37497
- [vlm] adjust max length for special tokens by @zucchini-nlp in #37342
- Add EfficientNet Image PreProcessor by @zshn25 in #37055
- Fix Mamba2 Grouped SSD Support in the torch_forward Path by @cyang49 in #37533
- All models can be initialized on meta device by @Cyrilvallez in #37563
- [chat template] fix security vulnerability by @zucchini-nlp in #37523
- [qwen-vl] Standardize config by @zucchini-nlp in #37268
- [TimesFM] use the main revison instead of revision for integration test by @kashif in #37558
- Fix qwen2audio wanr -> warn by @alex-jw-brooks in #37559
- Small fix on context manager detection by @Cyrilvallez in #37562
- [phi4] update conversion by @zucchini-nlp in #37579
- docs: fix typo by @tonyksong in #37567
- Ensure positive warm-up size by @Cyrilvallez in #37581
- Update Phi4 converter by @Cyrilvallez in #37594
- Fix Quark quantization config by @MekkCyber in #37578
- Gaudi: Add the bf16 support for hpu by @yuanwu2017 in #37568
- Fix some GPU OOM after #37553 by @ydshieh in #37591
- remove
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Enabled.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
- [ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Renovate Bot.