MLServer
build(deps-dev): bump transformers from 4.41.2 to 4.52.4
Bumps transformers from 4.41.2 to 4.52.4.
Release notes
Sourced from transformers' releases.
Patch release: v4.52.4
The following commits are included in that patch release:
- [qwen-vl] Look for vocab size in text config (#38372)
- Fix convert to original state dict for VLMs (#38385)
- [video utils] group and reorder by number of frames (#38374)
- [paligemma] fix processor with suffix (#38365)
- Protect get_default_device for torch<2.3 (#38376)
- [OPT] Fix attention scaling (#38290)
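One of the commits above protects `get_default_device` on torch<2.3, where that API does not exist. As a minimal, hypothetical sketch of that version-guard pattern (the names `parse_version` and `default_device` are illustrative, not from the actual patch), the idea is to compare the installed version before calling the newer API and fall back otherwise:

```python
# Hypothetical sketch of guarding an API that only exists in newer torch,
# analogous to the "Protect get_default_device for torch<2.3" fix above.

def parse_version(v: str) -> tuple:
    """Parse a dotted version string like '2.3.1' into a comparable tuple."""
    return tuple(int(part) for part in v.split(".")[:2] if part.isdigit())

def default_device(torch_like) -> str:
    """Use get_default_device() on torch >= 2.3, else fall back to 'cpu'."""
    if parse_version(torch_like.__version__) >= (2, 3):
        return str(torch_like.get_default_device())
    return "cpu"

# Simulated modules standing in for old and new torch installs:
from types import SimpleNamespace
old = SimpleNamespace(__version__="2.1.0")
new = SimpleNamespace(__version__="2.3.1", get_default_device=lambda: "cuda:0")
print(default_device(old))  # cpu
print(default_device(new))  # cuda:0
```

A real implementation would compare against `torch.__version__` directly (or use `packaging.version`), but the guard-and-fallback shape is the same.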
Patch release v4.52.3
We had to protect the imports again after a series of unfortunate events. Here are the two PRs included in the patch:
- Fix tp error when torch distributed is already initialized (#38294) by @SunMarc
- Protect ParallelInterface (#38262) by @ArthurZucker and @LysandreJik

Patch release v4.52.2
We had to revert #37877 because of a missing flag that was overriding the device map. We re-introduced the changes because they allow native 3D parallel training in Transformers. Sorry everyone for the troubles! 🤗
- Clearer error on import failure (#38257) by @LysandreJik
- Verified tp plan should not be NONE (#38255) by @NouamaneTazi and @ArthurZucker

Patch release v4.51.3
A mix of bugs was fixed in this patch; very exceptionally, we diverge from semantic versioning to merge GLM-4 in this patch release.
Patch Release 4.51.2
This is another round of bug fixes, but these are much more minor and model outputs were not really affected!
- Fix Llama4 offset (#37414) by @Cyrilvallez
- Attention Quantization with FBGemm & TP (#37384) by @MekkCyber
- use rms_norm_eps for the L2Norm for Llama4 (#37418) by @danielhanchen
- mark llama4 as not supported with fa2 (#37416) by @winglian

Patch release v4.51.1
Since the release of Llama 4, we have fixed a few issues that we are now releasing in patch v4.51.1
... (truncated)
Commits
- 51f94ea Release: v4.52.4
- cdf04ff [qwen-vl] Look for vocab size in text config (#38372)
- 2842b82 Fix convert to original state dict for VLMs (#38385)
- 24c6d5b [video utils] group and reorder by number of frames (#38374)
- 222af35 [paligemma] fix processor with suffix (#38365)
- 7c34e2c Protect get_default_device for torch<2.3 (#38376)
- 66d32ab [OPT] Fix attention scaling (#38290)
- f4fc422 v4.52.3
- 48459c9 Fix tp error when torch distributed is already initialized (#38294)
- 597e159 Protect ParallelInterface (#38262)
- Additional commits viewable in compare view
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- @dependabot rebase will rebase this PR
- @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
- @dependabot merge will merge this PR after your CI passes on it
- @dependabot squash and merge will squash and merge this PR after your CI passes on it
- @dependabot cancel merge will cancel a previously requested merge and block automerging
- @dependabot reopen will reopen this PR if it is closed
- @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
- @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)