MLServer
build(deps): bump torch from 2.5.1 to 2.7.1 in /runtimes/huggingface
Bumps torch from 2.5.1 to 2.7.1.
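In practice the bump amounts to updating the pinned version in the runtime's dependency file. A sketch of the change, assuming a pip-style requirements pin (the actual file layout under runtimes/huggingface may differ):

```diff
# runtimes/huggingface — hypothetical requirements-style pin
-torch==2.5.1
+torch==2.7.1
```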
Release notes
Sourced from torch's releases.
PyTorch 2.7.1 Release, bug fix release
This release is meant to fix the following issues (regressions and silent correctness bugs):
Torch.compile
- Fix excessive cudagraph re-recording for HF LLM models (#152287)
- Fix torch.compile on some HuggingFace models (#151154)
- Fix crash due to Exception raised inside torch.autocast (#152503)
- Improve error logging in torch.compile (#149831)
- Mark mutable custom operators as cacheable in torch.compile (#151194)
- Implement workaround for a graph break with older versions of einops (#153925)
- Fix an issue with tensor.view(dtype).copy_(...) (#151598)
Flex Attention
- Fix assertion error due to inductor permuting inputs to flex attention (#151959)
- Fix performance regression on nanogpt speedrun (#152641)
Distributed
- Fix extra CUDA context created by barrier (#149144)
- Fix an issue related to Distributed Fused Adam in ROCm/APEX when using the nccl_ub feature (#150010)
- Add a workaround for a random hang in non-blocking API mode in NCCL 2.26 (#154055)
macOS
- Fix macOS compilation error with Clang 17 (#151316)
- Fix binary kernels producing incorrect results when one of the tensor arguments is a wrapped scalar on MPS devices (#152997)
Other
- Improve PyTorch wheel size after the addition of 128-bit vectorization (#148320) (#152396)
- Fix fmsub function definition (#152075)
- Fix floating point exception in torch.mkldnn_max_pool2d (#151848)
- Fix abnormal inference output with the XPU:1 device (#153067)
- Fix illegal instruction caused by grid_sample on Windows (#152613)
- Fix ONNX decomposition not preserving custom CompositeImplicitAutograd ops (#151826)
- Fix error with dynamic linking of the libgomp library (#150084)
- Fix segfault in profiler with Python 3.13 (#153848)
PyTorch 2.7.0 Release Notes
- Highlights
- Tracked Regressions
- Backwards Incompatible Changes
- Deprecations
- New Features
- Improvements
- Bug fixes
- Performance
- Documentation
- Developers
Highlights
... (truncated)
Commits
- e2d141d set thread_work_size to 4 for unrolled kernel (#154541)
- 1214198 [c10d] Fix extra CUDA context created by barrier (#152834)
- 790cc2f [c10d] Add more tests to prevent extra context (#154179)
- 62ea99a [CI] Remove the xpu env source for linux binary validate (#154409)
- 941732c [ROCm] Added unit test to test the cuda_pluggable allocator (#154135)
- 769d5da [binary builds] Linux aarch64 CUDA builds. Make sure tag is set correctly (#1...
- 306ba12 Fix uint view copy (#151598) (#154121)
- 1ae9953 [ROCm] Update CUDAPluggableAllocator.h (#1984) (#153974)
- 4a815ed ci: Set minimum cmake version for halide build (#154122)
- 4c7314e [Dynamo] Fix einops regression (#154053)
- Additional commits viewable in compare view
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- @dependabot rebase will rebase this PR
- @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
- @dependabot merge will merge this PR after your CI passes on it
- @dependabot squash and merge will squash and merge this PR after your CI passes on it
- @dependabot cancel merge will cancel a previously requested merge and block automerging
- @dependabot reopen will reopen this PR if it is closed
- @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
- @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)