Bump the all-deps group across 1 directory with 5 updates
Bumps the all-deps group with 5 updates in the / directory: github.com/apache/beam/sdks/v2, github.com/usbarmory/tamago, golang.org/x/mod, golang.org/x/oauth2 and google.golang.org/grpc.
Updates github.com/apache/beam/sdks/v2 from 2.68.0 to 2.69.0
Release notes
Sourced from github.com/apache/beam/sdks/v2's releases.
Beam 2.69.0 release
We are happy to present the new 2.69.0 release of Beam. This release includes both improvements and new functionality. See the download page for this release.
For more information on changes in 2.69.0, check out the detailed release notes.
Highlights
I/Os
- Upgraded Iceberg dependency to 1.10.0 (#36123).
New Features / Improvements
- Enhance JAXBCoder with XMLInputFactory support (Java) (#36446).
- Python examples added for CloudSQL enrichment handler on Beam website (Python) (#35473).
- Support for batch mode execution in WriteToPubSub transform added (Python) (#35990).
- Added official support for Python 3.13 (#34869).
- Added an optional output_schema verification to all YAML transforms (#35952).
- Support for encryption when using GroupByKey added, along with a --gbek pipeline option to automatically replace all GroupByKey transforms (Java/Python) (#36214).
Breaking Changes
- (Python) dill is no longer a required, default dependency for Apache Beam (#21298).
  - This change only affects pipelines that explicitly use the pickle_library=dill pipeline option.
  - While dill==0.3.1.1 is still pre-installed on the official Beam SDK base images, it is no longer a direct dependency of the apache-beam Python package. This means it can be overridden by other dependencies in your environment.
  - If your pipeline uses pickle_library=dill, you must manually ensure dill==0.3.1.1 is installed in both your submission and runtime environments.
    - Submission environment: install the dill extra in your local environment: pip install apache-beam[gcp,dill].
    - Runtime (worker) environment: your action depends on how you manage your worker's environment.
      - If using default containers, or custom containers with the official Beam base image (e.g. FROM apache/beam_python3.10_sdk:2.69.0):
        - Add dill==0.3.1.1 to your worker's requirements file (e.g. requirements.txt).
        - Pass this file to your pipeline using the --requirements_file requirements.txt pipeline option (for more details see managing Dataflow dependencies).
      - If using custom containers with a non-Beam base image (e.g. FROM python:3.9-slim):
        - Install apache-beam with the dill extra in your Dockerfile, e.g. RUN pip install --no-cache-dir apache-beam[gcp,dill].
  - If there is a dill version mismatch between the submission and runtime environments you might encounter unpickling errors like Can't get attribute '_create_code' on <module 'dill._dill' from....
  - If dill is not installed in the runtime environment you will see the error ImportError: Pipeline option pickle_library=dill is set, but dill is not installed...
  - Report any issues you encounter when using pickle_library=dill to the GitHub issue (#21298).
- (Python) Added a pickle_library=dill_unsafe pipeline option. This allows overriding dill==0.3.1.1 while still using dill as the pickle_library. Use with extreme caution: other versions of dill have not been tested with Apache Beam (#21298).
- (Python) The deterministic fallback coder for complex types like NamedTuple, Enum, and dataclasses now normalizes file paths for better determinism guarantees. This affects streaming pipelines updating from 2.68 to 2.69 that use this fallback coder. If your pipeline is affected, you may see a warning like: "Using fallback deterministic coder for type X...". To update safely, specify the pipeline option --update_compatibility_version=2.68.0 (#36345).
- (Python) Fixed a transform naming conflict when executing DataTransform on a dictionary of PColls (#30445). This may break update compatibility if you don't provide a --transform_name_mapping.
- Removed deprecated Hadoop versions (2.10.2 and 3.2.4) that are no longer supported for Iceberg from IcebergIO (#36282).
- (Go) Coder construction on the SDK side is now more faithful to the specs from runners, without stripping the length-prefix. This may break streaming pipeline update, as the underlying coder could change (#36387).
- Minimum Go version for Beam Go updated to 1.25.2 (#36461).
... (truncated)
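The Go coder change above concerns length-prefix framing, where each encoded element is preceded by its byte length as a varint. A minimal stdlib-only sketch of that framing (the helper names here are illustrative, not Beam's API):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// lengthPrefix frames an encoded element by prepending its byte
// length as an unsigned varint, the framing a length-prefix coder adds.
func lengthPrefix(element []byte) []byte {
	buf := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(buf, uint64(len(element)))
	return append(buf[:n], element...)
}

// stripPrefix reverses the framing, returning the element bytes.
func stripPrefix(framed []byte) ([]byte, error) {
	length, n := binary.Uvarint(framed)
	if n <= 0 {
		return nil, fmt.Errorf("invalid varint prefix")
	}
	return framed[n : n+int(length)], nil
}

func main() {
	framed := lengthPrefix([]byte("hello"))
	fmt.Println(framed[0]) // prints 5: the varint length prefix

	element, _ := stripPrefix(framed)
	fmt.Println(string(element)) // prints hello
}
```

Whether a runner sees the element wrapped this way or raw is exactly the kind of coder-spec detail the change makes the SDK preserve, which is why an in-place streaming update can fail if the wire format shifts.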
Changelog
Sourced from github.com/apache/beam/sdks/v2's changelog.
[2.69.0] - 2025-10-28
Highlights
I/Os
- Upgraded Iceberg dependency to 1.10.0 (#36123).
New Features / Improvements
- Enhance JAXBCoder with XMLInputFactory support (Java) (#36446).
- Python examples added for CloudSQL enrichment handler on Beam website (Python) (#35473).
- Support for batch mode execution in WriteToPubSub transform added (Python) (#35990).
- Added official support for Python 3.13 (#34869).
- Added an optional output_schema verification to all YAML transforms (#35952).
- Support for encryption when using GroupByKey added, along with a --gbek pipeline option to automatically replace all GroupByKey transforms (Java/Python) (#36214).
- In the Python SDK, the --element_processing_timeout_minutes option will also interrupt the SDK process if slowness happens during DoFn initialization, for example in DoFn.setup() (#36518).
Breaking Changes
- (Python) dill is no longer a required, default dependency for Apache Beam (#21298).
  - This change only affects pipelines that explicitly use the pickle_library=dill pipeline option.
  - While dill==0.3.1.1 is still pre-installed on the official Beam SDK base images, it is no longer a direct dependency of the apache-beam Python package. This means it can be overridden by other dependencies in your environment.
  - If your pipeline uses pickle_library=dill, you must manually ensure dill==0.3.1.1 is installed in both your submission and runtime environments.
    - Submission environment: install the dill extra in your local environment: pip install apache-beam[gcp,dill].
    - Runtime (worker) environment: your action depends on how you manage your worker's environment.
      - If using default containers, or custom containers with the official Beam base image (e.g. FROM apache/beam_python3.10_sdk:2.69.0):
        - Add dill==0.3.1.1 to your worker's requirements file (e.g. requirements.txt).
        - Pass this file to your pipeline using the --requirements_file requirements.txt pipeline option (for more details see managing Dataflow dependencies).
      - If using custom containers with a non-Beam base image (e.g. FROM python:3.9-slim):
        - Install apache-beam with the dill extra in your Dockerfile, e.g. RUN pip install --no-cache-dir apache-beam[gcp,dill].
  - If there is a dill version mismatch between the submission and runtime environments you might encounter unpickling errors like Can't get attribute '_create_code' on <module 'dill._dill' from....
  - If dill is not installed in the runtime environment you will see the error ImportError: Pipeline option pickle_library=dill is set, but dill is not installed...
  - Report any issues you encounter when using pickle_library=dill to the GitHub issue (#21298).
- (Python) Added a pickle_library=dill_unsafe pipeline option. This allows overriding dill==0.3.1.1 while still using dill as the pickle_library. Use with extreme caution: other versions of dill have not been tested with Apache Beam (#21298).
- (Python) The deterministic fallback coder for complex types like NamedTuple, Enum, and dataclasses now normalizes file paths for better determinism guarantees. This affects streaming pipelines updating from 2.68 to 2.69 that use this fallback coder. If your pipeline is affected, you may see a warning like: "Using fallback deterministic coder for type X...". To update safely, specify the pipeline option --update_compatibility_version=2.68.0 (#36345).
- (Python) Fixed a transform naming conflict when executing DataTransform on a dictionary of PColls (#30445). This may break update compatibility if you don't provide a --transform_name_mapping.
- Removed deprecated Hadoop versions (2.10.2 and 3.2.4) that are no longer supported for Iceberg from IcebergIO (#36282).
- (Go) Coder construction on the SDK side is now more faithful to the specs from runners, without stripping the length-prefix. This may break streaming pipeline update, as the underlying coder could change (#36387).
- Minimum Go version for Beam Go updated to 1.25.2 (#36461).
- (Java) DoFn OutputReceiver now requires implementing a builder method as part of extended metadata support for elements (#34902).
- (Java) Removed the ProcessContext outputWindowedValue introduced in 2.68 that allowed setting offset and record Id. Use OutputReceiver's builder to set those fields (#36523).
Bugfixes
- Fixed passing of pipeline options to x-lang transforms when called from the Java SDK (Java) (#36443).
... (truncated)
Commits
- 2ca7d21 Set version for 2.69.0 RC3
- 976cbc3 Revert "Add GRPC experiments to Python dockerfile (#36525)" (#36573)
- ecdc49b Fix dependency version (#36570)
- 5729522 Merge pull request #36567 from Abacn/cp-36564
- 7d85f18 Fix BigQueryIO File load validate runtime value provider (#36564)
- c6e2df0 CP: Fix publishing of ml/distroless images (#36565)
- b48d832 [release-2.69] Cherrypick #36518 to the branch. (#36551)
- 8739e50 Set Dataflow container to release version.
- ed39503 Skip TestTimers_ProcessingTime_Unbounded for spark. (#36527)
- abf1904 Merge pull request #36523: revert outputWindowedValue changes from KafkaIO as...
- Additional commits viewable in compare view
Updates github.com/usbarmory/tamago from 1.25.1 to 1.25.4
Release notes
Sourced from github.com/usbarmory/tamago's releases.
v1.25.4
This release includes minor cosmetic changes and is issued to track and verify alignment with tamago-go1.25.4.
Major changes:
- imx6ul: add UART3 and UART4 instances (https://github.com/usbarmory/tamago/commit/af14e71e27628166dd705991c85622d00976ae4c)
- intel/uart, nxp/uart: avoid scheduler starvation when no data is available to read (https://github.com/usbarmory/tamago/commit/f67ba6cc6f3b368c12a8f7b3483d01156b01a003)
Full Changelog: https://github.com/usbarmory/tamago/compare/v1.25.3...v1.25.4
This release requires GOOS=tamago support in the Go distribution with tamago-go1.25.2 or later releases.
v1.25.3
This release includes minor cosmetic changes and is issued to track and verify alignment with tamago-go1.25.3.
Full Changelog: https://github.com/usbarmory/tamago/compare/v1.25.2...v1.25.3
This release requires GOOS=tamago support in the Go distribution with tamago-go1.25.2 or later releases.
v1.25.2
This release introduces improved CPU idle management support functions for amd64, extending the pattern enabled by the previous release on single-core to multi-core CPUs:

    runtime.Idle = func(pollUntil int64) {
        if pollUntil == 0 {
            return
        }

        cpu.SetAlarm(pollUntil)
        cpu.WaitInterrupt()
        cpu.SetAlarm(0)
    }
Major changes for tamago package API:
- amd64, arm, riscv64: (*CPU).DefaultIdleGovernor new function to export default CPU idle time management
- amd64: (*CPU).ClearInterrupts new function to signal end-of-interrupt safely under SMP (https://github.com/usbarmory/tamago/commit/e4346ed4eba633359ce92de38de2a2ee37efca62)
- amd64: (*CPU).EnableInterrupts function deprecated in favor of (*CPU).ClearInterrupts (https://github.com/usbarmory/tamago/commit/e4346ed4eba633359ce92de38de2a2ee37efca62)
Major changes for tamago package internals:
- amd64: interrupts are now enabled on supplemental cores (APs)
- amd64: IRQ handling implementation improved to prevent SMP race conditions
- amd64: tight loops on register wait are now avoided also under SMP (https://github.com/usbarmory/tamago/commit/c4bd7842477c76a8a1367f46119ac09bbef4da5a)
- amd64: fix page tables setup for correct operation under WSL and Google Cloud KVMs (usbarmory/tamago#53)
Full Changelog: https://github.com/usbarmory/tamago/compare/v1.25.1...v1.25.2
This release requires GOOS=tamago support in the Go distribution; it requires at least tamago-go1.25.2.
Commits
- f67ba6c avoid scheduler starvation when no data is available to read on UARTs
- df28b70 Merge pull request #54 from andrejro/add-imx6ul-uarts
- af14e71 imx6ul: add UART3 and UART4 configurations
- 559d92a tidying
- 65252c0 tidying
- fdbfb50 note Google Compute Engine execution for microvm
- 3f36cd2 note Google Compute Engine execution for microvm
- 2ab3a72 bump toolchain
- 69948a1 Merge pull request #53 from andrejro/fix-amd64-page-tables-initialization
- a05c1f6 amd64: fix 4kB page table initialization
- Additional commits viewable in compare view
Updates golang.org/x/mod from 0.29.0 to 0.30.0
Commits
- 7416265 go.mod: update golang.org/x dependencies
- 5517a71 all: fix some comments
- b6cdd1a modfile: use reflect.TypeFor instead of reflect.TypeOf
- See full diff in compare view
Updates golang.org/x/oauth2 from 0.31.0 to 0.32.0
Commits
- 792c877 oauth2: use strings.Builder instead of bytes.Buffer
- See full diff in compare view
Updates google.golang.org/grpc from 1.75.1 to 1.76.0
Release notes
Sourced from google.golang.org/grpc's releases.
Release 1.76.0
Dependencies
Bug Fixes
- client: Return status INTERNAL when a server sends zero response messages for a unary or client-streaming RPC. (#8523)
- client: Fail RPCs with status INTERNAL instead of UNKNOWN upon receiving http headers with status 1xx and the END_STREAM flag set. (#8518)
  - Special Thanks: @vinothkumarr227
- pick_first: Fix race condition that could cause pick_first to get stuck in the IDLE state on backend address change. (#8615)
New Features
- credentials: Add a credentials/jwt package providing file-based JWT PerRPCCredentials (A97). (#8431)
  - Special Thanks: @dimpavloff
Performance Improvements
- client: Improve HTTP/2 header size estimate to reduce re-allocations. (#8547)
- encoding/proto: Avoid redundant message size calculation when marshaling. (#8569)
  - Special Thanks: @rs-unity
Commits
- d96c2ef Change version to 1.76.0 (#8584)
- 79c553c Cherry pick #8610, #8615 to v1.76.x (#8621)
- 0513350 client: minor improvements to log messages (#8564)
- ebaf486 credentials: implement file-based JWT Call Credentials (part 1 for A97) (#8431)
- ca78c90 xds/resolver_test: fix flaky test ResolverBadServiceUpdate_NACKedWithoutCache...
- 83bead4 internal/buffer: set closed flag when closing channel in the Load method (#8575)
- 0f45079 encoding/proto: enable use cached size option (#8569)
- 8420f3f transport: avoid slice reallocation during header creation (#8547)
- b36320e Revert "stats/opentelemetry: record retry attempts from clientStream (#8342)"...
- c122250 stats/opentelemetry: record retry attempts from clientStream (#8342)
- Additional commits viewable in compare view
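Taken together, after this group bump the consuming module's go.mod require block carries the versions listed above. A sketch showing only the five bumped modules (other requirements omitted):

```
require (
	github.com/apache/beam/sdks/v2 v2.69.0
	github.com/usbarmory/tamago v1.25.4
	golang.org/x/mod v0.30.0
	golang.org/x/oauth2 v0.32.0
	google.golang.org/grpc v1.76.0
)
```

Note that Beam 2.69.0 also raises the minimum Go version for the Beam Go SDK to 1.25.2, so the module's go directive must be at least that high.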
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- @dependabot rebase will rebase this PR
- @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
- @dependabot merge will merge this PR after your CI passes on it
- @dependabot squash and merge will squash and merge this PR after your CI passes on it
- @dependabot cancel merge will cancel a previously requested merge and block automerging
- @dependabot reopen will reopen this PR if it is closed
- @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
- @dependabot ignore <dependency name> major version will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)
- @dependabot ignore <dependency name> minor version will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)
- @dependabot ignore <dependency name> will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)
- @dependabot unignore <dependency name> will remove all of the ignore conditions of the specified dependency
- @dependabot unignore <dependency name> <ignore condition> will remove the ignore condition of the specified dependency and ignore conditions
/gcbrun