[Bug]: BigQueryIO Storage Write API streaming with dynamic destinations conflicts if multiple transforms use the same destination key
What happened?
An edge case leading to data corruption:
In StorageApiWritesShardedRecords, we maintain a client pool via a static Map keyed by the DestinationT type: [1]

If multiple BigQueryIO.write transforms each use dynamic destinations, return the same destination keys, and are processed at the same time on a single worker, a race condition can be triggered, causing rows to be written to the wrong table; if the schemas mismatch, the write fails and keeps retrying.
[1] https://github.com/apache/beam/blob/028e0eef45a96636f45b36d854e02f4334822763/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/StorageApiWritesShardedRecords.java#L551
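For illustration, here is a minimal sketch of a pattern that can trigger the collision (the class name, table specs, and the "shard-0" key are hypothetical, not taken from a real pipeline): two writes in the same pipeline whose DynamicDestinations return the same destination key for different tables.

```java
import com.google.api.services.bigquery.model.TableRow;
import com.google.api.services.bigquery.model.TableSchema;
import org.apache.beam.sdk.io.gcp.bigquery.DynamicDestinations;
import org.apache.beam.sdk.io.gcp.bigquery.TableDestination;
import org.apache.beam.sdk.values.ValueInSingleWindow;

// Hypothetical DynamicDestinations whose key does NOT encode the target table.
class ShardKeyDestinations extends DynamicDestinations<TableRow, String> {
  private final String tableSpec; // e.g. "project:dataset.tableA"

  ShardKeyDestinations(String tableSpec) {
    this.tableSpec = tableSpec;
  }

  @Override
  public String getDestination(ValueInSingleWindow<TableRow> element) {
    // Bug trigger: the key is not unique across transforms, so two writes
    // targeting different tables can collide in the static client-pool map
    // of StorageApiWritesShardedRecords.
    return "shard-0";
  }

  @Override
  public TableDestination getTable(String destination) {
    return new TableDestination(tableSpec, null);
  }

  @Override
  public TableSchema getSchema(String destination) {
    return new TableSchema(); // real schemas would differ per table
  }
}

// Two independent writes on the same worker sharing the key "shard-0"
// (usage sketched in comments; row PCollections elided):
//   rowsA.apply("WriteA", BigQueryIO.writeTableRows()
//       .to(new ShardKeyDestinations("project:dataset.tableA"))
//       .withMethod(BigQueryIO.Write.Method.STORAGE_WRITE_API));
//   rowsB.apply("WriteB", BigQueryIO.writeTableRows()
//       .to(new ShardKeyDestinations("project:dataset.tableB"))
//       .withMethod(BigQueryIO.Write.Method.STORAGE_WRITE_API));
```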
This can be mitigated if the DynamicDestinations implementation is guaranteed to return a different destination key for each table being written to, as sketched below. We should also document this requirement clearly.
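A sketch of that mitigation, reusing the hypothetical class above: derive the destination key from the fully qualified table spec so keys are unique per table.

```java
@Override
public String getDestination(ValueInSingleWindow<TableRow> element) {
  // Mitigation sketch: the key now encodes the target table, so entries in
  // the static client-pool map cannot collide across transforms.
  return tableSpec;
}
```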
Issue Priority
Priority: 3 (minor)
Issue Components
- [ ] Component: Python SDK
- [X] Component: Java SDK
- [ ] Component: Go SDK
- [ ] Component: Typescript SDK
- [ ] Component: IO connector
- [ ] Component: Beam YAML
- [ ] Component: Beam examples
- [ ] Component: Beam playground
- [ ] Component: Beam katas
- [ ] Component: Website
- [ ] Component: Infrastructure
- [ ] Component: Spark Runner
- [ ] Component: Flink Runner
- [ ] Component: Samza Runner
- [ ] Component: Twister2 Runner
- [ ] Component: Hazelcast Jet Runner
- [ ] Component: Google Cloud Dataflow Runner