Ahmed Abualsaud
Run PostCommit_Java_DataflowV2
Yup, left a comment in #22543
The write behaves pretty strangely. Here's what I found: the values do end up being written into BQ, but a new table is created for each value. This is...
Looks like the user-specified `DestinationT` type is first replaced with `TableDestination` during the write [here](https://github.com/apache/beam/blob/04f49848d4b037a5935036927e54c8eb8ed8c361/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/BatchLoads.java#L711-L712) (`DestinationT` in, `TableDestination` out). The `WriteTables` DoFn [returns `TableDestination` types](https://github.com/apache/beam/blob/04f49848d4b037a5935036927e54c8eb8ed8c361/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/WriteTables.java#L364) instead of `DestinationT`, which is...
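To make the type mismatch concrete, here is a minimal sketch (not from the PR; the class and field names are illustrative) of a `DynamicDestinations` with a custom `DestinationT` of `String`. Since `getTable`, `getSchema`, and the destination coder are all keyed by that `DestinationT`, a step that emits `TableDestination` instead breaks those lookups downstream:

```java
import com.google.api.services.bigquery.model.TableRow;
import com.google.api.services.bigquery.model.TableSchema;
import org.apache.beam.sdk.io.gcp.bigquery.DynamicDestinations;
import org.apache.beam.sdk.io.gcp.bigquery.TableDestination;
import org.apache.beam.sdk.values.ValueInSingleWindow;

// Hypothetical user code: routes each row to a per-user table.
// Here DestinationT is String, not TableDestination.
class UserDestinations extends DynamicDestinations<TableRow, String> {
  @Override
  public String getDestination(ValueInSingleWindow<TableRow> element) {
    // The destination key is derived from the element itself.
    return (String) element.getValue().get("user_id");
  }

  @Override
  public TableDestination getTable(String destination) {
    // Lookup keyed by the user-specified DestinationT.
    return new TableDestination("my-project:my_dataset.users_" + destination, null);
  }

  @Override
  public TableSchema getSchema(String destination) {
    // Schema lookup is keyed by the same DestinationT.
    return new TableSchema();
  }
}
```

A pipeline would pass this via `BigQueryIO.writeTableRows().to(new UserDestinations())`, so anything internal that swaps the `String` key for a `TableDestination` no longer matches what `getSchema` and friends expect.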
With the changes in #22624 I was able to write successfully with the following pipeline:

```java
import com.google.api.services.bigquery.model.TableFieldSchema;
import com.google.api.services.bigquery.model.TableRow;
import com.google.api.services.bigquery.model.TableSchema;
import java.util.Arrays;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.CreateDisposition;
...
```
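The comment is truncated above. As a rough guide, here is a minimal sketch of what a pipeline with those imports could look like, assuming a simple batch-load write; the table spec and field names are placeholders, not the ones from the PR:

```java
import com.google.api.services.bigquery.model.TableFieldSchema;
import com.google.api.services.bigquery.model.TableRow;
import com.google.api.services.bigquery.model.TableSchema;
import java.util.Arrays;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.CreateDisposition;
import org.apache.beam.sdk.io.gcp.bigquery.TableRowJsonCoder;
import org.apache.beam.sdk.transforms.Create;

public class BigQueryWriteSketch {
  public static void main(String[] args) {
    // Placeholder schema: a single STRING column.
    TableSchema schema =
        new TableSchema()
            .setFields(Arrays.asList(
                new TableFieldSchema().setName("value").setType("STRING")));

    Pipeline p = Pipeline.create();
    p.apply(Create.of(
                new TableRow().set("value", "a"),
                new TableRow().set("value", "b"))
            .withCoder(TableRowJsonCoder.of()))
        .apply(BigQueryIO.writeTableRows()
            .to("my-project:my_dataset.my_table") // placeholder tableSpec
            .withSchema(schema)
            .withCreateDisposition(CreateDisposition.CREATE_IF_NEEDED));
    // Actually running this needs GCP credentials and a --tempLocation
    // for the batch-load staging files.
    p.run().waitUntilFinish();
  }
}
```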
R: @johnjcasey R: @chamikaramj
Run Python PreCommit