Danny McCormick
As a Beam Playground __user__, I want to create a permanent link to the code sample I created in Beam Playground, so I can share this Beam Playground sample outside...
In Beam 2.38.0 I encounter the following stack trace when I try to drain a Dataflow pipeline. During normal execution the pipeline runs flawlessly, but it gets stuck during draining. I don't...
Unfortunately I haven't been able to diagnose the exact issue here or come up with a minimal repro. I just have some code that reproduces it in https://github.com/apache/beam/pull/16445. That PR adds...
Currently, close happens in processElement, which is per-window. If many windows are firing, this can throttle throughput waiting on IO, instead of closing in parallel in finishBundle. Imported from...
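The close-in-finishBundle idea above can be sketched in framework-free Python. This is illustrative only: the class and method names model the DoFn lifecycle (process per element, finish_bundle once per bundle) and are not Beam's actual API.

```python
from concurrent.futures import ThreadPoolExecutor


class DeferredCloseFn:
    """Models a DoFn that defers closing per-window IO resources.

    Instead of closing each writer inside process() (one blocking
    close per window firing), writers are collected and then closed
    concurrently when the bundle finishes.
    """

    def __init__(self, max_workers=8):
        self._max_workers = max_workers
        self._open_writers = []

    def process(self, writer):
        # Do the per-window work, but do NOT close here.
        writer.write()
        self._open_writers.append(writer)

    def finish_bundle(self):
        # Close all collected writers in parallel instead of serially.
        with ThreadPoolExecutor(max_workers=self._max_workers) as pool:
            list(pool.map(lambda w: w.close(), self._open_writers))
        self._open_writers = []


class FakeWriter:
    """Stand-in for an IO writer whose close() blocks on IO."""

    def __init__(self):
        self.closed = False

    def write(self):
        pass

    def close(self):
        self.closed = True
```

The trade-off is that resources stay open for the lifetime of the bundle, but close latencies overlap rather than accumulate per window.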
Imported from Jira [BEAM-13049](https://issues.apache.org/jira/browse/BEAM-13049). Original Jira may contain additional context. Reported by: aromanenko. Subtask of issue #21253
KafkaIO should raise an error if both .withReadCommitted() and .commitOffsetsInFinalize() are used
Read committed tells KafkaIO to only read messages that are already committed, which means that committing offsets in finalize is a no-op. Users should be using one or the other...
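Since read_committed already guarantees only committed records are read, committing offsets at finalization adds nothing. A minimal sketch of the proposed validation, using a stand-in config object rather than KafkaIO.Read's real builder (names here are illustrative):

```python
class KafkaReadConfig:
    """Stand-in for the relevant KafkaIO.Read settings (illustrative)."""

    def __init__(self, isolation_level="read_uncommitted",
                 commit_offsets_in_finalize=False):
        self.isolation_level = isolation_level
        self.commit_offsets_in_finalize = commit_offsets_in_finalize


def validate(config):
    """Raise if the two mutually exclusive options are combined."""
    if (config.isolation_level == "read_committed"
            and config.commit_offsets_in_finalize):
        raise ValueError(
            "withReadCommitted() and commitOffsetsInFinalize() are "
            "mutually exclusive: under read_committed, offsets "
            "committed at finalization are a no-op.")
    return config
```

In the real Java connector this check would presumably live in the transform's expand/validate step so it fails at pipeline construction time rather than at runtime.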
[https://github.com/apache/beam/blob/fd8546355523f67eaddc22249606fdb982fe4938/sdks/java/io/kafka/src/main/java/org/apache/beam/sdk/io/kafka/ConsumerSpEL.java#L180-L198](https://github.com/apache/beam/blob/fd8546355523f67eaddc22249606fdb982fe4938/sdks/java/io/kafka/src/main/java/org/apache/beam/sdk/io/kafka/ConsumerSpEL.java#L180-L198) Right now the 'startReadTime' config for KafkaIO.Read looks up an offset in every topic partition that is newer than or equal to that timestamp. The problem is that if we...
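For context on the lookup described above: Kafka's offsetsForTimes returns, per partition, the earliest offset whose record timestamp is greater than or equal to the target, or null when no record is new enough. A toy Python model of that per-partition lookup (the flat list-of-timestamps layout is made up for illustration):

```python
def offset_for_time(timestamps, target_ts):
    """Return the earliest offset whose timestamp >= target_ts.

    `timestamps` holds the (non-decreasing) record timestamp at each
    offset of one partition. Returns None when no record is new
    enough, mirroring the null result of Kafka's offsetsForTimes.
    """
    for offset, ts in enumerate(timestamps):
        if ts >= target_ts:
            return offset
    return None
```

So a partition with no records at or after startReadTime yields no offset at all, which is the edge case the issue text appears to be heading toward.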
Please see the Stack Overflow discussion: [https://stackoverflow.com/questions/68125864/transform-node-appliedptransform-was-not-replaced-as-expected-error-with-the-dir](https://stackoverflow.com/questions/68125864/transform-node-appliedptransform-was-not-replaced-as-expected-error-with-the-dir) When I create a GCS source & a Pub/Sub source and try to flatten both, there is an error because of some incompatible transformation...
Some features of the ElasticsearchIO connector[1] can only be verified by testing against a cluster with SSL and user/password authentication enabled. We should provide a way to test that kind...
Currently we only have a sink, and the implementation is here: https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/gcp/bigtableio.py For example, we should properly identify various error codes and retry failed mutations appropriately. Imported from Jira [BEAM-13849](https://issues.apache.org/jira/browse/BEAM-13849)....
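As an illustration of the kind of retry policy the issue asks for, here is a hedged sketch that re-applies only mutations failing with transient gRPC-style status codes. The retryable code set and the helper names are assumptions for illustration, not bigtableio.py's actual API; the real retryable set should come from the Bigtable client library.

```python
# Status codes typically considered safe to retry (assumption for
# illustration; not an authoritative list).
TRANSIENT_CODES = {"DEADLINE_EXCEEDED", "UNAVAILABLE", "ABORTED"}


def retry_failed_mutations(apply_mutations, mutations, max_attempts=3):
    """Apply mutations, retrying only transient failures.

    `apply_mutations(batch)` is a caller-supplied function returning a
    list of (mutation, status_code) pairs, one per input mutation;
    "OK" means success. Non-retryable failures raise immediately.
    """
    pending = list(mutations)
    for _ in range(max_attempts):
        if not pending:
            break
        results = apply_mutations(pending)
        permanent = [(m, code) for m, code in results
                     if code != "OK" and code not in TRANSIENT_CODES]
        if permanent:
            raise RuntimeError(
                "non-retryable mutation failures: %r" % permanent)
        # Only transiently failed mutations go into the next attempt.
        pending = [m for m, code in results if code in TRANSIENT_CODES]
    if pending:
        raise RuntimeError("mutations still failing after retries")
    return len(mutations)
```

A production version would also add exponential backoff between attempts and surface per-mutation errors via a dead-letter output rather than raising.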