Maulik Soneji
As part of this pull request, we optimize heap and CPU usage as follows: 1. Create ProtoField subclasses only once instead of creating them for each record. This...
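A minimal sketch of the "create once, reuse per record" pattern this PR describes, assuming a cache keyed by field type; `ProtoFieldCache` and the converter shape are illustrative, not beast's actual API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch: build one converter per field type and reuse it for
// every record, instead of allocating a new ProtoField object per record.
class ProtoFieldCache {
    private final Map<String, Function<Object, Object>> cache = new ConcurrentHashMap<>();

    // Returns the cached converter for a field type, building it only on first use.
    Function<Object, Object> converterFor(String fieldType) {
        return cache.computeIfAbsent(fieldType, type -> value -> value /* stand-in conversion */);
    }

    int size() {
        return cache.size();
    }
}
```

With this shape, processing N records touches the allocator once per field type rather than once per record, which is the heap saving the PR claims.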
The current check for BQInsertErrors, in the case where messages fall outside the streaming insert range, relies on the response message from the BigQuery API. Since the BigQuery streaming insert ranges...
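One way to avoid depending on the API's response message is to validate event timestamps client-side before the insert. A hedged sketch, where the past/future bounds are configurable placeholders rather than BigQuery's documented limits:

```java
import java.time.Duration;
import java.time.Instant;

// Illustrative pre-insert range check; bounds are injected, not hardcoded,
// since the acceptable streaming window is a deployment-level concern here.
class InsertRangeCheck {
    private final Duration maxPast;
    private final Duration maxFuture;

    InsertRangeCheck(Duration maxPast, Duration maxFuture) {
        this.maxPast = maxPast;
        this.maxFuture = maxFuture;
    }

    // True if the event time lies within [now - maxPast, now + maxFuture].
    boolean isWithinRange(Instant eventTime, Instant now) {
        return !eventTime.isBefore(now.minus(maxPast))
            && !eventTime.isAfter(now.plus(maxFuture));
    }
}
```

Rows failing this check can be routed to a dead-letter path up front, instead of being inferred from an error string after the insert fails.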
**Problem** Currently, there is only one offset committer thread that acknowledges successful consumption back to Kafka. As per the beast architecture, the Consumer, BQ Workers, and Acknowledger threads work independently...
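The single-committer bottleneck described above could be relieved by a pool of committer workers draining a shared acknowledgement queue. A sketch under that assumption; the class and the "commit" stand-in are hypothetical, not beast's actual Acknowledger:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: several committer workers poll a shared queue of
// acknowledged offsets, rather than a single committer thread doing all work.
class AckCommitterPool {
    // Drains the queue with `workers` threads; returns how many offsets were committed.
    static int drain(BlockingQueue<Long> acks, int workers) {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        ConcurrentLinkedQueue<Long> committed = new ConcurrentLinkedQueue<>();
        for (int i = 0; i < workers; i++) {
            pool.submit(() -> {
                Long offset;
                while ((offset = acks.poll()) != null) {
                    committed.add(offset); // stand-in for a Kafka offset commit call
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return committed.size();
    }
}
```

Note that with real Kafka commits, per-partition ordering of commits would still need to be preserved; this sketch only shows the threading shape.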
The Beast process hangs when it is unable to find GOOGLE_CREDENTIALS. Here is the stack trace:

```
java.io.FileNotFoundException: / (No such file or directory)
	at java.io.FileInputStream.open0(Native Method)
	at ...
```
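A possible fix direction is to fail fast with a descriptive error when the variable is missing, instead of letting the FileNotFoundException above surface later. A minimal sketch; the variable name mirrors the issue text, and the helper is hypothetical:

```java
import java.util.Map;

// Illustrative startup check: validate the credentials variable before any
// file is opened, so a misconfiguration fails loudly instead of hanging.
class CredentialsCheck {
    static String resolve(Map<String, String> env) {
        String path = env.get("GOOGLE_CREDENTIALS");
        if (path == null || path.trim().isEmpty()) {
            throw new IllegalStateException(
                "GOOGLE_CREDENTIALS is not set; expected a path to a service account JSON file");
        }
        return path;
    }
}
```

In practice this would run once at boot (passing `System.getenv()`) before any worker threads start.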
Currently, when BigQuery throws an error because of rate limiting, the error message shown is:

```
StopEvent{reason='FailureStatus{cause=java.lang.RuntimeException: Push failed, message='null'}', source='BqQueueWorker'}
```

Instead, we can use the message in FailureStatus...
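One way to surface a useful message, assuming the real cause is buried in a wrapped exception chain, is to walk to the root cause when logging. A sketch with hypothetical names, not beast's FailureStatus API:

```java
// Illustrative helper: walk the exception chain to surface the most specific
// (root-cause) message in logs, instead of the generic "Push failed" wrapper.
class RootCauseMessage {
    static String of(Throwable t) {
        Throwable cause = t;
        while (cause.getCause() != null && cause.getCause() != cause) {
            cause = cause.getCause();
        }
        String msg = cause.getMessage();
        return (msg != null) ? msg : cause.getClass().getSimpleName();
    }
}
```

Logging `RootCauseMessage.of(cause)` inside the StopEvent would then show the rate-limiting detail rather than `message='null'`.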
I am trying to use `DirectBigQueryInputFormat`, which leverages the BigQuery Storage API to fetch records from BigQuery. There are around 10 million rows that I am trying to fetch using this API,...
addresses https://issues.apache.org/jira/browse/PARQUET-1885
Please provide some information about how to log the messages that are sent over the mesh, and about which transport layer is used to send messages.