Rajeshbabu Chintaguntla
Coral can be used not just to convert queries between popular query engines like Spark, Hive, and Trino; it can also be used to get optimized plans by applying all...
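A rough sketch of that conversion flow, based on Coral's public examples (the exact class and package names are assumptions and may differ between Coral versions; `RelToTrinoConverter` in particular has changed its construction over time):

```java
import org.apache.calcite.rel.RelNode;
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.metadata.Hive;

import com.linkedin.coral.common.HiveMscAdapter;
import com.linkedin.coral.hive.hive2rel.HiveToRelConverter;
import com.linkedin.coral.trino.rel2trino.RelToTrinoConverter;

public class CoralHiveToTrinoSketch {
  public static void main(String[] args) throws Exception {
    // A metastore-backed client; Coral uses it to resolve table schemas
    // while parsing the HiveQL text.
    HiveMscAdapter metastoreClient = new HiveMscAdapter(Hive.get(new HiveConf()).getMSC());

    // HiveQL -> Calcite RelNode, the intermediate plan Coral works on.
    HiveToRelConverter hiveToRel = new HiveToRelConverter(metastoreClient);
    RelNode plan = hiveToRel.convertSql("SELECT id FROM db.tbl WHERE id > 0");

    // Calcite planner rules could be applied to `plan` here to optimize it;
    // below it is simply rendered back out as Trino SQL.
    String trinoSql = new RelToTrinoConverter().convert(plan);
    System.out.println(trinoSql);
  }
}
```

Because the intermediate representation is a plain Calcite `RelNode`, the same plan can be optimized once and rendered for any engine Coral has a converter for.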
Currently the controller and servers are able to start with an s3a path, but segment creation during ingestion fails with the following error. The reason is that while preparing file names we are prefixing...
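For illustration only, since the actual Pinot code path is elided above: a common way s3a paths get mangled when preparing file names is building them with `java.io.File`, which collapses the double slash after the scheme. A minimal sketch of that behavior on a Unix filesystem:

```java
import java.io.File;

public class S3aPathManglingSketch {
  public static void main(String[] args) {
    String outputDirUri = "s3a://my-bucket/pinot/segments"; // hypothetical output dir
    // java.io.File normalizes consecutive separators, so the "//" after the
    // scheme is collapsed and the result is no longer a valid s3a URI.
    File segmentFile = new File(outputDirUri, "myTable_0.tar.gz");
    System.out.println(segmentFile.getPath());
    // prints: s3a:/my-bucket/pinot/segments/myTable_0.tar.gz
  }
}
```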
Currently, a lot of code is duplicated between the Spark 2.x and Spark 3.x batch ingestion, other than the SparkSegmentMetadataPushJobRunner class. This ticket is to refactor the code to eliminate the duplicate code...
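One possible shape for the refactor, assuming Pinot's `IngestionJobRunner` SPI (`init`/`run`): move the shared push logic into a common base class and leave only the genuinely version-specific pieces in the spark-2.x and spark-3.x plugin modules. The class and hook names below are hypothetical:

```java
import org.apache.pinot.spi.ingestion.batch.runner.IngestionJobRunner;
import org.apache.pinot.spi.ingestion.batch.spec.SegmentGenerationJobSpec;
import org.apache.spark.api.java.JavaSparkContext;

// Hypothetical base class living in a shared module.
public abstract class BaseSparkSegmentMetadataPushJobRunner implements IngestionJobRunner {
  protected SegmentGenerationJobSpec _spec;

  @Override
  public void init(SegmentGenerationJobSpec spec) {
    _spec = spec;
  }

  // Hook for whatever actually differs between Spark versions,
  // e.g. how the context or session is obtained.
  protected abstract JavaSparkContext createSparkContext();

  @Override
  public void run() throws Exception {
    JavaSparkContext sparkContext = createSparkContext();
    // ... the push logic that is currently copy-pasted into both modules ...
  }
}
```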
As mentioned in https://github.com/apache/pinot/pull/14015#discussion_r1796831201, just raising a ticket to track cleaning up and sorting the dependencies in the pom files.
Currently some of the job specification properties are defined in each implementation of batch ingestion; for example, the Hadoop job runner has defined the job spec constants in its corresponding implementation:...
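A minimal sketch of the deduplication this ticket suggests: pull the per-runner property keys into one shared constants class that all batch ingestion implementations reference. The class and key names below are illustrative, not the actual properties defined in the Hadoop job runner today:

```java
// Hypothetical shared constants class for batch ingestion job specs.
public final class BatchJobSpecConstants {
  private BatchJobSpecConstants() {
  }

  public static final String STAGING_DIR_URI = "stagingDirURI";
  public static final String INPUT_DIR_URI = "inputDirURI";
  public static final String OUTPUT_DIR_URI = "outputDirURI";
}
```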
As part of PR https://github.com/apache/pinot/pull/8048, the Hadoop and Spark plugins were moved to the plugins-external directory during assembly. This change is not documented; it would be better to update the documentation accordingly....
The ZooKeeper browser always shows a blank page. When I dug deeper, I found the following error: `main.js:17441 Uncaught (in promise) TypeError: Cannot read properties of null (reading 'numChildren') at main.js:17441:59 at Array.map...