Raunaq Morarka
https://github.com/trinodb/trino/issues/9359 (Implement writing of page level column indexes)
Please update the commit message to `Use ImmutableList for descriptor map key in parquet`
`hive.max-partitions-per-scan` could be used by admins to block queries which scan a huge number of partitions. If we're going to allow running such queries now without partition pruning, that totally...
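For illustration, a minimal sketch of how an admin might tighten this limit in a Hive catalog file; the value shown is a hypothetical example, not a recommendation:

```properties
# etc/catalog/hive.properties
# Fail any query whose table scan would cover more partitions than this.
# 10000 is a hypothetical example value.
hive.max-partitions-per-scan=10000
```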
> For cost there's the better option now with `query.max-scan-physical-bytes`

`query.max-scan-physical-bytes` is useful, but the problem with it is that the limit will be enforced after the specified amount of...
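For context, a minimal sketch of setting this limit cluster-wide; the 1TB value is a hypothetical example, and the limit can also be adjusted per session via the `query_max_scan_physical_bytes` session property:

```properties
# etc/config.properties
# Fail a query once it has scanned more than this many physical bytes.
# As noted above, enforcement happens only after the data has already been read.
# 1TB is a hypothetical example value.
query.max-scan-physical-bytes=1TB
```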
> I'm trying to backport to 391 and test it. I tried to run the tests and a few of them were failing; there might be some incompatibility with the 391 Parquet...
> > Thanks, @raunaqmorarka, for the detailed response! In that case, as Bloom Filters are supported by Iceberg+Spark for the Parquet file format, it will be worth supporting Trino...
Fixed by https://github.com/trinodb/trino/pull/13695
Read support has been implemented by https://github.com/trinodb/trino/pull/14428. Write support has not been added yet: https://github.com/trinodb/trino/issues/16536
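As a sketch of using the new read support, the Hive connector exposes a configuration toggle for reading Parquet bloom filters; the property name below is an assumption based on the Hive connector configuration reference and should be verified against the docs for your Trino version:

```properties
# etc/catalog/hive.properties
# Let the reader consult Parquet bloom filters to skip row groups where possible.
# Property name assumed from the Hive connector configuration reference;
# verify against the documentation for your Trino version.
parquet.use-bloom-filter=true
```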
@mewwts unfortunately that detail is not documented yet; you can refer to the relevant code at https://github.com/trinodb/trino/blob/master/lib/trino-parquet/src/main/java/io/trino/parquet/predicate/TupleDomainParquetPredicate.java#L638
https://github.com/trinodb/trino/pull/15742 disables the Hadoop Parquet MemoryManager, which should fix this problem.