BInwei Yang
It may depend on the native library. In theory we can implement a memory pool, then every native engine uses this pool to allocate large memory blocks. The pool register...
We already implemented the memory allocation interface. Every memory allocation needs to check whether Spark has enough memory for the task thread. Now both Arrow and Velox memory are allocated...
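The check-before-allocate idea above can be sketched as a small C++ allocator wrapper. This is a hypothetical illustration, not Gluten's actual API: the class names (`ReservationListener`, `CheckedAllocator`) and the fixed limit are assumptions; in practice the reservation call would go through JNI to Spark's task memory manager.

```cpp
#include <cstdint>
#include <cstdlib>
#include <new>

// Hypothetical listener standing in for Spark's per-task memory accounting.
// In a real integration, reserve() would call into Spark via JNI.
class ReservationListener {
 public:
  explicit ReservationListener(int64_t limit) : limit_(limit) {}

  // Returns true if the task may take `bytes` more memory.
  bool reserve(int64_t bytes) {
    if (used_ + bytes > limit_) return false;
    used_ += bytes;
    return true;
  }

  void release(int64_t bytes) { used_ -= bytes; }
  int64_t used() const { return used_; }

 private:
  int64_t limit_;
  int64_t used_ = 0;
};

// Allocator shared by the native engines (Arrow, Velox, ...): every
// allocation is first checked against Spark's budget, then performed.
class CheckedAllocator {
 public:
  explicit CheckedAllocator(ReservationListener* listener)
      : listener_(listener) {}

  void* allocate(int64_t bytes) {
    if (!listener_->reserve(bytes)) {
      // Spark would normally try to spill other consumers before failing.
      throw std::bad_alloc();
    }
    return std::malloc(static_cast<size_t>(bytes));
  }

  void deallocate(void* p, int64_t bytes) {
    std::free(p);
    listener_->release(bytes);
  }

 private:
  ReservationListener* listener_;
};
```

With this shape, both engines share one accounting path, so Spark sees a single consistent memory figure per task instead of two untracked native heaps.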
We use this one: https://github.com/databricks/tpch-dbgen/commits/master
Velox already has libhdfs support, but the problem is that we either need to bundle all dependent libraries in the jar, or we need to manually install them on each...
@zhztheplayer Do we still need the PR?
@KevinyhZou FYI, Velox redesigned their Parquet reader and we have added it to Gluten. You can test the Parquet file format now.
Is it solved after removing the child-prc proxy?
Should we modify the doc here? https://github.com/oap-project/gluten/blob/main/docs/Velox.md. What's the apt command you used?
Parquet is supported now. You may try Parquet files directly.