Andy Grove

Results: 657 comments of Andy Grove

This could also test whether each expression was actually accelerated by Comet or fell back to Spark.

Thanks @palaska. This looks great. However, I wasn't able to run any queries with these changes. I built with `cargo build --release` and started the scheduler:

```
$ ./target/release/ballista-scheduler 2024-09-25T12:57:29.760309Z...
```

The scheduler receives this request from the executor:

```
Received poll_work request for ExecutorRegistration { id: "b81acaa8-2fd8-400d-aa4c-3faea28b60ed", port: 50051, grpc_port: 50052, specification: Some(ExecutorSpecification { resources: [ExecutorResource { resource: Some(TaskSlots(8)) }]...
```

I managed to get it working. In the scheduler's `poll_work` method, you need to change

```
let remote_addr = request.remote_addr();
```

to

```
let remote_addr = request
    .extensions()
    .get::()...
```
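The `get::<T>()` call above works because the request's extensions are a map keyed by type, so the caller has to name the concrete stored type to retrieve a value. As a minimal sketch of that pattern (this is a hypothetical stand-in using only the standard library, not tonic's actual `Extensions` type; `ConnectInfo` here is an assumed illustrative wrapper):

```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;
use std::net::SocketAddr;

// Hypothetical stand-in for a request's type-keyed extensions map.
#[derive(Default)]
struct Extensions {
    map: HashMap<TypeId, Box<dyn Any>>,
}

impl Extensions {
    fn insert<T: Any>(&mut self, value: T) {
        self.map.insert(TypeId::of::<T>(), Box::new(value));
    }

    // Returns None when nothing of type T was stored, which is why the
    // caller must handle a possibly missing remote address.
    fn get<T: Any>(&self) -> Option<&T> {
        self.map
            .get(&TypeId::of::<T>())
            .and_then(|boxed| boxed.downcast_ref::<T>())
    }
}

// Hypothetical connection-info wrapper, analogous to what a transport
// layer might attach to each incoming request.
struct ConnectInfo(SocketAddr);

fn main() {
    let mut ext = Extensions::default();
    ext.insert(ConnectInfo("127.0.0.1:50051".parse().unwrap()));

    // The lookup must spell out the stored type explicitly.
    let remote_addr = ext.get::<ConnectInfo>().map(|c| c.0);
    assert_eq!(remote_addr.map(|a| a.port()), Some(50051));
    println!("remote_addr = {:?}", remote_addr);
}
```

The design point is that a type-map lets middleware attach arbitrary per-request data without changing the request struct, at the cost of an `Option` on every lookup.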

I see that you pushed that fix while I was typing that!

I have been testing with TPC-H, and many queries work. However, some queries, such as queries 2, 7, and 8, never complete, and I do not see any errors logged....

> I've just looked at this; I think the never-completing queries simply try to return too many rows and take too long. That is because the logical optimizer...

Would it make sense just to upgrade to DF 41 in this PR?

I'm fine with pinning to a revision of DataFusion once your PR is merged over there.

This upgrade has now happened in another PR. Thanks for starting this, @palaska!