huleilei
Okay, thank you. I'll study it.
Hello, I used the following code to successfully access the Volcengine Doubao model. How did you use it?

```python
from __future__ import annotations
import daft
from daft import col
from...
```
Yes, I have tested it and it is supported. My code is:

```python
import daft
from daft import col
from daft.functions import llm_generate
import ray

ray.init()
daft.context.set_runner_ray("ray://127.0.0.1:10001")

data = {"query":...
```
Yes, the `prompt` function can indeed cover `llm_generate`. I have also tried using the `prompt` function to access OpenAI, and it works well.
@stayrascal Could you help me review this? Thanks.
Adding `daft.context.set_planning_config(enable_strict_filter_pushdown=True)` to your code will temporarily work around this issue. We are working on the real fix.
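A minimal sketch of where the workaround call fits, assuming a typical Daft script (the config call is from the comment above; the DataFrame and filter are illustrative, not the original reproducer):

```python
import daft
from daft import col

# Workaround suggested above: set the planning config before building
# the query, so the strict-filter-pushdown path is used during planning.
daft.context.set_planning_config(enable_strict_filter_pushdown=True)

# Illustrative data and filter standing in for the query that hit the bug.
df = daft.from_pydict({"x": [1, 2, 3]})
df = df.where(col("x") > 1)
df.show()
```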
@rchowell I am using the latest main branch from GitHub. I tried the latest code again today and the problem still persists. Thank you for helping me take a look.
@universalmind303 Thanks. I have two more questions, could you please help me answer them: 1. Will `bfloat16` precision be supported in the `daft.DataType.tensor` type in the future? 2. Represent `torch.tensor(dtype=torch.bfloat16)` data...
Thank you for your explanation. The `daft.DataType.Python` type provided by Daft is indeed very convenient. However, regarding tensor types, I believe there are still some differences between the `daft.DataType.tensor` in...
@kevinzwang Yes, this issue is currently a blocker for our business progress, as it results in low resource utilization. I am very keen to participate in the resolution and improvement of...