> After verification, it turned out that the Tesla P100 GPU is too old, which caused a compatibility problem.

How did you end up solving this? I have the same GPU.
Hello! I have the same problem:

```
root@root:/opt/workspace/ollama$ ollama run deepseek-r1:14b
pulling manifest
pulling 6e9f90f02bb3... 100% ▕███████████████████████████████▏ 9.0 GB
pulling 369ca498f347... 100% ▕███████████████████████████████▏  387 B
pulling 6e4c38e1172f... 100% ▕███████████████████████████████▏ 1.1 KB
...
```
> What I actually want is for the model to still support function calling through openai_api_server.py after fine-tuning.

Have you solved this problem? I am using llama-factory to fine-tune on a 4090 with bf16 precision, and I get the same error.
> After fine-tuning, my personal view is this: you take an open-source model that is an all-rounder with a radar-chart profile like 5+5+5+7+6+4, and after tuning it usually becomes a 7+2+3+5 all-rounder; a well-tuned one can reach 10+2+3+5.

Our problem is not that fine-tuning has no effect. Our problem is that after fine-tuning, as soon as the Agent triggers a function call, an error is raised.
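A minimal sketch of the setup being described in the two comments above: the fine-tuned model is served behind an OpenAI-compatible endpoint (such as openai_api_server.py) and the Agent sends a chat request carrying a tool schema, which is the point where the error reportedly appears. The base_url, model name, and get_weather tool below are assumptions for illustration only, not taken from the thread.

```python
# Sketch: trigger a function call against an OpenAI-compatible server.
# Assumes the fine-tuned model is served locally; adjust base_url/model.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="my-finetuned-model",  # placeholder model name
    messages=[{"role": "user", "content": "What's the weather in Shanghai?"}],
    tools=tools,
)

# If the server and the fine-tuned chat template handle tools correctly,
# the reply should contain tool_calls rather than raising an error.
print(response.choices[0].message.tool_calls)
```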
That's really great. When will these RAG and Agent frameworks be combined with Langchain4j in an example? I have been following langchain4j and langgraph4j for a long time.
Is Java 8 still supported? Currently, many projects still use Java 8.
When will this be completed?
Is this feature not included in subsequent releases? I think this feature is very useful together with ASR.
This is great. I'm looking forward to the launch of this feature. How long will it take for it to become available?
I'm really sorry. I really want to contribute to langchain4j, but the project I'm currently responsible for is too demanding, so I don't have much personal free time...