Add support for parallel_tool_calls option when configuring Langchain::Assistant
Is your feature request related to a problem? Please describe.
We'd like better control of tool calling when using Langchain::Assistant. Some of the supported LLMs (Anthropic and OpenAI) let you control whether parallel tool calls ("multiple tool calls") can be made. In some use cases the Assistant must call tools sequentially, so we should be able to toggle that option on the Assistant instance.
Describe the solution you'd like
Similar to tool_choice, enable the developer to toggle:
assistant = Langchain::Assistant.new(parallel_tool_calls: true/false, ...)
assistant.parallel_tool_calls = true/false
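To ground the proposal, here is a minimal, self-contained sketch of how the flag could be translated into each provider's request parameters. The `AssistantSketch` class and its methods are hypothetical stand-ins, not the gem's actual internals; only the provider parameter names (`parallel_tool_calls` for OpenAI's Chat Completions API, `disable_parallel_tool_use` inside `tool_choice` for Anthropic's Messages API) come from the public APIs.

```ruby
# Hypothetical sketch: how a parallel_tool_calls flag might flow from the
# Assistant into provider-specific request parameters. Class and method
# names are illustrative, not the gem's real API.
class AssistantSketch
  attr_accessor :parallel_tool_calls

  def initialize(parallel_tool_calls: true)
    @parallel_tool_calls = parallel_tool_calls
  end

  # OpenAI accepts a top-level `parallel_tool_calls` boolean.
  def openai_params
    { parallel_tool_calls: @parallel_tool_calls }
  end

  # Anthropic instead takes `disable_parallel_tool_use` inside `tool_choice`,
  # so the flag has to be inverted.
  def anthropic_params
    { tool_choice: { type: "auto", disable_parallel_tool_use: !@parallel_tool_calls } }
  end
end

assistant = AssistantSketch.new(parallel_tool_calls: false)
p assistant.openai_params
p assistant.anthropic_params
```

The inversion for Anthropic is the kind of per-adapter translation the adapters would have to encapsulate so that callers only ever see the single `parallel_tool_calls` option.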
Tasks
- [x] Langchain::Assistant::LLM::Adapters::Anthropic support
- [x] Langchain::Assistant::LLM::Adapters::OpenAI support
- [ ] Langchain::Assistant::LLM::Adapters::GoogleGemini support (not currently supported)
- [ ] Langchain::Assistant::LLM::Adapters::MistralAI support (not currently supported)
- [ ] Langchain::Assistant::LLM::Adapters::Ollama support (not currently supported)
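The split in the task list above suggests a capability query on each adapter. Here is a hedged sketch of that shape; `supports_parallel_tool_calls?` is an illustrative method name, not the gem's actual API, and the class names only mirror the adapter list.

```ruby
# Hypothetical sketch: adapters advertise whether they support the option,
# defaulting to false so new adapters are opt-in.
module AdapterSketch
  class Base
    def supports_parallel_tool_calls?
      false
    end
  end

  # Checked tasks above: Anthropic and OpenAI support the option.
  class Anthropic < Base
    def supports_parallel_tool_calls?
      true
    end
  end

  class OpenAI < Base
    def supports_parallel_tool_calls?
      true
    end
  end

  # Unchecked tasks inherit the Base default (false).
  class GoogleGemini < Base; end
  class MistralAI < Base; end
  class Ollama < Base; end
end

puts AdapterSketch::OpenAI.new.supports_parallel_tool_calls?
puts AdapterSketch::Ollama.new.supports_parallel_tool_calls?
```

A default-false base method keeps the Assistant code generic: it can warn (or silently ignore the flag) for any adapter that answers false, without hard-coding adapter names.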
It seems Google Gemini does support parallel function calling, see:
https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling#supported_models https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling#parallel-samples
It does, but there's no way to configure whether functions can be called in parallel or not.
If we pass parallel_tool_calls = false, could we stop the debug output each time telling us that the adapter doesn't support parallel tool calls?
Have you tried changing the logger level? Something like `Langchain.logger.level = Logger::ERROR` should work.
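The suggestion above can be demonstrated with the stdlib `Logger` alone; `Langchain.logger` is assumed here to behave like a standard `::Logger`, so the same level assignment applies.

```ruby
require "logger"

# Stdlib sketch of the workaround: raising the logger level to ERROR
# suppresses debug/info/warn messages such as the repeated
# "adapter doesn't support parallel tool calls" output.
logger = Logger.new($stdout)
logger.level = Logger::ERROR

logger.warn("adapter does not support parallel_tool_calls")  # suppressed
logger.error("something went wrong")                         # still printed
```

Note this silences all warn-level output from the logger, not just the parallel-tool-calls message, so it is a blunt instrument compared to having the adapter skip the message when the flag is explicitly false.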