
Add support for parallel_tool_calls option when configuring Langchain::Assistant

Open andreibondarev opened this issue 1 year ago • 4 comments

Is your feature request related to a problem? Please describe. We'd like better control over tool calling when using Langchain::Assistant. Some of the supported LLMs (Anthropic and OpenAI) let you specify whether parallel tool calls ("multiple tool calls") are allowed. In some use cases the Assistant must call tools sequentially, so we should be able to toggle that option on the Assistant instance.

Describe the solution you'd like Similar to tool_choice, enable the developer to toggle it:

assistant = Langchain::Assistant.new(parallel_tool_calls: true/false, ...)
assistant.parallel_tool_calls = true/false
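As a rough sketch of what this could look like internally (hypothetical, not the actual langchainrb implementation), an OpenAI-style adapter might forward the option into the request parameters it builds, omitting the key when the caller never set it:

```ruby
# Hypothetical adapter helper: forwards parallel_tool_calls into the
# provider request params only when the caller explicitly set it.
def build_chat_params(tools:, parallel_tool_calls: nil)
  params = { tools: tools }
  # OpenAI's chat API accepts a top-level parallel_tool_calls boolean;
  # leave the key out entirely when the option was not provided.
  params[:parallel_tool_calls] = parallel_tool_calls unless parallel_tool_calls.nil?
  params
end

p build_chat_params(tools: [], parallel_tool_calls: false)
# => {:tools=>[], :parallel_tool_calls=>false}
```

Omitting the key (rather than sending a default) keeps the request identical to today's behavior for callers who never touch the option.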

Tasks

  • [x] Langchain::Assistant::LLM::Adapters::Anthropic support
  • [x] Langchain::Assistant::LLM::Adapters::OpenAI support
  • [ ] Langchain::Assistant::LLM::Adapters::GoogleGemini support (not currently supported)
  • [ ] Langchain::Assistant::LLM::Adapters::MistralAI support (not currently supported)
  • [ ] Langchain::Assistant::LLM::Adapters::Ollama support (not currently supported)

andreibondarev avatar Oct 04 '24 20:10 andreibondarev

It seems Google Gemini does support parallel function calling; see:

https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling#supported_models
https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling#parallel-samples

sergiobayona avatar Nov 12 '24 14:11 sergiobayona

It seems Google Gemini does support parallel function calling; see:

https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling#supported_models
https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling#parallel-samples

It does, but there's no way to configure whether functions can be called in parallel or not.
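One hypothetical way an adapter could handle this mismatch (a sketch, not langchainrb's actual code) is to accept the option, log that it has no effect for this provider, and pass the request parameters through unchanged:

```ruby
require "logger"

# Hypothetical adapter sketch: the provider can call tools in parallel
# but exposes no knob to disable it, so the option is logged and dropped.
class GeminiAdapterSketch
  def initialize(logger: Logger.new($stdout))
    @logger = logger
  end

  # Returns params unchanged; the option cannot be honored for Gemini.
  def apply_parallel_tool_calls(params, parallel_tool_calls)
    unless parallel_tool_calls.nil?
      @logger.debug("Gemini does not expose a parallel_tool_calls setting; ignoring")
    end
    params
  end
end
```

This keeps the Assistant interface uniform across providers while making the unsupported case visible only at debug level.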

andreibondarev avatar Nov 12 '24 18:11 andreibondarev

If we pass parallel_tool_calls = false, could we suppress the debug output that tells us, on every call, that the adapter doesn't support parallel tool calls?

ms-ati avatar Jan 10 '25 17:01 ms-ati

If we pass parallel_tool_calls = false, could we suppress the debug output that tells us, on every call, that the adapter doesn't support parallel tool calls?

Have you tried changing the logger level? Something like Langchain.logger.level = Logger::ERROR should work.
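For illustration, the same filtering behavior can be seen with Ruby's stdlib Logger (the Langchain.logger call above is the library's own logger; this standalone sketch just demonstrates what raising the level does):

```ruby
require "logger"
require "stringio"

# Demonstrate level filtering: once the level is ERROR, debug messages
# are dropped and only ERROR (and above) messages are written.
buffer = StringIO.new
logger = Logger.new(buffer)

logger.level = Logger::ERROR
logger.debug("adapter does not support parallel tool calls") # suppressed
logger.error("something actually went wrong")                # emitted

puts buffer.string
```

Note this silences all debug output, not just the parallel-tool-calls message.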

andreibondarev avatar Jan 13 '25 16:01 andreibondarev