
Azure OpenAI Phi-4-multimodal-instruct: 'auto' tool choice error when using runTools()

Open · Junaid-Nazir-828 opened this issue 9 months ago · 1 comment

I recently switched from GPT-4o to Phi-4-multimodal-instruct in my Next.js application on Azure AI services, but I'm encountering the following error:

BadRequestError: 400 {"object":"error","message":"\"auto\" tool choice requires --enable-auto-tool-choice and --tool-call-parser to be set","type":"BadRequestError","param":null,"code":400}

The error occurs when calling the runTools() method, which was working perfectly with GPT-4o. Here's my implementation:

OpenAI Instance Configuration:

export const OpenAIInstance = () => {
  try {
    if (
      !process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_API_KEY ||
      !process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_API_VERSION ||
      !process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_INSTANCE_NAME
    ) {
      throw new Error(
        "Missing required environment variables for OpenAI instance."
      );
    }
    
    const azureOpenAI = new AzureOpenAI({
      apiKey: process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_API_KEY,
      apiVersion: process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_API_VERSION,
      baseURL: `https://${process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_INSTANCE_NAME}.openai.azure.com/models/chat/completions?api-version=${process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_API_VERSION}`
    });

    return azureOpenAI;
  } catch (error) {
    console.error(
      "Error initializing OpenAI instance:",
      (error as Error).message
    );
    throw error;
  }
};

Chat API Extension Implementation:

export const ChatApiExtensions = async (props: {
  chatThread: ChatThreadModel;
  userMessage: string;
  history: ChatCompletionMessageParam[];
  extensions: RunnableToolFunction<any>[];
  signal: AbortSignal;
}): Promise<ChatCompletionStreamingRunner> => {
  const { userMessage, history, signal, chatThread, extensions } = props;
  const openAI = OpenAIInstance();
  
  const model = process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_MODEL_NAME;
  if (!model) {
    throw new Error("Model deployment name is not configured");
  }

  const systemMessage = await extensionsSystemMessage(chatThread);
  try {
    return await openAI.beta.chat.completions.runTools(
      {
        model: model,
        stream: true,
        messages: [
          {
            role: "system",
            content: chatThread.personaMessage + "\n" + systemMessage,
          },
          ...history,
          {
            role: "user",
            content: userMessage,
          },
        ],
        tools: extensions,
        temperature: 0.7,
        max_tokens: 4000,
      },
      { 
        signal: signal,
      }
    );
  } catch (error) {
    console.error("Error in ChatApiExtensions:", error);
    throw error;
  }
};

Based on the error message, it seems Phi-4-multimodal-instruct requires additional parameters for tool usage that weren't needed with GPT-4o. I've researched the Azure documentation but haven't found specifics about these flags (--enable-auto-tool-choice and --tool-call-parser).

Has anyone successfully used tools with Phi-4-multimodal-instruct on Azure? How can I modify my code to make this work?

Environment:

  • Next.js (server components)
  • Azure OpenAI service
  • OpenAI Node.js SDK

Junaid-Nazir-828 · Mar 23 '25, 05:03

Hi @Junaid-Nazir-828, you need to set a system prompt for Phi-4-mm-instruct; see https://github.com/microsoft/PhiCookBook/issues/277.
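
The gist of that issue is that the tool definitions themselves should be included in the system prompt so the model knows what it can call. The sketch below shows one way to build such a prompt. It is only a sketch: the <|tool|> / <|/tool|> delimiters are an assumption based on the Phi-4 family's documented function-calling prompt format, and buildToolSystemPrompt is a hypothetical helper, so check issue #277 for the exact format your deployment expects.

// Sketch: embed the tool definitions in the system prompt, per issue #277.
// The <|tool|> ... <|/tool|> wrapper is an assumption taken from the Phi-4
// model cards' function-calling format; verify the exact delimiters.
interface ToolSpec {
  name: string;
  description: string;
  parameters: Record<string, unknown>;
}

function buildToolSystemPrompt(basePrompt: string, tools: ToolSpec[]): string {
  // Serialize the tool schemas as JSON so the model can see names and arguments.
  const toolJson = JSON.stringify(tools);
  return `${basePrompt}<|tool|>${toolJson}<|/tool|>`;
}

// Example usage with a weather tool like the one in the example below.
const systemPrompt = buildToolSystemPrompt(
  "You are a helpful assistant with access to the following tools.",
  [
    {
      name: "get_weather",
      description: "Get the current weather in a given location",
      parameters: {
        type: "object",
        properties: { location: { type: "string" } },
        required: ["location"],
      },
    },
  ]
);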

TypeScript Example: Using Tools with Phi-4-MM-Instruct

Here's a TypeScript example showing how to use function calling (tools) with Microsoft's Phi-4-MM-Instruct model via Azure OpenAI:

import { OpenAIClient, AzureKeyCredential } from "@azure/openai";

// Tool definitions
const tools = [
  {
    type: "function",
    function: {
      name: "get_weather",
      description: "Get the current weather in a given location",
      parameters: {
        type: "object",
        properties: {
          location: {
            type: "string",
            description: "The city and state, e.g., San Francisco, CA",
          },
          unit: {
            type: "string",
            enum: ["celsius", "fahrenheit"],
            description: "The unit of temperature",
          },
        },
        required: ["location"],
      },
    },
  },
  {
    type: "function",
    function: {
      name: "search_products",
      description: "Search for products in a catalog",
      parameters: {
        type: "object",
        properties: {
          query: {
            type: "string",
            description: "Search query for products",
          },
          category: {
            type: "string",
            description: "Product category filter",
          },
          max_results: {
            type: "integer",
            description: "Maximum number of results to return",
          },
        },
        required: ["query"],
      },
    },
  },
];

// Implement the actual tool functions
async function getWeather(args: { location: string; unit?: string }): Promise<string> {
  // In a real implementation, you would call a weather API here
  const { location, unit = "celsius" } = args;
  console.log(`Getting weather for ${location} in ${unit}`);
  return JSON.stringify({
    location,
    temperature: 22,
    unit,
    condition: "Sunny",
  });
}

async function searchProducts(args: { query: string; category?: string; max_results?: number }): Promise<string> {
  // In a real implementation, you would query a product database
  const { query, category, max_results = 5 } = args;
  console.log(`Searching for ${query} in category ${category || "all"}, limit ${max_results}`);
  return JSON.stringify({
    results: [
      { id: "1", name: "Example Product 1", price: 29.99 },
      { id: "2", name: "Example Product 2", price: 59.99 },
    ],
    total_results: 2,
  });
}

// Set up the Azure OpenAI client
const endpoint = process.env.AZURE_OPENAI_ENDPOINT || "";
const apiKey = process.env.AZURE_OPENAI_API_KEY || "";
const deploymentName = "phi-4-mm-instruct"; // Your deployment name

const client = new OpenAIClient(endpoint, new AzureKeyCredential(apiKey));

async function runConversationWithTools(userMessage: string) {
  try {
    // Step 1: Send the conversation and available tools to the model
    const response = await client.getChatCompletions(
      deploymentName,
      [
        { role: "system", content: "You are a helpful assistant that can use tools." },
        { role: "user", content: userMessage },
      ],
      { tools }
    );

    const message = response.choices[0].message;
    console.log("Response:", message);

    // Step 2: Check if the model wants to call a tool
    if (message.toolCalls && message.toolCalls.length > 0) {
      const toolResponses = [];
      
      for (const toolCall of message.toolCalls) {
        const functionName = toolCall.function.name;
        const functionArgs = JSON.parse(toolCall.function.arguments);
        
        console.log(`Model called function: ${functionName} with args: ${JSON.stringify(functionArgs)}`);
        
        let functionResponse;
        if (functionName === "get_weather") {
          functionResponse = await getWeather(functionArgs);
        } else if (functionName === "search_products") {
          functionResponse = await searchProducts(functionArgs);
        } else {
          functionResponse = "Function not implemented";
        }
        
        // Note: the @azure/openai SDK expects camelCase property names (toolCallId),
        // which it serializes to tool_call_id on the wire; the tool message does not
        // take a separate name field.
        toolResponses.push({
          toolCallId: toolCall.id,
          role: "tool",
          content: functionResponse,
        });
      }
      
      // Step 3: Send the function response back to the model
      const secondResponse = await client.getChatCompletions(
        deploymentName,
        [
          { role: "system", content: "You are a helpful assistant that can use tools." },
          { role: "user", content: userMessage },
          message,
          ...toolResponses,
        ]
      );
      
      console.log("Final response:", secondResponse.choices[0].message.content);
      return secondResponse.choices[0].message.content;
    } else {
      return message.content;
    }
  } catch (error) {
    console.error("Error:", error);
    throw error;
  }
}

// Example usage
async function main() {
  // Example 1: Weather query that should trigger the get_weather tool
  const weatherResponse = await runConversationWithTools(
    "What's the weather like in Seattle?"
  );
  console.log("Weather Example Result:", weatherResponse);
  
  // Example 2: Product search query that should trigger the search_products tool
  const searchResponse = await runConversationWithTools(
    "I'm looking for headphones under $100"
  );
  console.log("Product Search Example Result:", searchResponse);
}

main().catch(console.error);

Create a .env file with your Azure OpenAI credentials:

AZURE_OPENAI_ENDPOINT=your_endpoint_here
AZURE_OPENAI_API_KEY=your_api_key_here
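
One more note on the original error: --enable-auto-tool-choice and --tool-call-parser are vLLM serving flags, so they are set on the server that hosts the model rather than in client code, and on a managed Azure deployment you generally cannot change them. If you want to keep the openai Node SDK and runTools() from the question, a possible (untested) workaround is to pin tool_choice to a named function instead of relying on the default "auto", since vLLM supports named tool choice without those flags. A minimal sketch, reusing OpenAIInstance and the extensions list from the question; get_weather is just an illustrative tool name:

// Sketch only, not tested against a Phi-4 deployment: keep runTools() but force a
// named tool instead of the default "auto" tool choice. This is only useful when
// you already know which tool should run for a given request.
import type { RunnableToolFunction } from "openai/lib/RunnableFunction";

export const runWithNamedTool = async (
  userMessage: string,
  extensions: RunnableToolFunction<any>[]
) => {
  // OpenAIInstance() is the configuration helper from the question above.
  const openAI = OpenAIInstance();

  return openAI.beta.chat.completions.runTools({
    model: process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_MODEL_NAME!,
    stream: true,
    messages: [
      { role: "system", content: "You are a helpful assistant that can use tools." },
      { role: "user", content: userMessage },
    ],
    tools: extensions,
    // Named tool choice avoids the "auto" path that requires the server-side flags.
    tool_choice: { type: "function", function: { name: "get_weather" } },
  });
};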

leestott · Mar 25 '25, 14:03