
Can't connect with LM Studio (0.1.14 still defaulting to OpenAI API, 401 error)

Open DannyVee-stack opened this issue 1 year ago • 6 comments

Great project! I'm really excited to see where this goes.

I want to be able to connect with LM Studio's APIs, so I put this script together quickly:

local_model_api_tool.py

```python
from langchain.tools import tool
from openai import OpenAI

# Define the tool
@tool
def local_model_api_tool(input_content: str) -> str:
    """
    This tool interfaces with a local model API using the OpenAI client.
    The input is a string, which is sent to the API, and the response is returned.
    """
    # Configure the OpenAI client to use the local server
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="NULL")

    # Send the request to the local model
    response = client.chat.completions.create(
        model="local-model",  # Model field (unused in this setup)
        messages=[
            {"role": "system", "content": "Always answer in rhymes."},
            {"role": "user", "content": input_content}
        ],
        temperature=0.7,
    )

    # Extract the message content (a string) from the response
    return response.choices[0].message.content
```

Then put it in as a tool for the agent:

```python
import os
from crewai import Agent, Task, Crew, Process
from local_model_api_tool import local_model_api_tool  # Import the custom tool

os.environ["OPENAI_API_KEY"] = "NULL"

# Define your agents with roles and goals
researcher = Agent(
    role='Researcher',
    goal='Discover new insights',
    backstory="You're a world-class researcher working at a major data science company",
    verbose=True,
    allow_delegation=False,
    tools=[local_model_api_tool]  # Tool assignment
)
```
Still getting the 401 error though:

```
openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: NULL. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
```


Any ideas on how to fix this? It'd be great to be able to test many different models through LM Studio, and even assign a specialized model to different agents.

Thank you!

DannyVee-stack avatar Jan 02 '24 02:01 DannyVee-stack

I got it working with the code below, however tools seem to be problematic for the agents, I will update if I find a solution:

```python
from crewai import Agent
from langchain.chat_models import ChatOpenAI as OpenAI

# Create a chat client pointed at the local LM Studio server
local_openai_client = OpenAI(
    base_url="http://localhost:3008/v1", api_key="not-needed", temperature=0.4
)
```

I am using this to streamline the agent creation but you can use whatever you are comfortable with:


```python
def create_agent(
    role,
    goal,
    backstory,
    verbose=True,
    allow_delegation=False,
    tools=default_tools,  # default_tools is a list defined elsewhere in my setup
    llm=local_openai_client,
):
    """
    Create an Agent with default settings. Override as needed for specific agents.
    """
    return Agent(
        role=role,
        goal=goal,
        backstory=backstory,
        verbose=verbose,
        allow_delegation=allow_delegation,
        tools=tools,
        llm=llm,
    )
```

KenichiQaz avatar Jan 02 '24 04:01 KenichiQaz

That worked like a charm! Very much appreciated.

The next question would be whether we could set it up to have multiple distinct clients for different models. That way we could have very specialized agents.

DannyVee-stack avatar Jan 02 '24 05:01 DannyVee-stack

Yup, smaller models seem to have trouble with tools; OpenHermes was the most consistent in my tests so far.

joaomdmoura avatar Jan 03 '24 16:01 joaomdmoura

I'll keep this open so we can update the README to mention LM Studio.

joaomdmoura avatar Jan 03 '24 16:01 joaomdmoura

@KenichiQaz

```
lib/python3.10/site-packages/langchain_core/_api/deprecation.py:189: LangChainDeprecationWarning: The class ChatOpenAI was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use langchain_openai.ChatOpenAI instead.
```

scenaristeur avatar Jan 06 '24 14:01 scenaristeur

I have it working with LM Studio and dolphin 2.2 but needed to include some additional instructions in the task to help the model respond in the correct format to use tools. It is still a bit hit or miss, but working well enough for now.

It's verbose and could use improvement, but I added this to the task description and it helped to get delegation working correctly with local models:

To use a tool (as described in the instructions above), please use the exact following format:

        Thought: Do I need to use a tool? Yes
        Action: [Delegate work to co-worker, Ask question to co-worker]
        Action Input: [coworker name]|['question' or 'task']|[information about the task or question]
        Observation: [full response from the co-worker]
  

For example, to ask the Senior Software Engineer to check the code for best practices:

        Thought: Do I need to use a tool? Yes
        Action: Ask question to co-worker
        Action Input: Senior Software Engineer|question|Check the code for best practices
        Observation:

You may continue to use tools as needed by using the above format. Be sure that the Action Input is formatted correctly with all three values separated by pipes. The three values need to be the name of the co-worker, a single term (either question or task), and the information about the task or question.

Example: "Senior Software Engineer|question|Check the code for best practices"

It is VITAL TO YOUR JOB to use this format when using tools. Not including all three values in your Action Input will cause your job to FAIL!!

DO NOT include "Final Answer" in your response until you are done using tools.
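Since smaller models often get that pipe-separated format wrong, it can also help to validate the Action Input programmatically before delegating. This is just an illustrative helper, not part of crewAI; the function name is made up:

```python
def validate_action_input(action_input: str) -> tuple[str, str, str]:
    """Check a pipe-separated Action Input of the form
    'coworker|question-or-task|details' and return its three parts."""
    parts = [p.strip() for p in action_input.split("|")]
    if len(parts) != 3:
        raise ValueError(
            f"Expected 3 pipe-separated values, got {len(parts)}: {action_input!r}"
        )
    coworker, kind, details = parts
    if kind not in ("question", "task"):
        raise ValueError(f"Second value must be 'question' or 'task', got {kind!r}")
    return coworker, kind, details

# The example from the comment above parses cleanly:
print(validate_action_input(
    "Senior Software Engineer|question|Check the code for best practices"
))
```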

itlackey avatar Jan 06 '24 21:01 itlackey

@itlackey is actually putting together a PR adding docs around this to make it easier on everyone <3

joaomdmoura avatar Jan 21 '24 03:01 joaomdmoura

This is a duplicate of https://github.com/joaomdmoura/crewAI/issues/74, so I'm closing this one.

joaomdmoura avatar Jan 21 '24 03:01 joaomdmoura