
[BUG] AttributeError: 'NoneType' object has no attribute 'startswith'

Open martin-wan01 opened this issue 1 year ago • 4 comments

Description

Error

File ~/.pyenv/versions/3.11.4/envs/crewai/lib/python3.11/site-packages/tiktoken/model.py:86, in encoding_name_for_model(model_name)
     81 else:
     82     # Check if the model matches a known prefix
     83     # Prefix matching avoids needing library updates for every model version release
     84     # Note that this can match on non-existent models (e.g., gpt-3.5-turbo-FAKE)
     85     for model_prefix, model_encoding_name in MODEL_PREFIX_TO_ENCODING.items():
---> 86         if model_name.startswith(model_prefix):
     87             return model_encoding_name
     89 if encoding_name is None:

AttributeError: 'NoneType' object has no attribute 'startswith'
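For reference, here is a minimal stdlib-only sketch of the failing path. The mapping below is an abbreviated stand-in for tiktoken's real MODEL_PREFIX_TO_ENCODING table; the point is that when no model name reaches tiktoken, the prefix-matching loop calls .startswith on None:

```python
# Abbreviated stand-in for tiktoken's MODEL_PREFIX_TO_ENCODING table.
MODEL_PREFIX_TO_ENCODING = {
    "gpt-4-": "cl100k_base",
    "gpt-3.5-turbo-": "cl100k_base",
}

def encoding_name_for_model(model_name):
    # Prefix matching, as in tiktoken/model.py.
    for model_prefix, encoding_name in MODEL_PREFIX_TO_ENCODING.items():
        if model_name.startswith(model_prefix):  # fails if model_name is None
            return encoding_name
    raise KeyError(model_name)

try:
    encoding_name_for_model(None)
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'startswith'
```

This is why the fix below (passing a model name explicitly) makes the error go away.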

Code

import os

from dotenv import load_dotenv
from crewai import Agent
from langchain_openai import AzureChatOpenAI

load_dotenv()

azure_llm = AzureChatOpenAI(
    azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
    api_key=os.environ.get("AZURE_OPENAI_KEY"),
    openai_api_version=os.environ.get("AZURE_OPENAI_VERSION"),
    azure_deployment=os.environ.get("AZURE_OPENAI_DEPLOYMENT")
)

azure_agent = Agent(
  role='Example Agent',
  goal='Demonstrate custom LLM configuration',
  backstory='A diligent explorer of GitHub docs.',
  llm=azure_llm,
  verbose=True,
)

Steps to Reproduce

None

Expected behavior

None

Screenshots/Code snippets

None

Operating System

macOS Sonoma

Python Version

3.11

crewAI Version

0.51.1

crewAI Tools Version

0.8.3

Virtual Environment

Venv

Evidence

(Two screenshots attached: 2024-08-22 14:45:33 and 14:45:43.)

Possible Solution

None

Additional context

None

martin-wan01 avatar Aug 22 '24 06:08 martin-wan01

Thanks to @farzad528 in this issue: https://github.com/crewAIInc/crewAI/issues/1235

The docs and example code need to be updated: model should be used when defining AzureChatOpenAI, not azure_deployment.

import os

azure_llm = AzureChatOpenAI(
    azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
    api_key=os.environ.get("AZURE_OPENAI_KEY"),
    openai_api_version=os.environ.get("AZURE_OPENAI_VERSION"),
    model=os.environ.get("AZURE_OPENAI_DEPLOYMENT")
)
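One way None ends up as the model name in the first place: os.environ.get returns None silently when a variable is unset, so a missing AZURE_OPENAI_DEPLOYMENT produces no error at construction time and only surfaces later inside tiktoken. A quick stdlib-only illustration:

```python
import os

# Simulate the env var being absent; .get() silently yields None,
# which then flows downstream as the model name.
os.environ.pop("AZURE_OPENAI_DEPLOYMENT", None)
model = os.environ.get("AZURE_OPENAI_DEPLOYMENT")
print(model)  # None
```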

martin-wan01 avatar Aug 22 '24 06:08 martin-wan01

@martin-wan01 I was also seeing that issue, but I am really struggling to connect to the model on Azure AI. I can connect using their example (from azure.ai.inference import ChatCompletionsClient), but when using it as an LLM through CrewAI I keep getting the error: Task output: Agent stopped due to iteration limit or time limit.

I was wondering if you've maybe got a fix for this?

For the model I have tried the variants below with the same result:

  • llama3 (for my llama3.1 model)
  • llama3.1 (for my llama3.1 model)
  • <NAME_OF_LLAMA_3.1_MODEL> (for my phi3.5 custom model)

See below for my code example:


from langchain_openai import AzureChatOpenAI
from crewai import Agent, Task, Crew
import os

azure_llm = AzureChatOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    azure_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version=os.environ["OPENAI_API_VERSION"],
    model="llama3.1"
)

general_agent = Agent(
    role="Math Professor",
    goal="Provide the solution to the students that are asking mathematical questions and give them the answer.",
    backstory="You are an excellent math professor that likes to solve math questions in a way that everyone can understand your solution.",
    allow_delegation=False,
    verbose=True,
    llm=azure_llm,
)

task = Task(
    description="What is 3 + 5?",
    agent=general_agent,
    expected_output="A numerical answer.",
)

crew = Crew(
    agents=[general_agent],
    tasks=[task],
    verbose=True,
)

result = crew.kickoff()

print(result)

semics-tech avatar Aug 23 '24 15:08 semics-tech

It looks like my issue was that I was using AzureML through the new Azure AI Studio and trying to connect to a model there. I got it working using AzureMLChatOnlineEndpoint from langchain. See the example in the link below:

https://python.langchain.com/v0.1/docs/integrations/chat/azureml_chat_endpoint/

semics-tech avatar Aug 30 '24 06:08 semics-tech

I modified the code and it worked for me: I added the model_name="gpt-4" parameter at the end. See the code below:

llm_azure = AzureChatOpenAI(
    api_version="",
    azure_deployment="",
    azure_endpoint=os.getenv(""),
    api_key=os.getenv(""),
    temperature=0,
    max_tokens=None,
    timeout=None,
    max_retries=2,
    streaming=True,
    model_name="gpt-4",
)

dheerajvarma24 avatar Sep 07 '24 12:09 dheerajvarma24

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions[bot] avatar Oct 08 '24 12:10 github-actions[bot]

This issue was closed because it has been stalled for 5 days with no activity.

github-actions[bot] avatar Oct 14 '24 12:10 github-actions[bot]

Please reopen: https://python.langchain.com/docs/integrations/chat/azure_chat_openai/ is still wrong and causes this issue.

robtaylor avatar Oct 23 '24 12:10 robtaylor

To avoid this issue, you have to set the name of the model.

import os

from langchain_openai import AzureChatOpenAI

model = "gpt-4o"
max_retries = 2
AzureChatOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ.get("AZURE_OPENAI_KEY"),
    azure_deployment=model,
    api_version=os.environ["AZURE_OPENAI_API_VERSION"],
    max_retries=max_retries,
    model=model,
)

Without model=model it doesn't work; with it, it does.
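To catch this misconfiguration before it surfaces as a confusing AttributeError deep inside tiktoken, one option is to validate the required environment variables up front. This is a stdlib-only sketch, not part of the crewAI or langchain API; the variable names are simply the ones used in this thread:

```python
import os

REQUIRED_VARS = [
    "AZURE_OPENAI_ENDPOINT",
    "AZURE_OPENAI_KEY",
    "AZURE_OPENAI_API_VERSION",
    "AZURE_OPENAI_DEPLOYMENT",
]

def require_env(names):
    """Return the named env vars as a dict, raising if any are unset or empty."""
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise RuntimeError("Missing environment variables: " + ", ".join(missing))
    return {n: os.environ[n] for n in names}

# Example usage (hypothetical wiring):
# cfg = require_env(REQUIRED_VARS)
# azure_llm = AzureChatOpenAI(..., model=cfg["AZURE_OPENAI_DEPLOYMENT"])
```

Failing fast here turns a silent None model name into an explicit, actionable error message.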

jerometerrier avatar May 23 '25 14:05 jerometerrier