[BUG] AttributeError: 'NoneType' object has no attribute 'startswith'
Description
Error
File ~/.pyenv/versions/3.11.4/envs/crewai/lib/python3.11/site-packages/tiktoken/model.py:86, in encoding_name_for_model(model_name)
     81     else:
     82         # Check if the model matches a known prefix
     83         # Prefix matching avoids needing library updates for every model version release
     84         # Note that this can match on non-existent models (e.g., gpt-3.5-turbo-FAKE)
     85         for model_prefix, model_encoding_name in MODEL_PREFIX_TO_ENCODING.items():
---> 86             if model_name.startswith(model_prefix):
     87                 return model_encoding_name
     89     if encoding_name is None:
AttributeError: 'NoneType' object has no attribute 'startswith'
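The traceback is tiktoken being handed None as the model name. A minimal sketch of the same failure (a hypothetical repro, not crewAI's actual call site), assuming no model name reaches the token counter when only azure_deployment is set:

import tiktoken

model_name = None  # stand-in for the value passed when no model name can be resolved
tiktoken.encoding_for_model(model_name)  # AttributeError: 'NoneType' object has no attribute 'startswith'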
Code
import os

from dotenv import load_dotenv
from crewai import Agent
from langchain_openai import AzureChatOpenAI

load_dotenv()

azure_llm = AzureChatOpenAI(
    azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
    api_key=os.environ.get("AZURE_OPENAI_KEY"),
    openai_api_version=os.environ.get("AZURE_OPENAI_VERSION"),
    azure_deployment=os.environ.get("AZURE_OPENAI_DEPLOYMENT"),
)

azure_agent = Agent(
    role='Example Agent',
    goal='Demonstrate custom LLM configuration',
    backstory='A diligent explorer of GitHub docs.',
    llm=azure_llm,
    verbose=True,
)
Steps to Reproduce
None
Expected behavior
None
Screenshots/Code snippets
None
Operating System
macOS Sonoma
Python Version
3.11
crewAI Version
0.51.1
crewAI Tools Version
0.8.3
Virtual Environment
Venv
Evidence
Possible Solution
None
Additional context
None
Thanks to @farzad528 in this post: https://github.com/crewAIInc/crewAI/issues/1235
The documentation and example code need to be updated: model should be used to define AzureChatOpenAI, not azure_deployment.
import os

from langchain_openai import AzureChatOpenAI

azure_llm = AzureChatOpenAI(
    azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
    api_key=os.environ.get("AZURE_OPENAI_KEY"),
    openai_api_version=os.environ.get("AZURE_OPENAI_VERSION"),
    model=os.environ.get("AZURE_OPENAI_DEPLOYMENT"),
)
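A quick sanity check after constructing the LLM (this assumes langchain_openai's AzureChatOpenAI stores the value on model_name, and that your deployment is named after a real OpenAI model so tiktoken recognises it):

import tiktoken

print(azure_llm.model_name)  # should print your model name, not None
tiktoken.encoding_for_model(azure_llm.model_name)  # no AttributeError once a name is set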
@martin-wan01 I was also seeing that issue, but I am really struggling to connect to the model on Azure AI. I've been able to connect using their example with from azure.ai.inference import ChatCompletionsClient, but I keep getting the error "Task output: Agent stopped due to iteration limit or time limit." when using it as an LLM through CrewAI.
I was wondering if you've maybe got a fix for this?
For the model I have tried the variants below with the same result:
- llama3 (for my llama3.1 model)
- llama3.1 (for my llama3.1 model)
- <NAME_OF_LLAMA_3.1_MODEL> (for my phi3.5 custom model)
See below for my code example:
from langchain_openai import AzureChatOpenAI
from crewai import Agent, Task, Crew
import os
azure_llm = AzureChatOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version=os.environ["OPENAI_API_VERSION"],
    model="llama3.1",
)

general_agent = Agent(
    role="Math Professor",
    goal="Provide the solution to the students that are asking mathematical questions and give them the answer.",
    backstory="You are an excellent math professor that likes to solve math questions in a way that everyone can understand your solution.",
    allow_delegation=False,
    verbose=True,
    llm=azure_llm,
)

task = Task(
    description="what is 3 + 5",
    agent=general_agent,
    expected_output="A numerical answer.",
)

crew = Crew(
    agents=[general_agent],
    tasks=[task],
    verbose=True,
)
result = crew.kickoff()
print(result)
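For reference, the direct connection that does work for me uses the Azure AI Inference SDK roughly like this (the endpoint/key environment variable names are just placeholders):

import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],                    # placeholder variable name
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_KEY"]),   # placeholder variable name
)
response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="What is 3 + 5?"),
    ]
)
print(response.choices[0].message.content)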
Looks like the issue I was having was that I am using AzureML through the new Azure AI Studio and trying to connect to a model there. I got it working using the AzureMLChatOnlineEndpoint from LangChain. See the example in the link below:
https://python.langchain.com/v0.1/docs/integrations/chat/azureml_chat_endpoint/
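A minimal sketch of that approach, based on the linked docs (the endpoint URL, key, endpoint type, and content-formatter class are placeholders and may differ depending on your langchain_community version and deployment type):

from langchain_community.chat_models.azureml_endpoint import (
    AzureMLChatOnlineEndpoint,
    AzureMLEndpointApiType,
    CustomOpenAIChatContentFormatter,
)
from langchain_core.messages import HumanMessage

chat = AzureMLChatOnlineEndpoint(
    endpoint_url="https://<your-endpoint>.<region>.inference.ml.azure.com/v1/chat/completions",  # placeholder
    endpoint_api_type=AzureMLEndpointApiType.serverless,  # use dedicated for managed online endpoints
    endpoint_api_key="<your-endpoint-key>",               # placeholder
    content_formatter=CustomOpenAIChatContentFormatter(),
)
print(chat.invoke([HumanMessage(content="What is 3 + 5?")]).content)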
I modified the code and it worked for me: I added the model_name="gpt-4" parameter at the end; see the code below:
import os

from langchain_openai import AzureChatOpenAI

llm_azure = AzureChatOpenAI(
    api_version="",
    azure_deployment="",
    azure_endpoint=os.getenv(""),
    api_key=os.getenv(""),
    temperature=0,
    max_tokens=None,
    timeout=None,
    max_retries=2,
    streaming=True,
    model_name="gpt-4",
)
This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
This issue was closed because it has been stalled for 5 days with no activity.
Please reopen - https://python.langchain.com/docs/integrations/chat/azure_chat_openai/ is still wrong and causes this issue.
To avoid this issue, you have to set the name of the model explicitly:

import os

from langchain_openai import AzureChatOpenAI

model = "gpt-4o"
max_retries = 2  # example value; use whatever retry count you want

AzureChatOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ.get("AZURE_OPENAI_KEY"),
    azure_deployment=model,
    api_version=os.environ["AZURE_OPENAI_API_VERSION"],
    max_retries=max_retries,
    model=model,
)
Without model=model, it doesn't work; with it, it does.