TypeError: OpenAI.__init__() missing 1 required positional argument: 'api_key'
```
 * Serving Flask app 'devika'
 * Debug mode: off
Token usage: 322

Exception in thread Thread-348 (<lambda>):
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
    self.run()
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/threading.py", line 1010, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/norbertpapp/Desktop/devika/devika.py", line 49, in <lambda>
    target=lambda: Agent(base_model=base_model).execute(prompt, project_name)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/username/Desktop/devika/src/agents/agent.py", line 264, in execute
    plan = self.planner.execute(prompt)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/username/Desktop/devika/src/agents/planner/planner.py", line 70, in execute
    response = self.llm.inference(prompt)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/username/Desktop/devika/src/llm/llm.py", line 60, in inference
    response = OpenAI().inference(self.model_id, prompt).strip()
               ^^^^^^^^
TypeError: OpenAI.__init__() missing 1 required positional argument: 'api_key'
```
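For context: the `OpenAI` in the last frame is devika's own wrapper class (the one imported in `src/llm/llm.py`), not the SDK client, and its constructor evidently requires the key as a positional argument while `llm.py` instantiates it with none. A minimal sketch of the mismatch (the wrapper's body here is hypothetical; only the signature is implied by the error):

```python
class OpenAI:
    def __init__(self, api_key):
        # Hypothetical body; only the required positional arg is implied by the error.
        self.api_key = api_key


OpenAI()  # TypeError: OpenAI.__init__() missing 1 required positional argument: 'api_key'
```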
I also have the same issue.
File "/Users/haseeb-mir/Documents/Code/Python/devika/src/llm/llm.py", line 60, in inference
response = OpenAI().inference(self.model_id, prompt).strip()
^^^^^^^^
TypeError: OpenAI.__init__() missing 1 required positional argument: 'api_key'
Same issue here :(
Same issue here. I've updated the TOML file but still no luck.
Hey there! I've fixed this issue by modifying the OpenAI client file; this is how mine looks:
```python
import openai

openai.api_key = "yourapikey"


class OpenAI:
    def inference(self, model_id: str, prompt: str) -> str:
        # The module-level client picks up openai.api_key set above.
        chat_completion = openai.chat.completions.create(
            messages=[
                {
                    "role": "user",
                    "content": prompt.strip(),
                }
            ],
            model=model_id,
        )
        return chat_completion.choices[0].message.content
```
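If you'd rather not hard-code the key, the same approach works with an environment variable (a minimal sketch; it assumes you export `OPENAI_API_KEY` in the shell that launches devika):

```python
import os

import openai

# Assumption: OPENAI_API_KEY is exported before devika starts.
openai.api_key = os.environ["OPENAI_API_KEY"]
```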
Doesn't work for me like that:

```
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
    self.run()
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/threading.py", line 1010, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/norbertpapp/Desktop/devika/devika.py", line 49, in <lambda>
```
Same issue for me
Bumped into the same error while trying to solve another issue. I assume they are yet to add that env api_key thing. A temporary workaround I found:
```python
from openai import OpenAI

from src.config import Config


class openAI:
    def __init__(self):
        # Read the API key from devika's config instead of hard-coding it.
        config = Config()
        api_key = config.get_openai_api_key()
        self.client = OpenAI(
            api_key=api_key,
        )

    def inference(self, model_id: str, prompt: str) -> str:
        chat_completion = self.client.chat.completions.create(
            messages=[
                {
                    "role": "user",
                    "content": prompt.strip(),
                }
            ],
            model=model_id,
        )
        return chat_completion.choices[0].message.content
```
In llm.py:

```python
from .openai_client import openAI

response = openAI().inference(self.model_id, prompt).strip()
```
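For anyone wondering what `Config.get_openai_api_key()` does: a minimal sketch of such a helper reading the key from `config.toml` (the `API_KEYS`/`OPENAI` names below are assumptions for illustration, not necessarily devika's actual layout):

```python
import toml


class Config:
    def __init__(self, path: str = "config.toml"):
        # Parse the TOML config once at construction time.
        self.config = toml.load(path)

    def get_openai_api_key(self) -> str:
        # Assumed section/key names; adjust to match your config.toml.
        return self.config["API_KEYS"]["OPENAI"]
```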
Is anyone working on a PR for this? Or I can also solve this .env file issue here.
If it's working, then please create a PR for it.
Fixed in #18. Thanks! 🙌