
TypeError: OpenAI.__init__() missing 1 required positional argument: 'api_key'

Open NorbiXD opened this issue 1 year ago • 9 comments

```
 * Serving Flask app 'devika'
 * Debug mode: off
Token usage: 322
Exception in thread Thread-348 ():
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
    self.run()
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/threading.py", line 1010, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/norbertpapp/Desktop/devika/devika.py", line 49, in <lambda>
    target=lambda: Agent(base_model=base_model).execute(prompt, project_name)
  File "/Users/username/Desktop/devika/src/agents/agent.py", line 264, in execute
    plan = self.planner.execute(prompt)
  File "/Users/username/Desktop/devika/src/agents/planner/planner.py", line 70, in execute
    response = self.llm.inference(prompt)
  File "/Users/username/Desktop/devika/src/llm/llm.py", line 60, in inference
    response = OpenAI().inference(self.model_id, prompt).strip()
TypeError: OpenAI.__init__() missing 1 required positional argument: 'api_key'
```
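(For context: this TypeError just means a class whose `__init__` declares `api_key` as a required positional parameter is being instantiated with no arguments, which is what `OpenAI()` does in `llm.py`. A minimal stdlib reproduction, with a hypothetical `Wrapper` class standing in for devika's client:)

```python
class Wrapper:
    # api_key is a required positional parameter, like in devika's OpenAI client
    def __init__(self, api_key: str):
        self.api_key = api_key

try:
    Wrapper()  # called with no arguments, as llm.py does with OpenAI()
except TypeError as e:
    print(e)  # reports the missing required positional argument 'api_key'
```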

NorbiXD avatar Mar 21 '24 23:03 NorbiXD

I also have the same issue.

  File "/Users/haseeb-mir/Documents/Code/Python/devika/src/llm/llm.py", line 60, in inference
    response = OpenAI().inference(self.model_id, prompt).strip()
               ^^^^^^^^
TypeError: OpenAI.__init__() missing 1 required positional argument: 'api_key'

haseeb-heaven avatar Mar 22 '24 07:03 haseeb-heaven

Same issue here :(

j3rry01v avatar Mar 22 '24 07:03 j3rry01v

Same issue here. I've updated the TOML file, but still no luck.

augmentedstartups avatar Mar 22 '24 08:03 augmentedstartups

Hey there! I've fixed this issue by modifying the OpenAI client file; this is how mine looks:

```python
import openai

openai.api_key = "yourapikey"

class OpenAI:
    def inference(self, model_id: str, prompt: str) -> str:
        chat_completion = self.client.chat.completions.create(
            messages=[
                {
                    "role": "user",
                    "content": prompt.strip(),
                }
            ],
            model=model_id,
        )

        return chat_completion.choices[0].message.content
```

caedencode avatar Mar 22 '24 09:03 caedencode

Doesn't work for me like that:

```
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
    self.run()
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/threading.py", line 1010, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/norbertpapp/Desktop/devika/devika.py", line 49, in <lambda>
    target=lambda: Agent(base_model=base_model).execute(prompt, project_name)
  File "/Users/norbertpapp/Desktop/devika/src/agents/agent.py", line 264, in execute
    plan = self.planner.execute(prompt)
  File "/Users/norbertpapp/Desktop/devika/src/agents/planner/planner.py", line 70, in execute
    response = self.llm.inference(prompt)
  File "/Users/norbertpapp/Desktop/devika/src/llm/llm.py", line 60, in inference
    response = OpenAI().inference(self.model_id, prompt).strip()
  File "/Users/norbertpapp/Desktop/devika/src/llm/openai_client.py", line 7, in inference
    chat_completion = self.client.chat.completions.create(
AttributeError: 'OpenAI' object has no attribute 'client'
```
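(That AttributeError is consistent: the modified class above only sets the module-level `openai.api_key` and has no `__init__`, so nothing ever assigns `self.client` before `inference` reads it. A minimal stdlib reproduction, with an illustrative `Client` class:)

```python
class Client:
    # no __init__, so self.client is never assigned
    def inference(self) -> str:
        return self.client.chat  # attribute lookup fails here

try:
    Client().inference()
except AttributeError as e:
    print(e)  # 'Client' object has no attribute 'client'
```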

NorbiXD avatar Mar 22 '24 10:03 NorbiXD

Same issue for me

mberman84 avatar Mar 22 '24 17:03 mberman84

Bumped into the same error while trying to solve another issue. I assume they are yet to add that env `api_key` handling. A temporary workaround I found:

```python
from openai import OpenAI
from src.config import Config

class openAI:

    def __init__(self):
        config = Config()
        api_key = config.get_openai_api_key()
        self.client = OpenAI(
            api_key=api_key,
        )

    def inference(self, model_id: str, prompt: str) -> str:
        chat_completion = self.client.chat.completions.create(
            messages=[
                {
                    "role": "user",
                    "content": prompt.strip(),
                }
            ],
            model=model_id,
        )

        return chat_completion.choices[0].message.content
```

In `llm.py`:

```python
from .openai_client import openAI

response = openAI().inference(self.model_id, prompt).strip()
```

RohitX0X avatar Mar 22 '24 18:03 RohitX0X

Is anyone working on a PR for this? Or I can also solve this .env file issue here.

haseeb-heaven avatar Mar 22 '24 19:03 haseeb-heaven

If it's working, then please create a PR for this.

haseeb-heaven avatar Mar 22 '24 19:03 haseeb-heaven

Fixed in #18. Thanks! 🙌

mufeedvh avatar Mar 30 '24 05:03 mufeedvh