
Azure support broken?

Open · cnkang opened this issue · 21 comments

⚠️ Search for existing issues first ⚠️

  • [X] I have searched the existing issues, and there is no existing issue for my problem

GPT-3 or GPT-4

  • [ ] I am using Auto-GPT with GPT-3 (GPT-3.5)

Steps to reproduce 🕹

azure.yaml:

    azure_api_type: azure
    azure_api_base: https://test.openai.azure.com/
    azure_api_version: 2023-03-15-preview
    azure_model_map:
        fast_llm_model_deployment_id: "gpt-35-turbo"
        smart_llm_model_deployment_id: "gpt-4"
        embedding_model_deployment_id: "emb-ada"

Current behavior 😯

When I run `python -m autogpt`, it just breaks:

    Welcome back! Would you like me to return to being Entrepreneur-GPT?
    Continue with the last settings?
    Name: Entrepreneur-GPT
    Role: an AI designed to autonomously develop and run businesses with the
    Goals: ['Increase net worth', 'Grow Twitter Account', 'Develop and manage multiple businesses autonomously']
    Continue (y/n): y
    Using memory of type: LocalCache
    Using Browser: chrome

    Traceback (most recent call last):
      File "", line 198, in _run_module_as_main
      File "", line 88, in _run_code
      File "/data/Auto-GPT/autogpt/__main__.py", line 50, in <module>
        main()
      File "/data/Auto-GPT/autogpt/__main__.py", line 46, in main
        agent.start_interaction_loop()
      File "/data/Auto-GPT/autogpt/agent/agent.py", line 75, in start_interaction_loop
        assistant_reply = chat_with_ai(
                          ^^^^^^^^^^^^^
      File "/data/Auto-GPT/autogpt/chat.py", line 159, in chat_with_ai
        assistant_reply = create_chat_completion(
                          ^^^^^^^^^^^^^^^^^^^^^^^
      File "/data/Auto-GPT/autogpt/llm_utils.py", line 84, in create_chat_completion
        deployment_id=CFG.get_azure_deployment_id_for_model(model),
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/data/Auto-GPT/autogpt/config/config.py", line 120, in get_azure_deployment_id_for_model
        return self.azure_model_to_deployment_id_map[
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    TypeError: list indices must be integers or slices, not str
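For what it's worth, the TypeError itself just means the parsed azure_model_map ended up as a Python list rather than a dict. A minimal stdlib-only sketch of that failure mode (the mapping values here are illustrative, not taken from the repo):

```python
# Minimal sketch of the failure mode behind the TypeError above:
# get_azure_deployment_id_for_model() indexes the parsed model map by model
# name, which only works if the YAML produced a dict. If it produced a list
# (e.g. the entries were written as "- key: value" items, or the wrong file
# was loaded), indexing by a string raises exactly this TypeError.
model_map_ok = {"fast_llm_model_deployment_id": "gpt-35-turbo"}
model_map_bad = [{"fast_llm_model_deployment_id": "gpt-35-turbo"}]  # a list, not a dict

print(model_map_ok["fast_llm_model_deployment_id"])  # -> gpt-35-turbo

try:
    model_map_bad["fast_llm_model_deployment_id"]
except TypeError as exc:
    print(exc)  # -> list indices must be integers or slices, not str
```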

Expected behavior 🤔

It should work.

Your prompt 📝

# Paste your prompt here

cnkang avatar Apr 17 '23 14:04 cnkang

    git rev-parse HEAD
    10cd0f3362ad6c86eefe7fc2a1f276ca49af98fe

cnkang avatar Apr 17 '23 14:04 cnkang

Seems like you are missing the value for "azure_model_map" in your .env.

k-boikov avatar Apr 17 '23 15:04 k-boikov

Seems like you are missing the value for "azure_model_map" in your .env.

But .env.template says these should be in azure.yaml. Quoting: "AZURE cleanup azure env as already moved to azure.yaml.template"

rasmusaslak avatar Apr 17 '23 15:04 rasmusaslak

Line 133 in /autogpt/config/config.py should read:

AZURE_CONFIG_FILE = os.path.join(os.path.dirname(__file__), "..", "..", "azure.yaml")

The azure.yaml file is two folders up (instead of one).
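To illustrate the path issue (directory names below mirror the traceback above and are assumptions about the layout, not taken from the repo):

```python
# Why the extra "..": config.py lives at <repo>/autogpt/config/config.py,
# so azure.yaml at the repo root is two directories up, not one.
import os

config_dir = "/data/Auto-GPT/autogpt/config"  # stands in for os.path.dirname(__file__)

one_up = os.path.normpath(os.path.join(config_dir, "..", "azure.yaml"))
two_up = os.path.normpath(os.path.join(config_dir, "..", "..", "azure.yaml"))

print(one_up)  # /data/Auto-GPT/autogpt/azure.yaml  (where the old code looked)
print(two_up)  # /data/Auto-GPT/azure.yaml          (repo root, where the file is)
```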

erikcvisser avatar Apr 17 '23 16:04 erikcvisser

Fixed in https://github.com/Significant-Gravitas/Auto-GPT/pull/2214

k-boikov avatar Apr 17 '23 17:04 k-boikov

Still facing this error after making the above changes. Looks like the above merge request is not complete.

amit-dingare avatar Apr 18 '23 02:04 amit-dingare

Right now, you still need to add the extra ".." on line 119 of config.py.

longxinzhang avatar Apr 18 '23 11:04 longxinzhang

Erroring in the same way here. To be clear, I tried both the current "stable" 0.2.1 release and the one from commit 10cd0f3362ad6c86eefe7fc2a1f276ca49af98fe as detailed above, then made the path fix described above. Same error, same line. Would be great to be able to use this with our Azure capacity.

Ben-Pattinson avatar Apr 18 '23 11:04 Ben-Pattinson

Update: it's because the base URL isn't getting through. If you edit api_requestor.py and add a print on line 227, you see: "Connecting to OPENAI: /openai/deployments/ChatGPT/chat/completions?api-version=api-version=2022-12-01", which is wrong. Where is the base URL? Trying to debug how the URL is supposed to get through now.

Ben-Pattinson avatar Apr 18 '23 13:04 Ben-Pattinson

Problems so far:

  • The base URL not being passed in
  • The URL being wrong (chat/completions should be completions), due to OBJECT_NAME being set to "chat.completions" in chat_completion.py
  • The content type not being set to "application/json" on the header in api_requestor (line 438)
  • The api-key not being set on the headers

The format of the conversation is completely different vs GPT-3; the JSON isn't compatible at all. As far as I can see, there is no chance this would work on GPT-3. So unless someone dramatically updates this, don't bother trying if you only have Azure and GPT-3.5. I was hoping to get this going while waiting for GPT-4 access to turn up, but that looks less and less likely.
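For reference, here is a sketch of what a correct Azure OpenAI chat request URL and headers should look like. The endpoint and deployment names are assumptions based on this thread (with the "2023-03-15-preview" API version mentioned below, under which the /chat/completions path is valid), not code from the repo:

```python
# Sketch of a correct Azure OpenAI chat-completions request, addressing the
# four problems listed above: base URL present, chat/completions path valid
# under the newer API version, Content-Type set, and api-key on the headers.
api_base = "https://test.openai.azure.com"  # no trailing slash
deployment_id = "gpt-35-turbo"              # your deployment name from the Azure portal
api_version = "2023-03-15-preview"

url = (
    f"{api_base}/openai/deployments/{deployment_id}"
    f"/chat/completions?api-version={api_version}"
)
headers = {
    "Content-Type": "application/json",   # must be set explicitly
    "api-key": "<AZURE_OPENAI_API_KEY>",  # Azure uses api-key, not "Authorization: Bearer"
}
print(url)
```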

Ben-Pattinson avatar Apr 18 '23 15:04 Ben-Pattinson

Changed the path to ../../ in config.py and the config issue is gone; now a new error: openai.error.InvalidRequestError: Resource not found. Using GPT-3.5, and the model map looks like this:

azure_model_map:
    fast_llm_model_deployment_id: "gpt-35-turbo"
    smart_llm_model_deployment_id: "gpt4-deployment-id-for-azure"
    embedding_model_deployment_id: "text-embedding-ada-002"

xboxeer avatar Apr 18 '23 15:04 xboxeer

@xboxeer are these the deployment IDs you have actually created? They look generic. You have to add model deployments in Azure and name them there, then put those names into the azure_model_map above.

Christoph-ModelMe avatar Apr 18 '23 16:04 Christoph-ModelMe

@xboxeer are these the deployment IDs you have given? They look more generic. You have to add model deployments in azure and name them there, then put these names to the azure_model_map above

Figured out the problem: the API version was incorrect; it seems it has to be 2023-03-15-preview. Now I face another error: openai.error.APIError: Invalid response object from API: '{ "statusCode": 401, "message": "Unauthorized. Access token is missing, invalid, audience is incorrect (https://cognitiveservices.azure.com), or have expired." }' (HTTP response code was 401)

azure.yaml:

azure_api_type: azure_ad
azure_api_base: "https://xxxx.openai.azure.com/"
azure_api_version: "2023-03-15-preview"
azure_model_map:
    fast_llm_model_deployment_id: "gpt-35-turbo"
    smart_llm_model_deployment_id: "gpt-35-turbo"  # I don't have GPT-4 access, so I point this at GPT-3.5; it won't be called anyway, I assume
    embedding_model_deployment_id: "text-embedding-ada-002"

I have set up my OpenAI key in .env.

The key should work, as I tested it in another project (semantic-kernel); I don't know why it is not working in the context of AutoGPT.

The issues I've met could fill a book of Azure OpenAI FAQs for AutoGPT, I guess :)

xboxeer avatar Apr 18 '23 16:04 xboxeer


@xboxeer Change your azure_api_type to "azure" (including quotes) instead of azure_ad Also remove the trailing / on the azure_api_base
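Putting the fixes from this thread together, a key-based azure.yaml would look roughly like this (the deployment names are placeholders for whatever you created in the Azure portal):

```yaml
azure_api_type: "azure"
azure_api_base: "https://xxxx.openai.azure.com"   # no trailing slash
azure_api_version: "2023-03-15-preview"
azure_model_map:
    fast_llm_model_deployment_id: "gpt-35-turbo"
    smart_llm_model_deployment_id: "gpt-35-turbo"  # reuse the 3.5 deployment if you lack GPT-4 access
    embedding_model_deployment_id: "text-embedding-ada-002"
```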

ssugar avatar Apr 18 '23 17:04 ssugar


@Ben-Pattinson I was running into the same issues as you. I then saw @xboxeer's comment above noting the need to change the API version to "2023-03-15-preview" and to set smart_llm_model_deployment_id to the same value as fast_llm_model_deployment_id in the model map (knowing it won't be used anyway), and it started working for me.

ssugar avatar Apr 18 '23 18:04 ssugar

Related: https://github.com/Significant-Gravitas/Auto-GPT/pull/2214
Related: https://github.com/Significant-Gravitas/Auto-GPT/pull/2437

These allow both /autogpt/azure.yaml and /azure.yaml.
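A rough sketch of the lookup behavior those PRs describe, checking both candidate locations and falling back to the repo root (the function name and paths here are hypothetical, not code from the actual PRs):

```python
# Hypothetical helper illustrating the "check both locations" behavior:
# try <repo>/autogpt/azure.yaml first, then the repo root. The default
# directory mirrors the paths in this issue's traceback.
import os

def find_azure_config(config_dir: str = "/data/Auto-GPT/autogpt/config") -> str:
    candidates = [
        os.path.normpath(os.path.join(config_dir, "..", "azure.yaml")),        # /autogpt/azure.yaml
        os.path.normpath(os.path.join(config_dir, "..", "..", "azure.yaml")),  # repo root
    ]
    for path in candidates:
        if os.path.exists(path):
            return path
    return candidates[-1]  # default to the repo-root location
```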

primaryobjects avatar Apr 18 '23 21:04 primaryobjects


@xboxeer Change your azure_api_type to "azure" (including quotes) instead of azure_ad Also remove the trailing / on the azure_api_base

Awesome! Changing azure_api_type to "azure" works for me.

xboxeer avatar Apr 20 '23 06:04 xboxeer

Will be interested to hear if you get it going, as the code doesn't seem to support the Azure implementation

Ben-Pattinson avatar Apr 21 '23 08:04 Ben-Pattinson

@Ben-Pattinson with the changes listed above it works for me on Azure

ssugar avatar Apr 21 '23 13:04 ssugar

azure.yaml

azure_credential_config:
    azure_object_id: *********
    azure_tenant_id: *********
    azure_client_id: *********
    azure_password: *********
    azure_scopes:
        - https://cognitiveservices.azure.com/.default

llm_utils.py

    def load_azure_config(self, config_file: str = AZURE_CONFIG_FILE) -> None:
        """
        Loads the configuration parameters for Azure hosting from the specified file
          path as a yaml file.

        Parameters:
            config_file(str): The path to the config yaml file. DEFAULT: "../azure.yaml"

        Returns:
            None
        """
        with open(config_file) as file:
            config_params = yaml.load(file, Loader=yaml.FullLoader)
        self.openai_api_type = config_params.get("azure_api_type") or "azure"
        self.openai_api_base = config_params.get("azure_api_base") or ""
        self.openai_api_version = (
            config_params.get("azure_api_version") or "2023-03-15-preview"
        )
        self.azure_model_to_deployment_id_map = config_params.get("azure_model_map", {})
        if self.openai_api_type == "azure_ad":
            azure_credential_config = config_params.get("azure_credential_config")
            self.openai_api_key = self.get_azure_token(azure_credential_config)

    def get_azure_token(self, azure_credential_config):
        from azure.identity import ClientSecretCredential
        sp_credential = ClientSecretCredential(
            client_id=azure_credential_config.get("azure_client_id"),
            client_secret=azure_credential_config.get("azure_password"),
            tenant_id=azure_credential_config.get("azure_tenant_id"))
        token = sp_credential.get_token(*azure_credential_config.get("azure_scopes"))
        return token.token

taohongxiu avatar Apr 24 '23 07:04 taohongxiu

I would love to be put into contact with your Microsoft rep at Azure. We currently can't run the Azure pathways in our automation for a wide variety of reasons but most of the team also doesn't have keys for testing. Any help you can get us there would be amazing

ntindle avatar Apr 26 '23 16:04 ntindle

The original issue should be resolved in #2351

Pwuts avatar Apr 26 '23 21:04 Pwuts