
ERROR: OpenAI: {"detail":{"message":"Could not parse your authentication token.

jocastrocUnal opened this issue 1 year ago · 0 comments

⚠️ INSTRUCTIONS:
  • Enter ONE "x" inside the brackets [x] to choose your answer
  • [x] Example
  • [ ] Example2

Have you already searched for your ISSUE among the resolved ones?

  • [x] Yes, new issue
  • [ ] Yes, but the solution did not work for me
  • [ ] No

What version of Python do you have?

  • [x] Latest, Python > 3.11
  • [ ] Python >= 3.8
  • [ ] If you have Python < 3.8, please install the latest version of Python

What version of operating system do you have?

  • [ ] Windows
  • [x] Linux/Ubuntu
  • [ ] Mac/OSX

What type of installation did you perform?

  • [x] pip3 install -r requirements.txt
  • [ ] python3 -m pip install -r requirements.txt
  • [ ] Anaconda
  • [ ] Container on VS

Desktop (please complete the following information):

  • Browser [e.g. chrome]: Chrome
  • Version [e.g. 112]:

Describe the bug: I get the following error:

---------------------------------------------------------------------------
HTTPError                                 Traceback (most recent call last)
File ~/miniconda3/envs/online_llm_scraper/lib/python3.11/site-packages/revChatGPT/V1.py:723, in Chatbot.__check_response(self, response)
    722 try:
--> 723     response.raise_for_status()
    724 except requests.exceptions.HTTPError as ex:

File ~/miniconda3/envs/online_llm_scraper/lib/python3.11/site-packages/requests/models.py:1021, in Response.raise_for_status(self)
   1020 if http_error_msg:
-> 1021     raise HTTPError(http_error_msg, response=self)

HTTPError: 401 Client Error: Unauthorized for url: https://bypass.churchless.tech/conversation

The above exception was the direct cause of the following exception:

Error                                     Traceback (most recent call last)
Cell In[4], line 1
----> 1 print(llm("Hello, how are you?"))

File ~/miniconda3/envs/online_llm_scraper/lib/python3.11/site-packages/langchain/llms/base.py:429, in BaseLLM.__call__(self, prompt, stop, callbacks, tags, metadata, **kwargs)
    422 if not isinstance(prompt, str):
    423     raise ValueError(
    424         "Argument `prompt` is expected to be a string. Instead found "
    425         f"{type(prompt)}. If you want to run the LLM on multiple prompts, use "
    426         "`generate` instead."
    427     )
    428 return (
--> 429     self.generate(
    430         [prompt],
    431         stop=stop,
    432         callbacks=callbacks,
    433         tags=tags,
    434         metadata=metadata,
    435         **kwargs,
    436     )
    437     .generations[0][0]
    438     .text
    439 )

File ~/miniconda3/envs/online_llm_scraper/lib/python3.11/site-packages/langchain/llms/base.py:281, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, **kwargs)
    275         raise ValueError(
    276             "Asked to cache, but no cache found at `langchain.cache`."
    277         )
    278     run_managers = callback_manager.on_llm_start(
    279         dumpd(self), prompts, invocation_params=params, options=options
    280     )
--> 281     output = self._generate_helper(
    282         prompts, stop, run_managers, bool(new_arg_supported), **kwargs
    283     )
    284     return output
    285 if len(missing_prompts) > 0:

File ~/miniconda3/envs/online_llm_scraper/lib/python3.11/site-packages/langchain/llms/base.py:225, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
    223     for run_manager in run_managers:
    224         run_manager.on_llm_error(e)
--> 225     raise e
    226 flattened_outputs = output.flatten()
    227 for manager, flattened_output in zip(run_managers, flattened_outputs):

File ~/miniconda3/envs/online_llm_scraper/lib/python3.11/site-packages/langchain/llms/base.py:212, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
    202 def _generate_helper(
    203     self,
    204     prompts: List[str],
   (...)
    208     **kwargs: Any,
    209 ) -> LLMResult:
    210     try:
    211         output = (
--> 212             self._generate(
    213                 prompts,
    214                 stop=stop,
    215                 # TODO: support multiple run managers
    216                 run_manager=run_managers[0] if run_managers else None,
    217                 **kwargs,
    218             )
    219             if new_arg_supported
    220             else self._generate(prompts, stop=stop)
    221         )
    222     except (KeyboardInterrupt, Exception) as e:
    223         for run_manager in run_managers:

File ~/miniconda3/envs/online_llm_scraper/lib/python3.11/site-packages/langchain/llms/base.py:606, in LLM._generate(self, prompts, stop, run_manager, **kwargs)
    601 new_arg_supported = inspect.signature(self._call).parameters.get("run_manager")
    602 for prompt in prompts:
    603     text = (
    604         self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
    605         if new_arg_supported
--> 606         else self._call(prompt, stop=stop, **kwargs)
    607     )
    608     generations.append([Generation(text=text)])
    609 return LLMResult(generations=generations)

File ~/Documents/Aprendizaje_Profundo/Repositorios/PUBLIC_REPOS/Free-Auto-GPT/FreeLLM/ChatGPTAPI.py:47, in ChatGPT._call(self, prompt, stop)
     45 else:
     46     sleep(2)
---> 47     response = self.chatbot(prompt)
     49     self.call += 1
     51 #add to history

File ~/miniconda3/envs/online_llm_scraper/lib/python3.11/site-packages/langchain/llms/base.py:429, in BaseLLM.__call__(self, prompt, stop, callbacks, tags, metadata, **kwargs)
    422 if not isinstance(prompt, str):
    423     raise ValueError(
    424         "Argument `prompt` is expected to be a string. Instead found "
    425         f"{type(prompt)}. If you want to run the LLM on multiple prompts, use "
    426         "`generate` instead."
    427     )
    428 return (
--> 429     self.generate(
    430         [prompt],
    431         stop=stop,
    432         callbacks=callbacks,
    433         tags=tags,
    434         metadata=metadata,
    435         **kwargs,
    436     )
    437     .generations[0][0]
    438     .text
    439 )

File ~/miniconda3/envs/online_llm_scraper/lib/python3.11/site-packages/langchain/llms/base.py:281, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, **kwargs)
    275         raise ValueError(
    276             "Asked to cache, but no cache found at `langchain.cache`."
    277         )
    278     run_managers = callback_manager.on_llm_start(
    279         dumpd(self), prompts, invocation_params=params, options=options
    280     )
--> 281     output = self._generate_helper(
    282         prompts, stop, run_managers, bool(new_arg_supported), **kwargs
    283     )
    284     return output
    285 if len(missing_prompts) > 0:

File ~/miniconda3/envs/online_llm_scraper/lib/python3.11/site-packages/langchain/llms/base.py:225, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
    223     for run_manager in run_managers:
    224         run_manager.on_llm_error(e)
--> 225     raise e
    226 flattened_outputs = output.flatten()
    227 for manager, flattened_output in zip(run_managers, flattened_outputs):

File ~/miniconda3/envs/online_llm_scraper/lib/python3.11/site-packages/langchain/llms/base.py:212, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
    202 def _generate_helper(
    203     self,
    204     prompts: List[str],
   (...)
    208     **kwargs: Any,
    209 ) -> LLMResult:
    210     try:
    211         output = (
--> 212             self._generate(
    213                 prompts,
    214                 stop=stop,
    215                 # TODO: support multiple run managers
    216                 run_manager=run_managers[0] if run_managers else None,
    217                 **kwargs,
    218             )
    219             if new_arg_supported
    220             else self._generate(prompts, stop=stop)
    221         )
    222     except (KeyboardInterrupt, Exception) as e:
    223         for run_manager in run_managers:

File ~/miniconda3/envs/online_llm_scraper/lib/python3.11/site-packages/langchain/llms/base.py:606, in LLM._generate(self, prompts, stop, run_manager, **kwargs)
    601 new_arg_supported = inspect.signature(self._call).parameters.get("run_manager")
    602 for prompt in prompts:
    603     text = (
    604         self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
    605         if new_arg_supported
--> 606         else self._call(prompt, stop=stop, **kwargs)
    607     )
    608     generations.append([Generation(text=text)])
    609 return LLMResult(generations=generations)

File ~/miniconda3/envs/online_llm_scraper/lib/python3.11/site-packages/gpt4_openai/__init__.py:35, in GPT4OpenAI._call(self, prompt, stop)
     28     self.chatbot = Chatbot({
     29         'access_token': self.token,
     30         'model': self.model,
     31         'plugin_ids': self.plugin_ids
     32         })
     34 response = ""
---> 35 for data in self.chatbot.ask(prompt=prompt,
     36                              auto_continue=self.auto_continue,
     37                              model=self.model):
     38     response = data["message"]
     40 # Add to history

File ~/miniconda3/envs/online_llm_scraper/lib/python3.11/site-packages/revChatGPT/V1.py:610, in Chatbot.ask(self, prompt, conversation_id, parent_id, model, plugin_ids, auto_continue, timeout, **kwargs)
    581 """Ask a question to the chatbot
    582 Args:
    583     prompt (str): The question
   (...)
    599     }
    600 """
    601 messages = [
    602     {
    603         "id": str(uuid.uuid4()),
   (...)
    607     },
    608 ]
--> 610 yield from self.post_messages(
    611     messages,
    612     conversation_id=conversation_id,
    613     parent_id=parent_id,
    614     plugin_ids=plugin_ids,
    615     model=model,
    616     auto_continue=auto_continue,
    617     timeout=timeout,
    618 )

File ~/miniconda3/envs/online_llm_scraper/lib/python3.11/site-packages/revChatGPT/V1.py:563, in Chatbot.post_messages(self, messages, conversation_id, parent_id, plugin_ids, model, auto_continue, timeout, **kwargs)
    560 if len(plugin_ids) > 0 and not conversation_id:
    561     data["plugin_ids"] = plugin_ids
--> 563 yield from self.__send_request(
    564     data,
    565     timeout=timeout,
    566     auto_continue=auto_continue,
    567 )

File ~/miniconda3/envs/online_llm_scraper/lib/python3.11/site-packages/revChatGPT/V1.py:398, in Chatbot.__send_request(self, data, auto_continue, timeout, **kwargs)
    391 self.parent_id_prev_queue.append(pid)
    392 response = self.session.post(
    393     url=f"{self.base_url}conversation",
    394     data=json.dumps(data),
    395     timeout=timeout,
    396     stream=True,
    397 )
--> 398 self.__check_response(response)
    400 finish_details = None
    401 for line in response.iter_lines():
    402     # remove b' and ' at the beginning and end and ignore case

File ~/miniconda3/envs/online_llm_scraper/lib/python3.11/site-packages/revChatGPT/V1.py:91, in logger.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
     84 log.debug(
     85     "Entering %s with args %s and kwargs %s",
     86     func.__name__,
     87     args,
     88     kwargs,
     89 )
     90 start = time.time()
---> 91 out = func(*args, **kwargs)
     92 end = time.time()
     93 if is_timed:

File ~/miniconda3/envs/online_llm_scraper/lib/python3.11/site-packages/revChatGPT/V1.py:730, in Chatbot.__check_response(self, response)
    724 except requests.exceptions.HTTPError as ex:
    725     error = t.Error(
    726         source="OpenAI",
    727         message=response.text,
    728         code=response.status_code,
    729     )
--> 730     raise error from ex

Error: OpenAI: {"detail":{"message":"Could not parse your authentication token. Please try signing in again.","type":"invalid_request_error","param":null,"code":"invalid_jwt"}} (code: 401)
Please check that the input is correct, or you can resolve this issue by filing an issue
Project URL: https://github.com/acheong08/ChatGPT
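
Note that the 401 above is returned by revChatGPT's default reverse proxy (https://bypass.churchless.tech/conversation), not by api.openai.com directly, so the same invalid_jwt error appears both when the token is bad and when the proxy cannot validate or forward it. To rule out the proxy, here is a minimal sketch of redirecting the client, assuming the installed revChatGPT version still reads the CHATGPT_BASE_URL environment variable at import time (verify against your installed V1.py; the proxy URL below is a placeholder, not a known working endpoint):

import os

# Must be set before revChatGPT is imported: V1.py reads the variable at
# module load (assumption based on revChatGPT V1; check your installed copy).
os.environ["CHATGPT_BASE_URL"] = "https://your-proxy.example/"  # placeholder URL

from FreeLLM.ChatGPTAPI import ChatGPT  # this import pulls in revChatGPT.V1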

Additional context: my code is just

from FreeLLM.ChatGPTAPI import ChatGPT

# read the token from a .txt file
with open("token_aut.txt", "r") as f:
    token_aut = f.read()

llm = ChatGPT(token=token_aut)  # start a new chat

print(llm("Hello, how are you?"))  # <-- ERROR
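
A common cause of this exact invalid_jwt response is the token string itself rather than the call chain: f.read() keeps the trailing newline, and an expired or truncated access token fails JWT parsing server-side with the same message. Below is a stdlib-only sanity check worth running first (a sketch assuming token_aut.txt, the file name from the snippet above, holds an OpenAI access token, i.e. a JWT with an exp claim):

import json
import time
from base64 import urlsafe_b64decode

with open("token_aut.txt") as f:
    token = f.read().strip()  # strip() removes the trailing newline f.read() keeps

parts = token.split(".")
assert len(parts) == 3, "an access token should be a JWT: header.payload.signature"

# Decode the payload (second segment); pad to a multiple of 4 for base64url.
payload = json.loads(urlsafe_b64decode(parts[1] + "=" * (-len(parts[1]) % 4)))
if payload.get("exp", 0) < time.time():
    print("Token is expired; fetch a fresh access token and retry.")
else:
    print("Token parses and is not expired; the 401 likely comes from the proxy.")

If the token passes this check, hand the stripped value to ChatGPT(token=token) so no stray whitespace ends up in the Authorization header.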

jocastrocUnal · Jul 23 '23 21:07