
Getting an error when trying to run the holmes command line

Open · bhanupratapm02 opened this issue 5 months ago · 1 comment

Hi,

I just started learning, and I was able to install holmesgpt using the Cutting Edge method (using pip3 instead of pip):

pip3 install "https://github.com/robusta-dev/holmesgpt/archive/refs/heads/master.zip"

holmes version reports: HEAD -> master-2d2a5b3

I am trying to run a command but I'm getting the error below. Any insights would be very helpful.

export OPENAI_API_BASE=https://generativelanguage.googleapis.com/v1beta/models
holmes ask --api-key="apikey" --model=gemini-1.5-flash-latest "explain what is AI"

and I tried the option below as well:

holmes ask --api-key="apikey" --model=openai/gemini-1.5-flash-latest "explain what is AI"

But I'm getting this:

verbosity is Verbosity.NORMAL                                                                                                                                                                                                                       main.py:77
User: explain what is AI
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ /Users/Library/Python/3.9/lib/python/site-packages/holmes/main.py:266 in ask                                                                                                                                                                       │
│                                                                                                                                                                                                                                                            │
│   263 │   │   prompt += f"\n\nAttached file '{path.absolute()}':\n{f.read()}"                                                                                                                                                                              │
│   264 │   │   console.print(f"[bold yellow]Loading file {path}[/bold yellow]")                                                                                                                                                                             │
│   265 │                                                                                                                                                                                                                                                    │
│ ❱ 266 │   response = ai.call(system_prompt, prompt, post_processing_prompt)                                                                                                                                                                                │
│   267 │                                                                                                                                                                                                                                                    │
│   268 │   if json_output_file:                                                                                                                                                                                                                             │
│   269 │   │   write_json_file(json_output_file, response.model_dump())                                                                                                                                                                                     │
│                                                                                                                                                                                                                                                            │
│ /Users/Library/Python/3.9/lib/python/site-packages/holmes/core/tool_calling_llm.py:120 in call                                                                                                                                                     │
│                                                                                                                                                                                                                                                            │
│   117 │   │   │   tool_choice = NOT_GIVEN if tools == NOT_GIVEN else "auto"                                                                                                                                                                                │
│   118 │   │   │                                                                                                                                                                                                                                            │
│   119 │   │   │   total_tokens = self.count_tokens_for_message(messages)                                                                                                                                                                                   │
│ ❱ 120 │   │   │   max_context_size = self.get_context_window_size()                                                                                                                                                                                        │
│   121 │   │   │   maximum_output_token = self.get_maximum_output_token()                                                                                                                                                                                   │
│   122 │   │   │                                                                                                                                                                                                                                            │
│   123 │   │   │   if (total_tokens + maximum_output_token) > max_context_size:                                                                                                                                                                             │
│                                                                                                                                                                                                                                                            │
│ /Users/Library/Python/3.9/lib/python/site-packages/holmes/core/tool_calling_llm.py:89 in get_context_window_size                                                                                                                                   │
│                                                                                                                                                                                                                                                            │
│    86 │   │   #if not litellm.supports_function_calling(model=model):                                                                                                                                                                                      │
│    87 │   │   #    raise Exception(f"model {model} does not support function calling. You must                                                                                                                                                             │
│    88 │   def get_context_window_size(self) -> int:                                                                                                                                                                                                        │
│ ❱  89 │   │   return litellm.model_cost[self.model]['max_input_tokens']                                                                                                                                                                                    │
│    90 │                                                                                                                                                                                                                                                    │
│    91 │   def count_tokens_for_message(self, messages: list[dict]) -> int:                                                                                                                                                                                 │
│    92 │   │   return litellm.token_counter(model=self.model,                                                                                                                                                                                               │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
KeyError: 'openai/gemini-1.5-flash-latest'
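For context on the KeyError: the traceback shows `get_context_window_size` doing a direct `litellm.model_cost[self.model]` dictionary lookup, so any model name litellm doesn't have a pricing entry for raises KeyError (litellm generally registers Google AI Studio models under a `gemini/` prefix, not `openai/`). Below is a minimal sketch, not HolmesGPT's or litellm's actual code, using a stand-in dict with illustrative token limits to show the failure mode and a hypothetical guarded lookup that falls back to a default instead of crashing:

```python
# Stand-in for litellm.model_cost: a plain dict keyed by model names
# litellm knows about. The values here are illustrative, not real limits.
model_cost = {
    "gemini/gemini-1.5-flash-latest": {"max_input_tokens": 1_000_000},
}

def get_context_window_size(model: str, default: int = 128_000) -> int:
    """Return max_input_tokens for a model, falling back to a default
    for unregistered names instead of raising KeyError."""
    entry = model_cost.get(model)  # .get() avoids the KeyError in the traceback
    if entry is None:
        return default
    return entry["max_input_tokens"]

# The name from the error message is not a registered key, so the
# guarded lookup returns the fallback instead of crashing:
print(get_context_window_size("openai/gemini-1.5-flash-latest"))  # 128000
print(get_context_window_size("gemini/gemini-1.5-flash-latest"))  # 1000000
```

This suggests the immediate workaround is to pass a model name litellm actually recognizes (e.g. a `gemini/`-prefixed one) rather than the `openai/` prefix.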

bhanupratapm02 · Oct 01 '24 20:10