
Vertex AI - ModuleNotFoundError: No module named 'vertexai.preview.generative_models'

[Open] BadLiveware opened this issue 1 year ago · 2 comments

I'm trying to use this with Vertex AI, running codiumai/pr-agent:0.21-gitlab_webhook.

Config excerpt:

[config]
model = "vertex_ai/codechat-bison"
model_turbo = "vertex_ai/codechat-bison"
fallback_models = "vertex_ai/codechat-bison"

Error output:

Failed to generate prediction with vertex_ai/codechat-bison:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/litellm/llms/vertex_ai.py", line 287, in completion
    from vertexai.preview.generative_models import (
ModuleNotFoundError: No module named 'vertexai.preview.generative_models'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 1628, in completion
    model_response = vertex_ai.completion(
  File "/usr/local/lib/python3.10/site-packages/litellm/llms/vertex_ai.py", line 676, in completion
    raise VertexAIError(status_code=500, message=str(e))
litellm.llms.vertex_ai.VertexAIError: No module named 'vertexai.preview.generative_models'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 277, in acompletion
    init_response = await loop.run_in_executor(None, func_with_context)
  File "/usr/local/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 2727, in wrapper
    raise e
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 2628, in wrapper
    result = original_function(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 2055, in completion
    raise exception_type(
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 8180, in exception_type
    raise e
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 7397, in exception_type
    raise RateLimitError(
litellm.exceptions.RateLimitError: VertexAIException - No module named 'vertexai.preview.generative_models'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/pr_agent/algo/ai_handlers/litellm_ai_handler.py", line 145, in chat_completion
    response = await acompletion(**kwargs)
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 3181, in wrapper_async
    raise e
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 3017, in wrapper_async
    result = await original_function(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 296, in acompletion
    raise exception_type(
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 8180, in exception_type
    raise e
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 7397, in exception_type
    raise RateLimitError(
litellm.exceptions.RateLimitError: VertexAIException - VertexAIException - No module named 'vertexai.preview.generative_models'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/pr_agent/algo/pr_processing.py", line 272, in retry_with_fallback_models
    return await f(model)
  File "/app/pr_agent/tools/pr_description.py", line 166, in _prepare_prediction
    self.prediction = await self._get_prediction(model)
  File "/app/pr_agent/tools/pr_description.py", line 190, in _get_prediction
    response, finish_reason = await self.ai_handler.chat_completion(
  File "/usr/local/lib/python3.10/site-packages/tenacity/_asyncio.py", line 88, in async_wrapped
    return await fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/tenacity/_asyncio.py", line 47, in __call__
    do = self.iter(retry_state=retry_state)
  File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 314, in iter
    return fut.result()
  File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/usr/local/lib/python3.10/site-packages/tenacity/_asyncio.py", line 50, in __call__
    result = await fn(*args, **kwargs)
  File "/app/pr_agent/algo/ai_handlers/litellm_ai_handler.py", line 146, in chat_completion
    except (openai.APIError, openai.Timeout) as e:
TypeError: catching classes that do not inherit from BaseException is not allowed
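Incidentally, the final frame exposes a second, unrelated bug: the handler's `except (openai.APIError, openai.Timeout)` clause fails because `openai.Timeout` is apparently not an exception class in the installed openai version (in openai>=1.0 it is a re-exported httpx configuration class — an inference from the error message, not verified here). Python rejects non-exception classes in an `except` clause at match time, as this minimal sketch shows:

```python
# Minimal reproduction of the TypeError at the bottom of the trace:
# Python raises TypeError when an `except` clause names a class that
# does not inherit from BaseException.

class NotAnException:
    """Stand-in for openai.Timeout, which is a config class, not an exception."""

try:
    try:
        raise ValueError("original error")
    except NotAnException:  # evaluated during exception matching -> TypeError
        pass
except TypeError as e:
    caught = str(e)

print(caught)  # explains that NotAnException does not inherit from BaseException
```

This is why the real VertexAI import error surfaces as a confusing chain: the original `ModuleNotFoundError` is masked first by litellm's exception mapping and then by this broken `except` clause.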


BadLiveware avatar May 17 '24 08:05 BadLiveware

This seems to be a known litellm issue (https://github.com/BerriAI/litellm/issues/1463), fixed by pip install "google-cloud-aiplatform>=1.38", so a dependency upgrade should fix it.
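Until the image's dependency is bumped, one possible workaround is to extend the published image and upgrade the SDK yourself (a sketch — the image tag is taken from this thread, and this Dockerfile is not part of the project):

```dockerfile
# Hypothetical wrapper image: upgrade the Vertex AI SDK so that
# vertexai.preview.generative_models is importable by litellm.
FROM codiumai/pr-agent:0.21-gitlab_webhook
RUN pip install "google-cloud-aiplatform>=1.38"
```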

BadLiveware avatar May 17 '24 09:05 BadLiveware

Feel free to open a PR to update this dependency.

mrT23 avatar May 17 '24 14:05 mrT23