
ValueError: Content has no parts.

Open isidor131908 opened this issue 1 year ago • 1 comments

Hello, I successfully ran the intro_multimodal_rag example, but when I tried my own PDF I encountered the following error:


ValueError                                Traceback (most recent call last)
Cell In[37], line 11
      5 image_description_prompt = """Explain what is going on in the image.
      6 If it's a table, extract all elements of the table.
      7 If it's a graph, explain the findings in the graph.
      8 Do not include any numbers that are not mentioned in the image:"""
     10 # Extract text and image metadata from the PDF document
---> 11 text_metadata_df, image_metadata_df = get_document_metadata(
     12     PROJECT_ID,
     13     model,
     14     pdf_path,
     15     image_save_dir="images_telys",
     16     image_description_prompt=image_description_prompt,
     17     embedding_size=1408,
     18     text_emb_text_limit=1000,  # Set text embedding input text limit to 1000 char
     19 )
     21 print("--- Completed processing. ---")

File ~/utils/intro_multimodal_rag_utils.py:572, in get_document_metadata(project_id, generative_multimodal_model, pdf_path, image_save_dir, image_description_prompt, embedding_size, text_emb_text_limit)
    566 image_for_gemini, image_name = get_image_for_gemini(
    567     doc, image, image_no, image_save_dir, file_name, page_num
    568 )
    570 print(f"Extracting image from page: {page_num + 1}, saved as: {image_name}")
--> 572 response = get_gemini_response(
    573     generative_multimodal_model,
    574     model_input=[image_description_prompt, image_for_gemini],
    575     stream=True,
    576 )
    578 image_embedding_with_description = (
    579     get_image_embedding_from_multimodal_embedding_model(
    580         project_id=project_id,
   (...)
    584     )
    585 )
    587 image_embedding = get_image_embedding_from_multimodal_embedding_model(
    588     project_id=project_id,
    589     image_uri=image_name,
    590     embedding_size=embedding_size,
    591 )

File ~/utils/intro_multimodal_rag_utils.py:413, in get_gemini_response(generative_multimodal_model, model_input, stream)
    411     response_list = []
    412     for chunk in response:
--> 413         response_list.append(chunk.text)
    414     response = "".join(response_list)
    415 else:

File ~/.local/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py:1315, in GenerationResponse.text(self)
   1313 if len(self.candidates) > 1:
   1314     raise ValueError("Multiple candidates are not supported")
-> 1315 return self.candidates[0].text

File ~/.local/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py:1368, in Candidate.text(self)
   1366 @property
   1367 def text(self) -> str:
-> 1368     return self.content.text

File ~/.local/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py:1425, in Content.text(self)
   1423     raise ValueError("Multiple content parts are not supported.")
   1424 if not self.parts:
-> 1425     raise ValueError("Content has no parts.")
   1426 return self.parts[0].text

ValueError: Content has no parts.

Any suggestions?

isidor131908 avatar Jan 12 '24 19:01 isidor131908

Did the changes in PR #343 fix this issue?

holtskinner avatar Jan 17 '24 14:01 holtskinner

getting the same error "ValueError: Content has no parts." on version google-cloud-aiplatform 1.40.0 today

UmerQam avatar Feb 01 '24 17:02 UmerQam

getting the same error "ValueError: Content has no parts." on version google-cloud-aiplatform 1.40.0 today

Same here. I am not sure why it randomly generates this error.

mk-hasan avatar Feb 05 '24 14:02 mk-hasan

All of a sudden I am getting "ValueError: Content has no parts" on the same document, which worked perfectly fine before.

BhushanGarware avatar Feb 08 '24 14:02 BhushanGarware

Getting the same error on version google-cloud-aiplatform==1.41.0

d116626 avatar Feb 10 '24 16:02 d116626

Getting the same error:

File "C:\Users\chris\AppData\Local\Programs\Python\Python310\lib\site-packages\vertexai\generative_models\_generative_models.py", line 1299, in text
    return self.candidates[0].text
File "C:\Users\chris\AppData\Local\Programs\Python\Python310\lib\site-packages\vertexai\generative_models\_generative_models.py", line 1352, in text
    return self.content.text
File "C:\Users\chris\AppData\Local\Programs\Python\Python310\lib\site-packages\vertexai\generative_models\_generative_models.py", line 1409, in text
    raise ValueError("Content has no parts.")
ValueError: Content has no parts.

chrissie303 avatar Feb 10 '24 22:02 chrissie303

Same error here when trying to extract text from the model's response (answer.text).

But, weirdly, the problem can be solved by deleting .text and retyping it 😂

TTTTao725 avatar Feb 12 '24 16:02 TTTTao725

my script:

vertexai.init(project='****', location='us-central1')

gemini_pro_model = GenerativeModel("gemini-pro")
answer = gemini_pro_model.generate_content("Now I am going to give you a molecule in SMILES format, as well as its caption (description), I want you to rewrite the caption into five different versions. SMILES: CN(C(=O)N)N=O, Caption: The molecule is a member of the class of N-nitrosoureas that is urea in which one of the nitrogens is substituted by methyl and nitroso groups. It has a role as a carcinogenic agent, a mutagen, a teratogenic agent and an alkylating agent. Format your output as a python list, for example, you should output something like [\"caption1\", \"caption2\", \"caption3\", \"caption4\", \"caption5\",] Do not use ```python``` in your answer.")
print(answer)

And these are the 2 executions of the script above:

candidates {
  content {
    role: "model"
  }
  finish_reason: SAFETY
  safety_ratings {
    category: HARM_CATEGORY_HATE_SPEECH
    probability: NEGLIGIBLE
  }
  safety_ratings {
    category: HARM_CATEGORY_DANGEROUS_CONTENT
    probability: MEDIUM
    blocked: true
  }
  safety_ratings {
    category: HARM_CATEGORY_HARASSMENT
    probability: NEGLIGIBLE
  }
  safety_ratings {
    category: HARM_CATEGORY_SEXUALLY_EXPLICIT
    probability: NEGLIGIBLE
  }
}
usage_metadata {
  prompt_token_count: 155
  total_token_count: 155
}
candidates {
  content {
    role: "model"
    parts {
      text: "[\"This molecule belongs to the N-nitrosoureas class, characterized by a urea structure with one nitrogen substituted by methyl and nitroso groups.\", \"A member of the N-nitrosoureas, the molecule is essentially urea with one of its nitrogens replaced by methyl and nitroso groups.\", \"This N-nitrosourea derivative features a urea core where one nitrogen atom has been replaced with a methyl group and a nitroso group.\", \"The molecule in question is a member of the N-nitrosoureas class, which are urea derivatives with one nitrogen substituted by methyl and nitroso groups.\", \"Belonging to the N-nitrosourea class, this molecule\'s structure resembles urea with one nitrogen being replaced by methyl and nitroso groups.\"]"
    }
  }
  finish_reason: STOP
  safety_ratings {
    category: HARM_CATEGORY_HATE_SPEECH
    probability: NEGLIGIBLE
  }
  safety_ratings {
    category: HARM_CATEGORY_DANGEROUS_CONTENT
    probability: LOW
  }
  safety_ratings {
    category: HARM_CATEGORY_HARASSMENT
    probability: NEGLIGIBLE
  }
  safety_ratings {
    category: HARM_CATEGORY_SEXUALLY_EXPLICIT
    probability: NEGLIGIBLE
  }
}
usage_metadata {
  prompt_token_count: 155
  candidates_token_count: 157
  total_token_count: 312
}

TTTTao725 avatar Feb 13 '24 08:02 TTTTao725

The first execution terminated because of HARM_CATEGORY_DANGEROUS_CONTENT, which is why nothing was returned: it got blocked!

Therefore, you can set your safety configuration to BLOCK_NONE:

safety_config = {
    generative_models.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: generative_models.HarmBlockThreshold.BLOCK_NONE,
    generative_models.HarmCategory.HARM_CATEGORY_HARASSMENT: generative_models.HarmBlockThreshold.BLOCK_NONE,
    generative_models.HarmCategory.HARM_CATEGORY_HATE_SPEECH: generative_models.HarmBlockThreshold.BLOCK_NONE,
    generative_models.HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: generative_models.HarmBlockThreshold.BLOCK_NONE,
}

answer = gemini_pro_model.generate_content("Now I am going to give you a molecule in SMILES format, as well as its caption (description), I want you to rewrite the caption into five different versions. SMILES: CN(C(=O)N)N=O, Caption: The molecule is a member of the class of N-nitrosoureas that is urea in which one of the nitrogens is substituted by methyl and nitroso groups. It has a role as a carcinogenic agent, a mutagen, a teratogenic agent and an alkylating agent. Format your output as a python list, for example, you should output something like [\"caption1\", \"caption2\", \"caption3\", \"caption4\", \"caption5\",] Do not use ```python``` in your answer.", safety_settings=safety_config)

You will not have this problem anymore :)

https://cloud.google.com/vertex-ai/docs/generative-ai/multimodal/configure-safety-attributes

TTTTao725 avatar Feb 13 '24 09:02 TTTTao725

I tried catching the "ValueError: Content has no parts." error by adding this:

try:
    prediction = model.generate_content(
        prompt,
        generation_config=GENERATION_CONFIG,
    )
    logging.info(f"Prediction_text: {prediction.text}")
    return prediction
except ValueError as e:
    logging.error(f"Something went wrong with the API call: {e}")
    # If the response doesn't contain text, check if the prompt was blocked.
    logging.error(prediction.prompt_feedback)
    # Also check the finish reason to see if the response was blocked.
    logging.error(prediction.candidates[0].finish_reason)
    # If the finish reason was SAFETY, the safety ratings have more details.
    logging.error(prediction.candidates[0].safety_ratings)
    raise Exception(f"Something went wrong with the API call: {e}")

but this gave me: AttributeError: 'GenerationResponse' object has no attribute 'prompt_feedback'

harsh-singh-oxb avatar Feb 15 '24 09:02 harsh-singh-oxb
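The AttributeError above suggests prompt_feedback was not available on the Vertex AI GenerationResponse in the SDK versions discussed here. A minimal sketch of a safer extraction helper that uses only the attributes appearing in the traceback (candidates, content, parts, finish_reason); the helper name safe_text is made up for illustration and is not part of the SDK:

```python
# Hedged sketch: extract text from a GenerationResponse-like object without
# raising "Content has no parts." Attribute names mirror the traceback above;
# safe_text is a hypothetical helper, not an SDK function.
def safe_text(response):
    """Return (text, reason); text is None when the candidate is empty or missing."""
    if not response.candidates:
        return None, "no candidates returned"
    candidate = response.candidates[0]
    if not candidate.content.parts:
        # A blocked or truncated response has an empty parts list; report
        # the finish reason instead of raising.
        return None, f"empty content, finish_reason={candidate.finish_reason}"
    return candidate.content.parts[0].text, "ok"
```

With this, a blocked response yields a reason string such as "empty content, finish_reason=SAFETY" instead of an exception.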

The issue here is that Gemini is blocking some of the text. It can be worked around by setting the safety thresholds to BLOCK_NONE, as pointed out by @TTTTao725. Working on a PR to fix this issue.

lavinigam-gcp avatar Feb 15 '24 10:02 lavinigam-gcp

If anyone wants to unblock themselves before we raise the PR, you can change the following function [get_gemini_response] in the utils here:

gemini/use-cases/retrieval-augmented-generation/utils/intro_multimodal_rag_utils.py

This will set the safety settings to the lowest level (BLOCK_NONE) and hence will not block anything. More info here: configure-safety-attributes

Code:

def get_gemini_response(
    generative_multimodal_model,
    model_input: List[str],
    stream: bool = True,
    generation_config: Optional[dict] = {"max_output_tokens": 2048, "temperature": 0.2},
    safety_settings: Optional[dict] = {
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
    },
) -> str:
    """
    This function generates text in response to a list of model inputs.

    Args:
        model_input: A list of strings representing the inputs to the model.
        stream: Whether to generate the response in a streaming fashion (returning chunks of text at a time) or all at once. Defaults to True.

    Returns:
        The generated text as a string.
    """


    if stream:
        response = generative_multimodal_model.generate_content(
            model_input,
            generation_config=generation_config,
            stream=stream,
            safety_settings=safety_settings,
        )
        response_list = []

        for chunk in response:
            try:
                response_list.append(chunk.text)
            except Exception as e:
                print(
                    "Exception occurred while calling Gemini. Something is wrong. Lower the safety thresholds [safety_settings: BLOCK_NONE] if not already done. -----",
                    e,
                )
                response_list.append("Exception occurred")
                continue
        response = "".join(response_list)
    else:
        response = generative_multimodal_model.generate_content(
            model_input,
            generation_config=generation_config,
            safety_settings=safety_settings,
        )
        response = response.candidates[0].content.parts[0].text

    return response

Let me know if this resolves the issue: "ValueError: Content has no parts."

lavinigam-gcp avatar Feb 15 '24 10:02 lavinigam-gcp

@lavinigam-gcp, there could be something else at play here because even with safety_settings set to BLOCK_NONE, I get FinishReason.OTHER as a response with:

response.candidates[0].finish_reason

harsh-singh-oxb avatar Feb 15 '24 13:02 harsh-singh-oxb

Thanks for testing it out, Harsh. Would it be possible for you to share the document you are working on, and the complete trace of the error?

lavinigam-gcp avatar Feb 15 '24 14:02 lavinigam-gcp

Hey @lavinigam-gcp, what's the best way of sharing the .log file? I will have to remove the sensitive details from it. Are there any other attributes from response.candidates[0] that could be helpful?

Did some more digging and found that the response is kind of empty:

2024-02-15 15:53:16 - INFO - Prediction: candidates {
  content {
    role: "model"
  }
  finish_reason: OTHER
}
usage_metadata {
  prompt_token_count: 356
  total_token_count: 356
}

harsh-singh-oxb avatar Feb 15 '24 15:02 harsh-singh-oxb

I added safety_settings but the problem is still there.

safety_settings: Optional[dict] = {
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
},

hasattr(response, "text")
ValueError response:
candidates {
  content {
    role: "model"
  }
  finish_reason: OTHER
}

takeruh avatar Feb 15 '24 21:02 takeruh

Yeah, same error as mine. I added a retry block in my code to call the API 5 times, but it just fails.

Since the API call was not returning anything, I added a delay between my API calls. With the delay I am able to make more API calls, but it still returns the same error. This makes me wonder if it is related to some quota/rate limit, even though the Cloud console is not reporting any overusage.

harsh-singh-oxb avatar Feb 16 '24 18:02 harsh-singh-oxb
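For reference, a retry block like the one described might look like the sketch below (assumptions: the failing call raises ValueError on an empty response, as in the traceback above; generate_with_retry and its parameters are made-up names). As noted, retries alone do not fix deterministic blocks, but they can paper over transient failures:

```python
import time

# Hedged sketch of a retry-with-backoff wrapper; generate_with_retry is a
# hypothetical helper, not an SDK function.
def generate_with_retry(call, attempts=5, base_delay=2.0):
    """Invoke call() up to `attempts` times, sleeping with exponential
    backoff after each ValueError (e.g. "Content has no parts.")."""
    for attempt in range(attempts):
        try:
            return call()
        except ValueError:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * (2 ** attempt))
```

Usage might look like generate_with_retry(lambda: model.generate_content(prompt).text), assuming the model object from the earlier snippets.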

https://console.cloud.google.com/vertex-ai/generative/language/create/text?authuser=0 The Vertex AI console also blocks these. The blocks are not random and seem to depend on the keywords included in the output.

convert the company below into the official English version.

Lihit Lab Inc

Response

Response blocked for unknown reason. Try rewriting the prompt.

takeruh avatar Feb 20 '24 19:02 takeruh

@lavinigam-gcp, there could be something else at play here because even with safety_settings set to BLOCK_NONE, I get FinishReason.OTHER as a response with:

response.candidates[0].finish_reason

I am experiencing the same issue. Why is this issue closed?

zafercavdar avatar Feb 24 '24 00:02 zafercavdar

@zafercavdar We have updated the mRAG notebook to resolve the issue with the content block. Are you still facing the issue with the updated code? Also, is there a way you could share a description of the document you are running (or the doc itself, if possible) so we can reproduce the error?

lavinigam-gcp avatar Feb 24 '24 05:02 lavinigam-gcp

I have re-opened the issue for now.

Could you please share a reproducible example so that I can try to help debug what might be happening exactly? This will help us be able to try to resolve the issue more quickly.

Thanks!

cc @polong-lin @holtskinner

lavinigam-gcp avatar Feb 24 '24 05:02 lavinigam-gcp

Hi @lavinigam-gcp,

I don't directly use the RAG notebook. I am running benchmarking on Gemini Pro using the Python Client library.

Python version: 3.8 google-cloud-aiplatform version: 1.38.1

Here is my prompt (ie contents parameter value):

Pro-inflammatory cytokines play a crucial role in the etiology of atopic dermatitis. We demonstrated that Herba Epimedii has anti-inflammatory potential in an atopic dermatitis mouse model; however, limited research has been conducted on the anti-inflammatory effects and mechanism of icariin, the major active ingredient in Herba Epimedii, in human keratinocytes. In this study, we evaluated the anti-inflammatory potential and mechanisms of icariin in the tumor necrosis factor-alpha (TNF-alpha)/interferon-gamma (IFN-gamma)-induced inflammatory response in human keratinocytes (HaCaT cells) by observing these cells in the presence or absence of icariin. We measured IL-6, IL-8, IL-1 beta, MCP-1 and GRO-alpha production by ELISA; IL-6, IL-8, IL-1 beta, intercellular adhesion molecule-1 (ICAM-1) and tachykinin receptor 1 (TACR1) mRNA expression by real-time PCR; and P38-MAPK, P-ERK and P-JNK signaling expression by western blot in TNF-alpha/IFN-gamma-stimulated HaCaT cells before and after icariin treatment. The expression of INF-alpha-R1 and IFN-gamma-R1 during the stimulation of the cell models was also evaluated before and after icariin treatment. We investigated the effect of icariin on these pro-inflammatory cytokines and detected whether this effect occurred via the mitogen-activated protein kinase (MAPK) signal transduction pathways. We further specifically inhibited the activity of two kinases with 20 mu M SB203580 (a p38 kinase inhibitor) and 50 mu M PD98059 (an ERK1/2 kinase inhibitor) to determine the roles of the two signal pathways involved in the inflammatory response. We found that icariin inhibited TNF-alpha/IFN-gamma-induced IL-6, IL-8, IL-1 beta, and MCP-1 production in a dose-dependent manner; meanwhile, the icariin treatment inhibited the gene expression of IL-8, IL-1 beta, ICAM-1 and TACR1 in HaCaT cells in a time- and dose-dependent manner. 
Icariin treatment resulted in a reduced expression of p-P38 and p-ERK signal activation induced by TNF-alpha/IFN-gamma; however, only SB203580, the p38 alpha/beta inhibitor, inhibited the secretion of inflammatory cytokines induced by TNF-alpha/IFN-gamma in cultured HaCaT cells. The differential expression of TNF-alpha-R1 and IFN-gamma-R1 was also observed after the stimulation of TNF-alpha/IFN-gamma, which was significantly normalized after the icariin treatment. Collectively, we illustrated the anti-inflammatory property of icariin in human keratinocytes. These effects were mediated, at least partially, via the inhibition of substance P and the p38-MAPK signaling pathway, as well as by the regulation of the TNF-alpha-R1 and IFN-gamma-R1 signals. (C) 2015 Elsevier B.V. All rights reserved.

Generate a title for the given scientific paper above.

Other parameters:

generation_config = {
   "model": "gemini-1.0-pro-001",
   "max_output_tokens": 50,
   "top_p": 0.99,
   "temperature": 0.2,
   "candidate_count": 1,
}

safety_settings = {
    HarmCategory.HARM_CATEGORY_UNSPECIFIED: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
}

and stream=False

This API call returns the finish reason "FinishReason.OTHER", and response.text raises ValueError: Content has no parts.

zafercavdar avatar Feb 26 '24 13:02 zafercavdar

Hi @zafercavdar, I am able to reproduce your issue here.

Thank you for bringing it to our attention. Our internal teams are looking into it; allow us some time to respond.

lavinigam-gcp avatar Feb 27 '24 08:02 lavinigam-gcp

https://console.cloud.google.com/vertex-ai/generative/language/create/text?authuser=0 The vertex-ai console also blocks. The blocks are not random and seem to depend on the keywords included in the output. convert the company below into the official English version.

Lihit Lab Inc

Response

Response blocked for unknown reason. Try rewriting the prompt.

Hi @takeruh, I am also able to reproduce your issue here.

This is currently a bug and, as stated in the previous comment, our teams are looking into the issue.

lavinigam-gcp avatar Feb 27 '24 08:02 lavinigam-gcp

Any updates or a timeline on that? The behaviour is non-deterministic. Even with temperature set to 0.0, the request/response is blocked randomly when using the exact same prompt. That's very annoying when trying to build reliable applications on top of Gemini's API. As soon as we lower the temperature below 0.5, the probability of getting rejected by Gemini's API increases. Safety settings are all set to BLOCK_NONE and the latest version of google-cloud-aiplatform (1.44.0) is in use.

nmoell avatar Mar 22 '24 09:03 nmoell

@nmoell Thank you for raising the issue and being patient with it. I have again escalated the issue internally, and will report back as soon as I get an update. In the meantime, would it be possible for you to share any reproducible prompts where you have been observing the issue? Or if you can print the response object and share the “finish_reason” value so that we know what is actually causing the issue.

lavinigam-gcp avatar Mar 22 '24 10:03 lavinigam-gcp

@nmoell Thank you for raising the issue and being patient with it. I have again escalated the issue internally, and will report back as soon as I get an update. In the meantime, would it be possible for you to share any reproducible prompts where you have been observing the issue? Or if you can print the response object and share the “finish_reason” value so that we know what is actually causing the issue.

I can't disclose the prompt since it is using internal information (just some facts about internal shipping policies) which isn't in any sense offensive or unsafe. What I can share is the response object:

 {
        "candidates": [
            {
                "content": {
                    "role": "model",
                    "parts": []
                },
                "finish_reason": 4,
                "safety_ratings": [
                    {
                        "category": 1,
                        "probability": 1,
                        "probability_score": 0.16438228,
                        "severity": 1,
                        "severity_score": 0.0715912,
                        "blocked": false
                    },
                    {
                        "category": 2,
                        "probability": 1,
                        "probability_score": 0.33458945,
                        "severity": 2,
                        "severity_score": 0.29158565,
                        "blocked": false
                    },
                    {
                        "category": 3,
                        "probability": 1,
                        "probability_score": 0.15507847,
                        "severity": 1,
                        "severity_score": 0.1261379,
                        "blocked": false
                    },
                    {
                        "category": 4,
                        "probability": 1,
                        "probability_score": 0.072374016,
                        "severity": 1,
                        "severity_score": 0.06548521,
                        "blocked": false
                    }
                ],
                "citation_metadata": {
                    "citations": [
                        {
                            "start_index": 151,
                            "end_index": 400,
                            "uri": "https://www.REMOVED-OUR-WEBSITE.com/service/",
                            "title": "",
                            "license_": ""
                        },
                        {
                            "start_index": 177,
                            "end_index": 400,
                            "uri": "",
                            "title": "",
                            "license_": ""
                        }
                    ]
                },
                "index": 0
            }
        ],
        "usage_metadata": {
            "prompt_token_count": 849,
            "total_token_count": 849,
            "candidates_token_count": 0
        }
    }

nmoell avatar Mar 22 '24 13:03 nmoell
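The numeric finish_reason: 4 in the raw response above corresponds to RECITATION in the Vertex AI Candidate.FinishReason enum (the mapping below is assumed from the public proto; verify against your SDK version), which would be consistent with the populated citation_metadata in that response:

```python
# Assumed mapping of Candidate.FinishReason enum values to names; verify
# against your google-cloud-aiplatform version before relying on it.
FINISH_REASONS = {
    0: "FINISH_REASON_UNSPECIFIED",
    1: "STOP",         # natural end of generation
    2: "MAX_TOKENS",   # hit max_output_tokens
    3: "SAFETY",       # blocked by safety filters
    4: "RECITATION",   # blocked for reciting source material (note the citations)
    5: "OTHER",        # blocked for an unspecified reason
}

print(FINISH_REASONS[4])  # → RECITATION
```

If that mapping holds, BLOCK_NONE safety settings would not help here, since the recitation checker is separate from the safety filters.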

Hi,

I encounter the same problem. When I do, the finish_reason is OTHER. What's the meaning of that? It seems to happen randomly with certain prompts.

Here's a small example with a prompt:

from vertexai.preview import generative_models
import vertexai
from vertexai.preview.generative_models import GenerativeModel

model_params = {
    "temperature": 0.0,
}

safety_config = {
    generative_models.HarmCategory.HARM_CATEGORY_HATE_SPEECH: generative_models.HarmBlockThreshold.BLOCK_NONE,
    generative_models.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: generative_models.HarmBlockThreshold.BLOCK_NONE,
    generative_models.HarmCategory.HARM_CATEGORY_HARASSMENT: generative_models.HarmBlockThreshold.BLOCK_NONE,
    generative_models.HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: generative_models.HarmBlockThreshold.BLOCK_NONE,
    generative_models.HarmCategory.HARM_CATEGORY_UNSPECIFIED: generative_models.HarmBlockThreshold.BLOCK_NONE,
    }

vertexai.init() # environment variables are set

model = GenerativeModel("gemini-pro")

PROMPT = "Translate the following to Swiss German: 'Hi, my name is Sara.'"


response = model.generate_content(PROMPT, generation_config=model_params, safety_settings=safety_config)

print(response)
print(response.text)

Output:

candidates {
  finish_reason: OTHER
}
usage_metadata {
  prompt_token_count: 15
  total_token_count: 15
}


...   
AttributeError: Content has no parts.

rogerwelo avatar Mar 22 '24 15:03 rogerwelo

I created PR https://github.com/googleapis/python-aiplatform/pull/3518 for the Vertex AI Python SDK to improve the error messages for this behavior. It will throw a ResponseValidationError with specific details as to why the content is empty/blocked.

You can try diagnosing the issue with this code, before the SDK gets updated:

response = model.generate_content(PROMPT, generation_config=model_params, safety_settings=safety_config)

message = ""
if not response.candidates or response._raw_response.prompt_feedback:
    message += (
        f"The model response was blocked due to {response._raw_response.prompt_feedback.block_reason}.\n"
        f"Block reason message: {response._raw_response.prompt_feedback.block_reason_message}.\n"
    )
else:
    candidate = response.candidates[0]
    message = (
        "The model response did not complete successfully.\n"
        f"Finish reason: {candidate.finish_reason.name}.\n"
        f"Finish message: {candidate.finish_message}.\n"
        f"Safety ratings: {candidate.safety_ratings}.\n"
    )

print(message)

Note - this doesn't explain the issue when the API returns OTHER and no block reason. I've been able to reproduce this behavior, and I've not been able to find a definitive reason for it.

holtskinner avatar Mar 28 '24 19:03 holtskinner

For this specific issue, the product development team confirmed the response is blocked by the language filter. I'm going to work on getting this type of error to output a more specific message.

This is the list of supported languages https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models#language-support

I encounter the same problem. When I do, the finish_reason is OTHER. What's the meaning of that? It seems to happen randomly with certain prompts.

holtskinner avatar Mar 28 '24 20:03 holtskinner