feat(vertex-ai): convert "example syntax" Markdown samples to Python
Description
Convert several Markdown samples to Python. The goal is to preserve the "spirit" of the original samples (i.e., a very short snippet that only shows, at a very high level, how to call an API) while making them actually copyable, testable, and working. That is why I moved the `init` calls outside of the samples and added all the imports each sample needs to run on its own. A rough sketch of the converted shape follows the list below.
- The "Inference -> Example syntax" section
- The Text and Multimodal Embedding examples
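For illustration, here is roughly the shape a converted sample takes. This is a hedged sketch, not the exact code added in this PR: the function name and model ID are assumptions, and `vertexai.init(...)` is expected to be called by the caller or test harness, per the description above.

```python
# [START generativeaionvertexai_text_embedding_example_syntax]
from vertexai.language_models import TextEmbeddingModel


def embed_text() -> list[float]:
    """Embeds a short string with a pretrained text-embedding model."""
    model = TextEmbeddingModel.from_pretrained("text-embedding-004")
    embeddings = model.get_embeddings(["What is life?"])
    return embeddings[0].values
# [END generativeaionvertexai_text_embedding_example_syntax]
```

The region tag here matches one of the tags listed by snippet-bot further down.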
Checklist
- [x] I have followed Sample Guidelines from AUTHORING_GUIDE.MD
- [x] Tests pass: `nox -s py-3.9 && nox -s py-3.12`
- [x] Lint pass: `nox -s lint`
Here is the summary of changes.
You are about to add 4 region tags.
- `generative_ai/embeddings/multimodal_example_syntax.py:27`, tag `generativeaionvertexai_multimodal_embedding_example_syntax`
- `generative_ai/embeddings/text_example_syntax.py:27`, tag `generativeaionvertexai_text_embedding_example_syntax`
- `generative_ai/inference/example_syntax.py:27`, tag `generativeaionvertexai_example_syntax`
- `generative_ai/inference/example_syntax.py:43`, tag `generativeaionvertexai_example_syntax_streaming`
This comment is generated by snippet-bot.
If you find problems with this result, please file an issue at:
https://github.com/googleapis/repo-automation-bots/issues.
To update this comment, add the `snippet-bot:force-run` label or use the checkbox below:
- [ ] Refresh this comment
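For context on the tags listed above: each region tag corresponds to a `# [START <tag>]` / `# [END <tag>]` pair in the sample file, which is what the documentation tooling extracts. As an illustration only (the model name and prompt are assumptions, not the PR's actual code), the streaming tag would wrap something along these lines:

```python
# [START generativeaionvertexai_example_syntax_streaming]
from vertexai.generative_models import GenerativeModel


def generate_streaming_text() -> str:
    """Streams a generated response and concatenates the chunks."""
    model = GenerativeModel("gemini-1.5-flash-002")
    responses = model.generate_content("Why is the sky blue?", stream=True)
    return "".join(response.text for response in responses)
# [END generativeaionvertexai_example_syntax_streaming]
```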
CI log: https://btx.cloud.google.com/invocations/27f9454f-a97e-4e74-a5ce-0612115802bd
test_embeddings_examples.py::test_multimodal_embedding_image_video_text PASSED [ 11%]
test_embeddings_examples.py::test_multimodal_embedding_video PASSED [ 22%]
test_embeddings_examples.py::test_multimodal_embedding_image PASSED [ 33%]
test_embeddings_examples.py::test_generate_embeddings_with_lower_dimension PASSED [ 44%]
test_embeddings_examples.py::test_create_embeddings PASSED [ 55%]
test_embeddings_examples.py::test_create_text_embeddings PASSED [ 66%]
test_embeddings_examples.py::test_text_embed_text PASSED [ 77%]
test_embeddings_examples.py::test_code_embed_text PASSED [ 88%]
test_embeddings_examples.py::test_tune_embedding_model FAILED [100%]
=================================== FAILURES ===================================
__________________________ test_tune_embedding_model ___________________________
Traceback (most recent call last):
...
# Val: pytest/runner stack trace omitted for brevity.
...
result = testfunction(**testargs)
File "/workspace/generative_ai/embeddings/test_embeddings_examples.py", line 133, in test_tune_embedding_model
tuning_job = model_tuning_example.tune_embedding_model(
File "/workspace/generative_ai/embeddings/model_tuning_example.py", line 42, in tune_embedding_model
tuning_job = base_model.tune_model(
File "/workspace/generative_ai/embeddings/.nox/py-3-9/lib/python3.9/site-packages/vertexai/language_models/_language_models.py", line 2344, in tune_model
return super().tune_model(
File "/workspace/generative_ai/embeddings/.nox/py-3-9/lib/python3.9/site-packages/vertexai/language_models/_language_models.py", line 367, in tune_model
return self._tune_model(
File "/workspace/generative_ai/embeddings/.nox/py-3-9/lib/python3.9/site-packages/vertexai/language_models/_language_models.py", line 422, in _tune_model
if _is_text_embedding_tuning_pipeline(model_info.tuning_pipeline_uri):
File "/workspace/generative_ai/embeddings/.nox/py-3-9/lib/python3.9/site-packages/vertexai/language_models/_language_models.py", line 4010, in _is_text_embedding_tuning_pipeline
return pipeline_uri.startswith(
AttributeError: 'NoneType' object has no attribute 'startswith'
---- generated xml file: /workspace/generative_ai/embeddings/sponge_log.xml ----
=========================== short test summary info ============================
FAILED test_embeddings_examples.py::test_tune_embedding_model - AttributeErro...
========================= 1 failed, 8 passed in 21.42s =========================
The test runs perfectly fine locally. I'm trying to understand why it fails on CI.
The CI failure in `test_tune_embedding_model` seems to stem from an `AttributeError: 'NoneType' object has no attribute 'startswith'` inside the `vertexai` library. This suggests that a variable expected to be a string (likely `pipeline_uri`) is instead `None`. Since the test passes locally, the issue might be related to environment differences between your local setup and the CI environment. Possible causes include:

- Missing environment variables: Ensure that all necessary environment variables (like `GOOGLE_CLOUD_PROJECT`) are correctly set in the CI environment. Double-check their values for accuracy.
- Library version mismatch: Verify that the `vertexai` library version used in CI is the same as your local version (a quick check is sketched after this list). Inconsistencies can lead to unexpected behavior.
- Network connectivity: The CI environment might have network restrictions that prevent the code from accessing necessary resources (like the Vertex AI model). Check the CI logs for any network-related errors.
- Resource availability: Confirm that the required Vertex AI resources (models, pipelines) are properly provisioned and accessible in the project specified by `GOOGLE_CLOUD_PROJECT` in the CI environment. Check for quota issues or resource limits.
- Permissions: Ensure that the service account used by the CI system has the necessary permissions to access the Vertex AI resources.
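As a quick way to compare the two environments, a small sanity check along these lines could be added to the CI job or a `conftest.py` fixture. This is only a sketch; the package and variable names are the obvious candidates here, not anything taken from the PR:

```python
import os
from importlib import metadata

# Print the SDK version and the project the tests will run against so the
# CI logs can be compared with a local run.
print("google-cloud-aiplatform:", metadata.version("google-cloud-aiplatform"))
print("GOOGLE_CLOUD_PROJECT:", os.environ.get("GOOGLE_CLOUD_PROJECT"))
```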
Debugging steps:
- Print `pipeline_uri`: Add a `print(pipeline_uri)` statement just before the `startswith` call in `_is_text_embedding_tuning_pipeline` to see its value in the CI logs. This will pinpoint whether it is actually `None` and provide a clue about the root cause (a non-invasive variant is sketched after this list).
- Check CI logs: Carefully examine the complete CI logs for any error messages or warnings that might indicate the problem. Look for clues about environment variables, network issues, or resource limitations.
- Simplify the test: Create a minimal, isolated test case that only calls `tune_embedding_model` with the simplest possible inputs. If this minimal test also fails, it will help isolate the problem to the `vertexai` library itself or the CI environment.
- Isolate the `vertexai` library: Try running the test locally in a clean virtual environment with only the `vertexai` library installed to rule out conflicts with other dependencies.
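One way to get that `pipeline_uri` value into the CI logs without editing the installed package is to wrap the private helper named in the traceback. This is a hedged sketch: the traceback above indicates `_is_text_embedding_tuning_pipeline` is a module-level function in `vertexai.language_models._language_models`, but private names can move between releases, so treat the import path as an assumption.

```python
from unittest import mock

from vertexai.language_models import _language_models

_original = _language_models._is_text_embedding_tuning_pipeline


def _logging_wrapper(pipeline_uri):
    # Surface the value that ends up being None on CI before delegating.
    print("pipeline_uri:", repr(pipeline_uri))
    return _original(pipeline_uri)


with mock.patch.object(
    _language_models, "_is_text_embedding_tuning_pipeline", _logging_wrapper
):
    # Run the failing call here, e.g. the sample's tune_embedding_model(...).
    ...
```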
I created a new issue to track the embedding test failure on CI. Note that the failure is not related to this PR; it is reproducible on main with no code changes.
I see `AttributeError: 'NoneType' object has no attribute 'startswith'` for the `test_tune_embedding_model` test
@msampathkumar, please see the comment above. It's reported as https://github.com/GoogleCloudPlatform/python-docs-samples/issues/12995.
Putting a hold on all Generative AI samples development.
Thank you for your work.