weaviate-python-client
[Proposal] Enhancements to generative queries
Proposal: Could we create new functions that wrap generative capabilities, with:
- Mandatory `prompt` and `model` parameters
- An optional `search_response` parameter for passing Weaviate query results (as sketched below)
This would allow a user to:
- Prompt an LLM (without additional retrieved data)
- Perform RAG from a Weaviate search response
- Perform RAG from multiple Weaviate search responses
- Pre-process search results to formulate a custom LLM prompt
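A minimal signature sketch of such a function might look like this (the function and parameter names are illustrative and match the syntax proposal below; nothing here exists in the client yet):
```python
from typing import Any, List, Optional, Union

# Illustrative signature only — generate_text is the proposed function, not an existing API.
def generate_text(
    model: Any,          # mandatory: a generative model configuration
    prompt: str,         # mandatory: the LLM prompt
    search_response: Optional[Union[Any, List[Any]]] = None,  # optional: one or more Weaviate responses
) -> Any:
    ...
```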
Syntax proposal:
```python
import weaviate
from weaviate.classes.config import Generative
from weaviate.classes.generate import generate_text

client = weaviate.connect_to_local()

gen_model = Generative.aws(
    model="cohere.command-text-v14",
    region="us-east-1",
)

# 💡 >>> SCENARIO 1 <<< Standalone LLM prompt
response = generate_text(
    model=gen_model,
    prompt="What is the capital of France?",
)

# 💡 >>> SCENARIO 2 <<< RAG with a Weaviate response
wiki = client.collections.get("Wiki")
search_response = wiki.query.hybrid("African or European swallow")
response = generate_text(
    model=gen_model,
    prompt="Could a swallow carry a coconut?",
    search_response=search_response,
)

# 💡 >>> SCENARIO 3 <<< RAG with TWO Weaviate responses!
wiki = client.collections.get("Wiki")
scripts = client.collections.get("Scripts")
wiki_response = wiki.query.hybrid("African or European swallow")
scripts_response = scripts.query.hybrid("African or European swallow")
response = generate_text(
    model=gen_model,
    prompt="Could a swallow carry a coconut?",
    search_response=[wiki_response, scripts_response],
)

# 💡 >>> SCENARIO 4 <<< RAG with transformed text
wiki = client.collections.get("Wiki")
search_response = wiki.query.hybrid("African or European swallow")
# Manually flatten the returned objects into a plain-text context block
context = "\n\n".join(
    f'{o.properties["title"]}: {o.properties["chunk"]}' for o in search_response.objects
)
response = generate_text(
    model=gen_model,
    prompt="Could a swallow carry a coconut? Answer based on the following information:\n\n" + context,
)
```
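For context, here is a rough sketch of how such a wrapper might assemble the final prompt internally, generalizing the manual pre-processing from Scenario 4. This is purely illustrative: the `_build_prompt` helper is hypothetical, and it assumes responses expose `.objects` whose `.properties` are plain dicts, as in the current v4 client.
```python
# Hypothetical helper — a sketch of possible generate_text internals, not an existing API.
def _build_prompt(prompt, search_response=None):
    # Normalize the optional argument: accept a single response or a list of responses.
    if search_response is None:
        responses = []
    elif isinstance(search_response, list):
        responses = search_response
    else:
        responses = [search_response]

    # Flatten every returned object's properties into a plain-text context block,
    # mirroring the manual join from Scenario 4.
    context_chunks = [
        "\n".join(f"{key}: {value}" for key, value in obj.properties.items())
        for resp in responses
        for obj in resp.objects
    ]

    if not context_chunks:  # Scenario 1: standalone prompt, no retrieved data
        return prompt
    return prompt + "\n\nAnswer based on the following information:\n\n" + "\n\n".join(context_chunks)
```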