vanna
Columns passed in RAG seem ignored in generated SQL
Hi,
I'm trying to generate a SQL query that makes use of the valid column names passed to the RAG via INFORMATION_SCHEMA.COLUMNS. However, the generated query does not mention any real column, and therefore fails on the DB.
import vanna
from vanna.remote import VannaDefault
from vanna.openai.openai_chat import OpenAI_Chat
from vanna.chromadb.chromadb_vector import ChromaDB_VectorStore

# Globals
PROJECT_ID = "my_gcp_project"
DATASET_ID = "my_dataset"
TABLE_NAME = "my_table"
OPENAI_API_KEY = "sk-xxxxx"

# Class instantiation
class MyVanna(ChromaDB_VectorStore, OpenAI_Chat):
    def __init__(self, config=None):
        ChromaDB_VectorStore.__init__(self, config=config)
        OpenAI_Chat.__init__(self, config=config)

vn = MyVanna(config={'api_key': OPENAI_API_KEY, 'model': 'gpt-4'})
vn.connect_to_bigquery(project_id=PROJECT_ID)
# The below query is working on BigQuery console.
METADATA_QUERY = f"""
SELECT * FROM `{PROJECT_ID}.{DATASET_ID}.INFORMATION_SCHEMA.COLUMNS`
WHERE table_name = '{TABLE_NAME}'
"""
df_information_schema = vn.run_sql(METADATA_QUERY)
# This will break up the information schema into bite-sized chunks that can be referenced by the LLM
plan = vn.get_training_plan_generic(df_information_schema)
# If you like the plan, then uncomment this and run it to train
vn.train(plan=plan)
vn.ask(question="Where does John Deere live ?")
# Obtained query
"""
SELECT residence
FROM Authors
WHERE name = 'John Deere';
"""
Neither the columns residence and name nor the table Authors exist in my dataset.
I also get the following error, by the way:
Couldn't run sql: exceptions must derive from BaseException
Thanks for your help !
Try describing your table using CREATE TABLE syntax as a DDL?
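Something along these lines, for example (the column names here are just placeholders, not your real schema):
# Placeholder schema -- replace with your actual column names and types
vn.train(ddl=f"""
CREATE TABLE `{PROJECT_ID}.{DATASET_ID}.{TABLE_NAME}` (
    author_name STRING,
    residence_city STRING
)
""")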
Trying the DDL as you suggest: same issues as in the first trial above.
df_information_schema = vn.run_sql(METADATA_QUERY)

# Build a DDL string from the column names and types
res = []
for index, row in df_information_schema.iterrows():
    col_name = row["column_name"]
    col_type = row["data_type"]
    res.append(f"{col_name} {col_type}")
res_string = "(" + ", ".join(res) + ")"

DDL = f"""CREATE TABLE {TABLE_NAME} {res_string}"""
vn.train(ddl=DDL)
@pierreoberholzer did the training data make it in? Do you get results when you run vn.get_training_data()?
Good idea. It seems it at least received meaningful info (only showing the first 6 columns here):
vn.get_training_data()
@pierreoberholzer are those the actual names of your columns? If so, there's no way the LLM would be able to associate the column name with what it means semantically.
With ambiguous or nonexistent column names, the best method is going to be training on example SQL statements, because the database schema doesn't have enough information. If a human wouldn't be able to figure out how to translate "Where does John Deere live ?" into SQL based on the information in that image, then the LLM wouldn't be able to either.
Try this -- try training on 3-4 sample SQL queries that you know work. So do:
vn.train(sql=...)
Then try asking questions that are related to those queries.
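For example (the column names below are made up, just to show the shape of the call):
# Made-up column names -- substitute 3-4 real queries that you know run
vn.train(sql=f"""
SELECT residence_city
FROM `{PROJECT_ID}.{DATASET_ID}.{TABLE_NAME}`
WHERE author_name = 'John Deere'
""")
vn.train(sql=f"""
SELECT author_name, residence_city
FROM `{PROJECT_ID}.{DATASET_ID}.{TABLE_NAME}`
WHERE residence_city = 'Zurich'
""")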
If that works, since you're using BigQuery, what you can do is extract your query history and loop over it, calling vn.train(sql=...) for each query.
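Rough sketch of that loop (the region qualifier and the LIMIT are assumptions -- adjust them for your project):
# Pull recent successful queries from the BigQuery job history and train on them
HISTORY_QUERY = f"""
SELECT query
FROM `{PROJECT_ID}`.`region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE job_type = 'QUERY'
  AND state = 'DONE'
  AND error_result IS NULL
  AND query IS NOT NULL
LIMIT 50
"""
df_history = vn.run_sql(HISTORY_QUERY)
for sql in df_history["query"]:
    vn.train(sql=sql)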
Those are dummy column names; the real ones have some semantic meaning. Still, I would expect the generated SQL to be constrained to existing column names in any case (even with poor semantic distance). In addition, once that is solved, passing examples is surely a good thing to do. Thanks.
Still investigating, but it seems that I am reaching some limit..
- Is the entire metadata context passed to OpenAI ?
- Am I supposed to purge Chroma DB ?
Indeed, the query itself is very short: "Where does John Deere live ?"
Thanks !
Error code: 400 - {'error': {'message': "This model's maximum context length is 8192 tokens. However, your messages resulted in 10898 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
@pierreoberholzer it picks the 10 most relevant pieces of each type of training data (ddl, documentation, question/sql pairs) and adds them to the context. It does a simple heuristic to calculate tokens to attempt not to overfill the context window.
However, it seems like your individual pieces of training data may be quite large? What's the string length of the content?
Which OpenAI model are you using btw? You likely won't reach context token limits if you use gpt-4-turbo-preview
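i.e. keep the same setup as above and only change the model name in the config:
# Same class as before, only the model name changes
vn = MyVanna(config={'api_key': OPENAI_API_KEY, 'model': 'gpt-4-turbo-preview'})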
Thanks - using gpt-4-turbo-preview helps.
How can I reset the Chroma state and start again with new context? It seems I'm currently accumulating context.
@pierreoberholzer you can either delete the sqlite database that Chroma creates and start again, or you can run vn.get_training_data() and then loop through all the ids, calling vn.remove_training_data(id=...) for each.
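Something like this should clear it out (assuming the DataFrame returned by get_training_data() has an id column, as in recent versions):
# Remove every stored training item by id
training_data = vn.get_training_data()
for training_id in training_data["id"]:
    vn.remove_training_data(id=training_id)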
Cool. This helps define the experiment better. My observation so far is that among metadata, DDL and SQL, only the latter leads to meaningful generated queries. While this seems promising, and I need to investigate more, it's now actually harder to assess how much the approach might "overfit" to the given query, compared to a more "neutral" case where only metadata is passed in training (see my first questions). Needs more testing. Thanks for your help!
@pierreoberholzer if you're doing a formal test, would you be able to kindly share the results?
Sure, if I get to that point.