Structured Search Pipeline
Querying requirements in RAG fall not only on unstructured data that has been embedded and added to a vector database, but also on structured data sources where semantic search doesn't really make sense.
Goal: Provide a pipeline interface that connects to a structured data source and generates structured queries in real time from incoming natural-language searches.
Implementation:
- Pseudo `Pipeline` without an embed or sink connector, just a data source (a rough sketch follows this list).
- Data source connector is configured and an initial pull from the database is done to examine the fields available and their types.
- `Search` generates a query using an LLM based on the fields available in the database.
- The `Pipeline` can be used as part of a `PipelineCollection` and supported by `smart_route` so the model can decide when to use it.
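A rough sketch of what that interface could look like (the class and helper names here are hypothetical, not actual NeumAI APIs):

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class StructuredPipeline:
    # Data source connector only; no embed or sink connector.
    introspect_schema: Callable[[], dict]     # e.g. wraps a Postgres connection
    run_query: Callable[[str], list]          # executes SQL against the source
    generate_sql: Callable[[dict, str], str]  # LLM-backed text-to-SQL

    schema: dict = field(default_factory=dict)

    def connect(self) -> None:
        # Initial pull: examine the fields available and their types,
        # e.g. {"engineers": {"id": "VARCHAR", "name": "TEXT", "age": "INT"}}
        self.schema = self.introspect_schema()

    def search(self, question: str) -> list:
        # Generate a structured query from the schema plus the incoming
        # natural-language question, then run it against the source.
        sql = self.generate_sql(self.schema, question)
        return self.run_query(sql)
```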
Alternative implementation:
- To reduce the latency of having to make 2-3 back-to-back LLM calls to generate a query and validate it, the query generation could be done pre-emptively and cached in a vector database.
- Using an LLM, we would try to predict the top queries one might expect against the database, plus their permutations. (This might limit the complexity of the queries, but could cover 80% of use cases.) A sketch of this caching step follows this list.
- At `search`, we would run a similarity search of the incoming query against the descriptions of the "cached" queries, then run the top query against the database.
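A hedged sketch of the caching step (`llm_propose_queries` and the `embed` callable are hypothetical stand-ins for a real LLM call and embedding model):

```python
def llm_propose_queries(schema: dict) -> list[dict]:
    # In practice: prompt an LLM with the schema and ask for the most
    # common queries plus a natural-language description of each.
    return [
        {"query": "SELECT name, age FROM engineers GROUP BY age",
         "description": "Group engineers by the age column"},
    ]

def build_query_cache(schema: dict, embed) -> list[dict]:
    cache = []
    for pair in llm_propose_queries(schema):
        # Embed the description, not the SQL, so that incoming
        # natural-language searches can be matched against it later.
        cache.append({**pair, "embedding": embed(pair["description"])})
    return cache
```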
@ddematheu Haven't yet fully understood this, but the alternatives sound similar to the internals of this project: aidb. Can you please give an example to elaborate on this? As far as I understood, we have some structured data sources, and we want to map a natural-language query to an appropriate SQL query (or any structured query) using an LLM.
The thought process was: given a database, generate a set of common queries for it (based on the schema) using an LLM. From there, take the queries and their descriptions and embed them (embed the description). Then at runtime, when someone searches, we take the search, compare it against the embeddings, and use the stored query to query the database (or pass it, together with the search, to an LLM to fine-tune the query before running it).
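To make the runtime half concrete, a minimal lookup sketch (pure numpy so the similarity math is explicit; `embed` and `run_sql` are hypothetical callables, and a real vector database would handle the similarity search for you):

```python
import numpy as np

def cached_search(question: str, cache: list[dict], embed, run_sql):
    q = embed(question)
    # Cosine similarity between the search and each cached description.
    scores = [
        float(np.dot(q, c["embedding"])
              / (np.linalg.norm(q) * np.linalg.norm(c["embedding"])))
        for c in cache
    ]
    best = cache[int(np.argmax(scores))]
    # Run the stored query as-is, or pass it plus the search to an LLM
    # to fine-tune the query before executing it.
    return run_sql(best["query"])
```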
It is a bit more similar to this https://github.com/vanna-ai/vanna.
@ddematheu Okay, so I understood it like this and tried it with the t5-small-text-2-sql model:
```python
input_prompt = '''
tables:\n CREATE TABLE engineers (id: VARCHAR, name: TEXT, age: INT); \n
query for: Group by the age 'column'
'''
print("Generated SQL:")
print(generate_sql(input_prompt=input_prompt))
```
Output:

```
Generated SQL:
SELECT name, age FROM engineers GROUP BY age
```
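For completeness, one plausible shape for the `generate_sql` helper used above, assuming a Hugging Face seq2seq checkpoint; `"t5-small"` is a placeholder for whichever text-to-SQL fine-tune was actually loaded:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Placeholder checkpoint: swap in the actual text-to-SQL fine-tune.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

def generate_sql(input_prompt: str) -> str:
    # Encode the schema+question prompt and decode the generated SQL.
    inputs = tokenizer(input_prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```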
So we would create pairs and embed the description:

```python
{'query': 'SELECT name, age FROM engineers GROUP BY age',
 'description': 'Group by the age column'}
```
Is this what you meant?
Update: also tried this with a small, CPU-ready LLM.