
sqlcoder LLM support?

Open andreped opened this issue 3 months ago • 4 comments

Is your feature request related to a problem? Please describe. Support has been added for several proprietary and open-source LLMs in Vanna.

However, it seems like one open-source LLM variant outperforms LLMs like GPT-4 and Claude-2.0 on SQL completion tasks:

[image: benchmark of sqlcoder against GPT-4 and Claude-2.0 on SQL completion tasks]

I think it would be highly relevant to the community to add official support for it in the framework. Even the 7B-parameter model outperforms GPT-4. Hence, for SQL completion tasks, this model seems like a no-brainer to use: https://github.com/defog-ai/sqlcoder

Describe the solution you'd like

Different sqlcoder LLMs could be used through a common API, similar to the existing Ollama integration: https://github.com/vanna-ai/vanna/blob/main/src/vanna/ollama/ollama.py
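
To make the idea concrete, here is a rough sketch (hypothetical, not an actual implementation) of what a dedicated sqlcoder backend could look like. The class name, config keys, and default checkpoint are assumptions; only the system_message/user_message/assistant_message/submit_prompt hooks follow the pattern used by the existing LLM backends, and in practice the class would be combined with a vector store (e.g. ChromaDB_VectorStore) just like the Ollama example:

    from vanna.base import VannaBase

    class SQLCoder(VannaBase):
        """Hypothetical sqlcoder backend sketch; not part of vanna."""

        def __init__(self, config=None):
            VannaBase.__init__(self, config=config)
            config = config or {}
            # The 'model' config key and default checkpoint are assumed.
            self.model_name = config.get("model", "defog/sqlcoder-7b-2")

        def system_message(self, message: str) -> dict:
            return {"role": "system", "content": message}

        def user_message(self, message: str) -> dict:
            return {"role": "user", "content": message}

        def assistant_message(self, message: str) -> dict:
            return {"role": "assistant", "content": message}

        def submit_prompt(self, prompt, **kwargs) -> str:
            # Flatten the chat-style prompt into plain text and run local
            # inference (e.g. via transformers); left as a stub in this sketch.
            text = "\n".join(m["content"] for m in prompt)
            raise NotImplementedError("run sqlcoder inference on `text` here")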


@zainhoda I can make a PR to add support for this LLM.

andreped avatar Mar 21 '24 13:03 andreped

I've used the 7B version of sqlcoder via Ollama and found it to be extremely slow for some reason compared to models like mistral.

I think if we use sqlcoder 70B it pretty much has to be via some API. Is there an API you were thinking of using?

zainhoda avatar Mar 21 '24 14:03 zainhoda

Here's a benchmark that I ran:

[image: vanna-llm-sql-benchmark-2024-03-20]

For the ones in purple, they were set up like this:

    # Import paths may vary by vanna version.
    from vanna.chromadb import ChromaDB_VectorStore
    from vanna.ollama import Ollama

    # Combine the Chroma vector store with the Ollama LLM backend.
    class Vanna_Ollama(ChromaDB_VectorStore, Ollama):
        def __init__(self, config=None):
            ChromaDB_VectorStore.__init__(self, config=config)
            Ollama.__init__(self, config=config)

    # 'path' is the on-disk directory for the Chroma vector store.
    vn = Vanna_Ollama(config={'model': 'sqlcoder', 'path': path})

I'm not sure we need to do anything additional for running locally.
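
For reference, once `vn` is constructed as above, a minimal local round trip looks roughly like this (a sketch assuming the standard Vanna train/generate API; the DDL and question are made up for illustration):

    # Hypothetical schema and question, purely for illustration.
    vn.train(ddl="CREATE TABLE customers (id INTEGER, name TEXT, country TEXT)")
    sql = vn.generate_sql(question="How many customers are in each country?")
    print(sql)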

zainhoda avatar Mar 21 '24 14:03 zainhoda

> I've used the 7B version of sqlcoder via Ollama and found it to be extremely slow for some reason compared to models like mistral.
>
> I think if we use sqlcoder 70B it pretty much has to be via some API. Is there an API you were thinking of using?

There is documentation in the sqlcoder repo on which API to use: https://github.com/defog-ai/sqlcoder/blob/main/inference.py#L67
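
For reference, that inference script boils down to standard Hugging Face transformers usage. A minimal sketch (the checkpoint name and generation settings here are assumptions; the exact prompt template lives in the sqlcoder repo):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "defog/sqlcoder-7b-2"  # assumed checkpoint; see the sqlcoder repo
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )

    prompt = "..."  # fill in sqlcoder's own prompt template (question + schema)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=300, do_sample=False)
    print(tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    ))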

I can run some simple performance benchmarks if you'd like. If possible, I can do this benchmark in Colab. I have managed to run 7B models in Colab before, but it could be that this model goes beyond the limits (RAM or VRAM).


EDIT: Could you share the exact code you used to reproduce the benchmark for sqlcoder, as well as which dataset you used? Perhaps it is public?

andreped avatar Mar 21 '24 14:03 andreped

> Here's a benchmark that I ran:
>
> [image: vanna-llm-sql-benchmark-2024-03-20]
>
> For the ones in purple, they were set up like this:
>
>     class Vanna_Ollama(ChromaDB_VectorStore, Ollama):
>         def __init__(self, config=None):
>             ChromaDB_VectorStore.__init__(self, config=config)
>             Ollama.__init__(self, config=config)
>
>     vn = Vanna_Ollama(config={'model': 'sqlcoder', 'path': path})
>
> I'm not sure we need to do anything additional for running locally.

I assume your benchmark runs the Vanna functions as-is, without catering to the prompt format of SQL-expert open LLMs like sqlcoder, hence the poor performance. Once the prompt format is catered for, it really is marginally better than GPT-3.5 and comparable to GPT-4.
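
For illustration, here is a paraphrased sketch of the sqlcoder-style prompt template (the exact wording and tags vary by model version; check the defog-ai/sqlcoder repo before relying on it):

    def make_sqlcoder_prompt(question: str, schema_ddl: str) -> str:
        # Paraphrased from the sqlcoder repo's prompt file; not verbatim.
        return (
            "### Task\n"
            f"Generate a SQL query to answer [QUESTION]{question}[/QUESTION]\n\n"
            "### Database Schema\n"
            "The query will run on a database with the following schema:\n"
            f"{schema_ddl}\n\n"
            "### Answer\n"
            "Given the database schema, here is the SQL query that answers "
            f"[QUESTION]{question}[/QUESTION]\n"
            "[SQL]"
        )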

emraza1 avatar Apr 12 '24 23:04 emraza1