pandas-ai
Please add local LLM support via LM Studio
🚀 The feature
Dear devs, great project. Would be awesome if we could add support for LM Studio and the local server API given the rise in popularity.
Local API:

```python
# Example: reuse your existing OpenAI setup
import os
import openai

openai.api_base = "http://localhost:1234/v1"  # point to the local server
openai.api_key = ""  # no need for an API key

completion = openai.ChatCompletion.create(
    model="local-model",  # this field is currently unused
    messages=[
        {"role": "system", "content": "Always answer in rhymes."},
        {"role": "user", "content": "Introduce yourself."},
    ],
)

print(completion.choices[0].message)
```
Motivation, pitch
LM Studio is the best GUI for local LLMs.
Alternatives
No response
Additional context
No response
Check my fork version. Currently, it supports the following example:

```python
import pandas as pd
from pandasai import SmartDataframe

# Sample DataFrame
df = pd.DataFrame({
    "country": ["United States", "United Kingdom", "France", "Germany", "Italy", "Spain", "Canada", "Australia", "Japan", "China"],
    "gdp": [19294482071552, 2891615567872, 2411255037952, 3435817336832, 1745433788416, 1181205135360, 1607402389504, 1490967855104, 4380756541440, 14631844184064],
    "happiness_index": [6.94, 7.16, 6.66, 7.07, 6.38, 6.4, 7.23, 7.22, 5.87, 5.12]
})

from pandasai.llm import CustOpenAI

llm = CustOpenAI(api_base="http://127.0.0.1:1234/v1", model_name="local-model")
df_llm = SmartDataframe(df, config={"llm": llm})
df_llm.chat('Which are the 5 happiest countries?')
```
Thanks. I tried it, but it seems to not work for me.

```python
import pandas as pd
from pandasai import SmartDataframe

# Sample DataFrame
df = pd.DataFrame({
    "country": ["United States", "United Kingdom", "France", "Germany", "Italy", "Spain", "Canada", "Australia", "Japan", "China"],
    "gdp": [19294482071552, 2891615567872, 2411255037952, 3435817336832, 1745433788416, 1181205135360, 1607402389504, 1490967855104, 4380756541440, 14631844184064],
    "happiness_index": [6.94, 7.16, 6.66, 7.07, 6.38, 6.4, 7.23, 7.22, 5.87, 5.12]
})

from pandasai.llm import CustOpenAI

llm = CustOpenAI(api_base="http://localhost:1234/v1", model_name="local-model")
df_llm = SmartDataframe(df, config={"llm": llm})
response = df_llm.chat('Which are the 5 happiest countries?')
print(response)
```

Output:

```
Unfortunately, I was not able to answer your question, because of the following error:

Error code: 401 - {'error': {'message': 'Incorrect API key provided: OPENAI_A****OKEN. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

Process finished with exit code 0
```
BTW: LM Studio updated the API proxy syntax to this:

```python
# Example: reuse your existing OpenAI setup
from openai import OpenAI

# Point to the local server
client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

completion = client.chat.completions.create(
    model="local-model",  # this field is currently unused
    messages=[
        {"role": "system", "content": "Always answer in rhymes."},
        {"role": "user", "content": "Introduce yourself."},
    ],
    temperature=0.7,
)

print(completion.choices[0].message)
```
I checked LM Studio version 0.2.8, and it works with Llama2-13-q4 (among others). Please run the following example:
```python
import pandas as pd
from pandasai import SmartDataframe

import logging
logging.basicConfig(filename='_temp_.log', level=logging.INFO)
logging.info('Started')

# Sample DataFrame
df = pd.DataFrame({
    "country": ["United States", "United Kingdom", "France", "Germany", "Italy", "Spain", "Canada", "Australia", "Japan", "China"],
    "gdp": [19294482071552, 2891615567872, 2411255037952, 3435817336832, 1745433788416, 1181205135360, 1607402389504, 1490967855104, 4380756541440, 14631844184064],
    "happiness_index": [6.94, 7.16, 6.66, 7.07, 6.38, 6.4, 7.23, 7.22, 5.87, 5.12]
})

# Instantiate an LLM
from pandasai.llm import CustOpenAI

llm = CustOpenAI(api_base="http://localhost:1378/v1", model_name="local-model")
llm.chat_completion("Hi")
```
Please raise an issue in my fork and I will handle it.
You can find the log output in `_temp_.log`.
Hint: adjusting prompts to align with models other than OpenAI's is crucial. Many local LLMs do not consistently follow the provided instructions, so reliably extracting code from the generated output becomes unstable.
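One common mitigation (a generic sketch, not part of pandas-ai's API) is to extract code defensively from whatever the model returns, falling back to the raw text when the model ignores fencing instructions:

```python
import re

# The fence string is built programmatically so this sketch stays readable.
_FENCE = "`" * 3

def extract_code(llm_output: str) -> str:
    """Return the first fenced code block from an LLM reply, or the raw text.

    Local models often ignore formatting instructions, so when no fence is
    found we fall back to the whole reply instead of failing.
    """
    pattern = _FENCE + r"(?:python)?\s*\n(.*?)" + _FENCE
    match = re.search(pattern, llm_output, re.DOTALL)
    return match.group(1).strip() if match else llm_output.strip()
```

A fallback like this does not fix an unreliable model, but it keeps the pipeline from crashing when the reply is only partially well-formed.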
Thanks for the update. I am running LM Studio 0.2.9 and Llama2-13-q6. I cannot open an issue in your fork (it seems you need to enable issues first).
Here is the error, running the code you shared:

```
line 930, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: OPENAI_A****OKEN. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
```
Log output:

```
INFO:root:Started
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 401 Unauthorized"
INFO:root:Started
INFO:root:Started
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 401 Unauthorized"
```
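The log shows requests still going to api.openai.com rather than the local server. One way to confirm the server side is healthy (a generic diagnostic sketch, not pandas-ai API; the port and endpoint assume LM Studio's OpenAI-compatible server) is to query the local `/v1/models` endpoint directly:

```python
import json
import urllib.request

def models_url(base_url: str) -> str:
    """Build the OpenAI-compatible /models endpoint from a base URL."""
    return base_url.rstrip("/") + "/models"

def list_local_models(base_url: str = "http://localhost:1234/v1") -> list:
    """Ask the local server which models it serves.

    If this call fails or hangs, pandas-ai cannot reach the server either;
    if it succeeds, a 401 naming api.openai.com means the client-side
    api_base is simply not being applied.
    """
    with urllib.request.urlopen(models_url(base_url), timeout=5) as resp:
        payload = json.load(resp)
    return [m["id"] for m in payload.get("data", [])]
```

If `list_local_models()` returns your loaded model, the server is fine and the problem is the client-side base URL configuration.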
@yfyang86 would be great to get this up and running.
Updated. Although chart generation is not stable for Mixtral 8x7B-Q4, Llama2-13B, and others, it works on LM Studio (0.2.10). Check it out here.
Dear @janhp and @yfyang86, thanks for opening this issue. I also want to use a local LLM through the LM Studio API, but it seems that I cannot import CustOpenAI. I get this error:

```
ImportError: cannot import name 'CustOpenAI' from 'pandasai.llm' (/usr/local/lib/python3.10/dist-packages/pandasai/llm/__init__.py)
```

I would be very grateful for any help. Thank you, Aymen
Same issue here @yfyang86
Dear @janhp, you shouldn't install the original pandasai package through pip. Instead, install from the fork:

```shell
git clone https://github.com/yfyang86/pandas-ai
cd pandas-ai
git checkout cust-openai
git pull
python -m pip install .
```

Then it is possible to use CustOpenAI. However, I was not able to reach the LM Studio API through CustOpenAI; let me know if it works for you. Thanks.
Are there any updates on this? Is there a working solution?
@georgepoly-maker unfortunately not.
However it would be great if anyone has some time and wants to pick this issue to work on. Just let me know, happy to assign the issue!
I pulled the latest; I tested it on Colab, I tested locally, and it failed every single time.

I double-checked that I had the correct branch (`custopenai.py`). LM Studio's main method to call is:

```python
client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")
```

but I get:

```
openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: null. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
```

Here is the code:
```python
import pandas as pd
from pandasai import SmartDataframe

# Sample DataFrame
df = pd.DataFrame({
    "country": ["United States", "United Kingdom", "France", "Germany", "Italy", "Spain", "Canada", "Australia", "Japan", "China"],
    "gdp": [19294482071552, 2891615567872, 2411255037952, 3435817336832, 1745433788416, 1181205135360, 1607402389504, 1490967855104, 4380756541440, 14631844184064],
    "happiness_index": [6.94, 7.16, 6.66, 7.07, 6.38, 6.4, 7.23, 7.22, 5.87, 5.12]
})

from pandasai.llm import CustOpenAI

# Note: the host must not repeat the port and version, since the f-string
# below appends them; otherwise the composed URL is malformed.
_host_url_ = "http://localhost"
_port_number_ = "1234"
_llm_version_ = "v1"
llm = CustOpenAI(
    api_base=f"{_host_url_}:{_port_number_}/{_llm_version_}",  # http://localhost:1234/v1
    api_token="null",
    model_name="local-model",
)

# T1:
llm.chat_completion('Hi')

# T2:
df_llm = SmartDataframe(df, config={"llm": llm})
df_llm.chat('Which are the 5 happiest countries?')
df_llm.chat('What is the sum of the GDPs of the 2 unhappiest countries?')
```
Now the branch should work. Please check issue-1 for details:

Tl;dr

- openai==1.12.0 renames `api_base` to `base_url`, and `base_url` may not work
- LM Studio==0.2.14 provides something like `http://localhost:1378/v1/chat/completions` only, hence chat vs completion mode should be modified accordingly.
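The rename can be bridged with a tiny shim (a sketch, not pandas-ai or openai API; `local-model` and the port are placeholders) that maps the pre-1.0 keyword onto the new one, so one call site works across both conventions:

```python
def client_kwargs(api_base: str, api_key: str = "not-needed") -> dict:
    """Map the pre-1.0 `api_base` name onto the >=1.0 `base_url` kwarg."""
    return {"base_url": api_base, "api_key": api_key}

# Usage with openai>=1.0, in chat mode to match the
# /v1/chat/completions endpoint LM Studio exposes:
#
#   from openai import OpenAI
#   client = OpenAI(**client_kwargs("http://localhost:1234/v1"))
#   completion = client.chat.completions.create(
#       model="local-model",
#       messages=[{"role": "user", "content": "Hi"}],
#   )
```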
Awesome! I did get this working but great to see the integration!
Requested feature has been merged. This issue can be closed.