localGPT
run_localGPT_API
C:\Users\jiaojiaxing.conda\envs\localgpt\python.exe E:\jjx\localGPT\apiceshi.py
load INSTRUCTOR_Transformer
max_seq_length 512
bin C:\Users\jiaojiaxing.conda\envs\localgpt\lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.dll
Loading checkpoint shards: 100%|██████████| 3/3 [01:32<00:00, 30.91s/it]
 * Serving Flask app 'apiceshi'
 * Debug mode: off
INFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on http://127.0.0.1:5111
INFO:werkzeug:Press CTRL+C to quit
INFO:werkzeug:127.0.0.1 - - [20/Apr/2024 21:56:27] "GET / HTTP/1.1" 404 -
INFO:werkzeug:127.0.0.1 - - [20/Apr/2024 22:13:36] "GET /api/delete_source HTTP/1.1" 200 -
INFO:werkzeug:127.0.0.1 - - [20/Apr/2024 22:16:18] "GET / HTTP/1.1" 404 -
INFO:werkzeug:127.0.0.1 - - [20/Apr/2024 22:16:26] "GET / HTTP/1.1" 404 -
When I open the URL http://127.0.0.1:5111, the page shows:
Not Found The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
run_localGPT_API.py contains a set of API listeners. By default there is no handler registered for the root path, so a request to http://127.0.0.1:5111/ will not be processed.
The valid API calls are defined at the lines below:
https://github.com/PromtEngineer/localGPT/blob/e997a8a09545b655990dea641b548aeac9abcca5/run_localGPT_API.py#L87
https://github.com/PromtEngineer/localGPT/blob/e997a8a09545b655990dea641b548aeac9abcca5/run_localGPT_API.py#L99
https://github.com/PromtEngineer/localGPT/blob/e997a8a09545b655990dea641b548aeac9abcca5/run_localGPT_API.py#L116
https://github.com/PromtEngineer/localGPT/blob/e997a8a09545b655990dea641b548aeac9abcca5/run_localGPT_API.py#L161
For example, calling GET http://127.0.0.1:5111/api/run_ingest would ingest the documents.
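The routing behavior described above can be sketched with a toy stdlib server (this is a stand-in for illustration, not the actual run_localGPT_API.py code): only the /api/... path is registered, so a request to the root path returns 404 by design.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.error import HTTPError
from urllib.request import urlopen

# Toy stand-in for the API server's routing: only /api/run_ingest is
# handled, so GET / is answered with 404, just like in the log above.
class ApiOnlyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/run_ingest":
            body = b"ingest triggered"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)  # no handler for "/" or anything else

    def log_message(self, *args):  # silence request logging
        pass

def get_status(url):
    try:
        with urlopen(url) as resp:
            return resp.status
    except HTTPError as err:
        return err.code

server = HTTPServer(("127.0.0.1", 0), ApiOnlyHandler)  # 0 = any free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

root_status = get_status(f"http://127.0.0.1:{port}/")
api_status = get_status(f"http://127.0.0.1:{port}/api/run_ingest")
print(root_status, api_status)  # 404 200
server.shutdown()
```

So the 404 on the bare URL is expected behavior, not a sign that the server failed to start.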
Thanks for your reply. I opened http://127.0.0.1:5111/api/run_ingest in Chrome, but it shows: Not Found. The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
And the PyCharm terminal shows:
load INSTRUCTOR_Transformer
max_seq_length 512
bin C:\Users\jiaojiaxing.conda\envs\localgpt\lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.dll
Loading checkpoint shards: 100%|██████████| 3/3 [00:56<00:00, 18.72s/it]
 * Serving Flask app 'apiceshi'
 * Debug mode: on
INFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on http://127.0.0.1:5110
INFO:werkzeug:Press CTRL+C to quit
INFO:werkzeug: * Restarting with watchdog (windowsapi)
load INSTRUCTOR_Transformer
max_seq_length 512
bin C:\Users\jiaojiaxing.conda\envs\localgpt\lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.dll
Loading checkpoint shards: 100%|██████████| 3/3 [01:00<00:00, 20.05s/it]
WARNING:werkzeug: * Debugger is active!
INFO:werkzeug: * Debugger PIN: 772-901-304
INFO:werkzeug:127.0.0.1 - - [10/May/2024 10:37:01] "GET /run_ingest HTTP/1.1" 404 -
INFO:werkzeug:127.0.0.1 - - [10/May/2024 10:37:01] "GET /favicon.ico HTTP/1.1" 404 -
How can I solve this matter?
Running on http://127.0.0.1:5110/
@Suiji12 Your log shows the server is running on port 5110, so when you open the URL with port 5111 the requested URL is not found. Based on your settings, the URL below should work.
http://127.0.0.1:5110/api/run_ingest
Please check which port the server is running on: originally your port was 5111, but this time it is 5110.
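One way to avoid hard-coding the port is to read it from the Flask startup line itself. A small sketch (the helper name is made up for illustration):

```python
import re

def api_base_from_log(log_line):
    """Extract scheme, host, and port from a Flask 'Running on' log line."""
    m = re.search(r"Running on (https?)://([\d.]+):(\d+)", log_line)
    if m is None:
        raise ValueError("no 'Running on' line found")
    scheme, host, port = m.group(1), m.group(2), int(m.group(3))
    return f"{scheme}://{host}:{port}"

# The startup line from the log above:
base = api_base_from_log(" * Running on http://127.0.0.1:5110")
print(base + "/api/run_ingest")  # http://127.0.0.1:5110/api/run_ingest
```

Whatever port the server reports at startup is the one every /api/... call must use.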
Follow this link: http://127.0.0.1:5110/api/run_ingest.
Loading checkpoint shards: 100%|██████████| 3/3 [00:50<00:00, 16.96s/it]
 * Serving Flask app 'run_localGPT_API'
 * Debug mode: off
INFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on http://127.0.0.1:5110
INFO:werkzeug:Press CTRL+C to quit
Error: E:\jjx\localGPT/Cheshidatabase\chroma.sqlite3 - Another program is currently using this file, preventing access by the process.
Thanks for your reply, how can I solve this problem?
Error: E:\jjx\localGPT/Cheshidatabase\chroma.sqlite3 - Another program is currently using this file, preventing access by the process.
If I have to guess, you did some customization on localGPT, since the path shouldn't be Cheshidatabase\chroma.sqlite3
but the built-in path is DB\chroma.sqlite3
. Another process was accessing the database. You have two options:
- Locate the process that is accessing the database and kill the task associated with it.
- Restart the computer and start fresh. Sometimes the problem is resolved this way.
But if you have customized localGPT somehow, you need to be aware of the potential issues of doing that.
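Before hunting for the offending process, you can confirm the lock from Python with the stdlib sqlite3 module. This is a sketch: the path below is your customized one and is only an example, and the check works by briefly attempting to take SQLite's write lock.

```python
import sqlite3

def is_locked(path):
    """Return True if another process holds a write lock on the SQLite file."""
    try:
        conn = sqlite3.connect(path, timeout=1)
        try:
            conn.execute("BEGIN IMMEDIATE")  # try to acquire the write lock
            conn.rollback()                  # release it immediately
            return False
        finally:
            conn.close()
    except sqlite3.OperationalError:
        # "database is locked" (another holder) or the file is inaccessible
        return True

# Example path from the error message above; adjust to your setup.
print(is_locked(r"E:\jjx\localGPT\Cheshidatabase\chroma.sqlite3"))
```

If this keeps returning True after you have closed every localGPT process, a leftover Python process (or the Flask auto-reloader's child process) is the usual suspect.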
Thanks for your reply.
No problem. You're welcome.