visual-chatgpt
It runs successfully, but when I open http://0.0.0.0:7860 I get the following error. Could someone help?
The page at http://0.0.0.0:7860/ may be temporarily unreachable, or it may have moved permanently to a new address.
@IssacaLaw Are you running it locally? If it is on a server, you can't access it this way directly; either set share=True, or forward the port to your local machine.
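For reference, a minimal sketch of a publicly reachable Gradio launch. The wrapper function and its defaults are illustrative, not the repo's exact code; `demo` is assumed to be the Gradio app object built earlier in visual-chatgpt.py:

```python
# Sketch of a launch call that works both locally and on a remote server.
def launch_public(demo, port: int = 7860, public: bool = True):
    demo.launch(
        server_name="0.0.0.0",  # bind to all interfaces (this is NOT the URL to open)
        server_port=port,
        share=public,           # share=True prints a temporary public gradio.live URL
    )
```

With share=True, Gradio prints both a local URL and a public gradio.live URL; open the public one when the app runs on a remote server.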
@IssacaLaw Or you can take a look at my Colab notebook https://github.com/K-tang-mkv/visual-chatgpt-googlecolab. I changed the launch call on the last line of visual-chatgpt.py so it can be accessed publicly.
What GPU are you all using?
@IssacaLaw Or you can take a look at my Colab notebook https://github.com/K-tang-mkv/visual-chatgpt-googlecolab. I changed the launch call on the last line of visual-chatgpt.py so it can be accessed publicly.
Thanks for the reply. I am running locally, but after changing the code at line 958 as you did, I still get the same error. Is there anything else I need to change?
What GPU are you all using?
I'm using a 3090 Ti.
Where does the API key go on Windows?
Where does the API key go on Windows?
Run this in the command window of your environment: set OPENAI_API_KEY={Your_Private_Openai_Key}
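Note that `set` only affects the current command window (`setx` persists it for new windows). A small sketch, with a helper name of my own choosing, to verify the key is actually visible to Python before launching the app:

```python
import os

def check_api_key(env=None):
    """Return the OpenAI key if the shell exported it, else None."""
    env = os.environ if env is None else env
    return env.get("OPENAI_API_KEY")

# If this returns None, re-run `set OPENAI_API_KEY=...` in the SAME
# command window you launch the app from.
```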
Bro, what does the bash download.sh step show on your side?
If you added share=True, can you try the public URL it prints?
Bro, what does the bash download.sh step show on your side?
A git window pops up, the ControlNet download completes, and the window closes automatically.
If you added share=True, can you try the public URL it prints?
What do you mean? Aren't we supposed to open the URL it provides, http://0.0.0.0:7860/?
Change 0.0.0.0 to localhost or 127.0.0.1: http://localhost:7860/
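0.0.0.0 is a bind address meaning "listen on every interface"; it is not itself a routable destination, which is why some browsers refuse it. A one-line helper (illustrative, not part of the repo) for rewriting the URL Gradio prints:

```python
def browseable(url: str) -> str:
    """Swap the 0.0.0.0 bind address Gradio prints for the loopback name."""
    return url.replace("0.0.0.0", "localhost")
```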
Bro, at the pip install -r requirement.txt step even the first download is extremely slow, and then it errors out:

    (visgpt) PS D:\IDM\IDM_download\压缩文件\visual-chatgpt-main\visual-chatgpt-main> pip install -r requirement.txt
    Collecting torch==1.12.1
      Downloading torch-1.12.1-cp38-cp38-win_amd64.whl (161.9 MB)
         13.1/161.9 MB 26.9 kB/s eta 1:32:21
    ERROR: Exception:
    Traceback (most recent call last):
      File "D:\Anaconda\envs\visgpt\lib\site-packages\pip\_vendor\urllib3\response.py", line 438, in _error_catcher
        yield
      File "D:\Anaconda\envs\visgpt\lib\site-packages\pip\_vendor\urllib3\response.py", line 561, in read
        data = self._fp_read(amt) if not fp_closed else b""
      File "D:\Anaconda\envs\visgpt\lib\site-packages\pip\_vendor\urllib3\response.py", line 527, in _fp_read
        return self._fp.read(amt) if amt is not None else self._fp.read()
      File "D:\Anaconda\envs\visgpt\lib\site-packages\pip\_vendor\cachecontrol\filewrapper.py", line 90, in read
        data = self.__fp.read(amt)
      File "D:\Anaconda\envs\visgpt\lib\http\client.py", line 459, in read
        n = self.readinto(b)
      File "D:\Anaconda\envs\visgpt\lib\http\client.py", line 503, in readinto
        n = self.fp.readinto(b)
      File "D:\Anaconda\envs\visgpt\lib\socket.py", line 669, in readinto
        return self._sock.recv_into(b)
      File "D:\Anaconda\envs\visgpt\lib\ssl.py", line 1241, in recv_into
        return self.read(nbytes, buffer)
      File "D:\Anaconda\envs\visgpt\lib\ssl.py", line 1099, in read
        return self._sslobj.read(len, buffer)
    socket.timeout: The read operation timed out

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "D:\Anaconda\envs\visgpt\lib\site-packages\pip\_internal\cli\base_command.py", line 160, in exc_logging_wrapper
        status = run_func(*args)
      ...
      File "D:\Anaconda\envs\visgpt\lib\site-packages\pip\_internal\network\utils.py", line 63, in response_chunks
        for chunk in response.raw.stream(
      File "D:\Anaconda\envs\visgpt\lib\site-packages\pip\_vendor\urllib3\response.py", line 622, in stream
        data = self.read(amt=amt, decode_content=decode_content)
      File "D:\Anaconda\envs\visgpt\lib\site-packages\pip\_vendor\urllib3\response.py", line 587, in read
        raise IncompleteRead(self._fp_bytes_read, self.length_remaining)
      File "D:\Anaconda\envs\visgpt\lib\contextlib.py", line 131, in __exit__
        self.gen.throw(type, value, traceback)
      File "D:\Anaconda\envs\visgpt\lib\site-packages\pip\_vendor\urllib3\response.py", line 443, in _error_catcher
        raise ReadTimeoutError(self._pool, None, "Read timed out.")
    pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out.
Are you behind a proxy? Turn it off and try the Tsinghua mirror: pip install -r requirement.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
Change 0.0.0.0 to localhost or 127.0.0.1: http://localhost:7860/
Thanks, I'm in the UI now. I had assumed 0.0.0.0 was also a local address, so I never tried localhost.
Change 0.0.0.0 to localhost or 127.0.0.1: http://localhost:7860/
But after entering the UI and sending some commands to visgpt: with my proxy on it shows [screenshot], and with the proxy off it shows [screenshot].
I've given up on deploying locally 😥 and am just playing with another guy's Colab deployment.
@F4irytale The Tsinghua mirror worked, but it got stuck on the last five packages. Also, PyCharm's available-packages list is blank and nothing can be searched; the packages already installed from requirement.txt don't show up either. I don't know why. Is it related to the Python 3.8 set up in step one?
I downloaded https://www.anaconda.com/, ran the first three commands in the command window, double-clicked download.sh to run it, then ran set OPENAI_API_KEY={} and the last two commands. I used pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113 to fix the CUDA compatibility problem, and changed cuda:1~8 in the launch file to pick which GPU to use. In the end I died on insufficient VRAM and network failures 😂
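The "change cuda:1~8 by hand" step can be made a bit safer. A sketch (the helper is mine, not the repo's) that falls back instead of crashing when the chosen GPU index does not exist on your machine:

```python
def pick_device(preferred: int, gpu_count: int) -> str:
    """Return a torch device string, falling back to cuda:0 or cpu.

    `gpu_count` would normally come from torch.cuda.device_count();
    it is a parameter here so the sketch stays torch-free.
    """
    if gpu_count == 0:
        return "cpu"
    if preferred < gpu_count:
        return f"cuda:{preferred}"
    return "cuda:0"
```

Each model's device string in the launch file could then be built with this helper instead of a hard-coded cuda:N.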
Bro, what does the bash download.sh step show on your side?
When I run bash download.sh I get errors:
(venv) PS G:\Project_Python\visual-chatgpt-main> bash download.sh
download.sh: line 1: git: command not found
ln: failed to create symbolic link './ldm': File exists
ln: failed to create symbolic link './cldm': File exists
ln: failed to create symbolic link './annotator': File exists
download.sh: line 5: cd: ControlNet/models: No such file or directory
download.sh: line 6: wget: command not found
download.sh: line 7: wget: command not found
download.sh: line 8: wget: command not found
download.sh: line 9: wget: command not found
download.sh: line 10: wget: command not found
download.sh: line 11: wget: command not found
download.sh: line 12: wget: command not found
download.sh: line 13: wget: command not found
Install git first, then run it again: https://git-scm.com/
First install git for your operating system, then install wget.
@IssacaLaw Right, locally you just access it via localhost; I overlooked that too. And if you set share=True, it provides not only a private local address but also a public URL you can access.
If you are running out of VRAM, you can pick just the models you need in self.tools, choosing according to how much VRAM you have; check out this readme: https://github.com/microsoft/visual-chatgpt#gpu-memory-usage. Or you can try my Colab version: https://github.com/K-tang-mkv/visual-chatgpt-googlecolab
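A sketch of the idea (the tool names and device strings below are illustrative, not the repo's exact keys): build self.tools from a filtered dict so only the models that fit your VRAM get loaded:

```python
# Illustrative tool -> device map; the real entries live in visual-chatgpt.py.
FULL_TOOLS = {
    "ImageCaptioning": "cuda:0",
    "Text2Image": "cuda:0",
    "InstructPix2Pix": "cuda:0",
}

def select_tools(full, wanted):
    """Keep only the tools listed in `wanted`, so fewer models are loaded."""
    return {name: dev for name, dev in full.items() if name in wanted}
```

For example, select_tools(FULL_TOOLS, {"ImageCaptioning", "Text2Image"}) would load two models instead of the full set.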
The versions pinned on lines 3 and 5 of requirement.txt won't install. How did you all handle this? I'm on Python 3.7; is anyone else using 3.7?
The versions pinned on lines 3 and 5 of requirement.txt won't install. How did you all handle this? I'm on Python 3.7; is anyone else using 3.7?
Just switch to 3.8; why insist on 3.7? Also, for other problems it's better to open a new issue.
@IssacaLaw Are you running it locally? If it is on a server, you can't access it this way directly; either set share=True, or forward the port to your local machine.
Boss, I also run it on Colab. The UI finally opened, but after I said hello it errored out as follows: RuntimeError: CUDA error: no kernel image is available for execution on the device. CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. I tried downgrading CUDA to 11.6 to match torch, but it still fails. I also ran your Colab file end to end and got the same error. Do you know how to fix it? Much appreciated!
Hi, this error appears because the PyTorch and CUDA versions are incompatible. In my Colab project torch==1.12.1 with cuda=12.0 does run successfully in the end. Make sure you run my Colab version in a freshly opened notebook; that should work.
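A quick way to inspect the pairing that "no kernel image" errors hinge on. The helper takes the torch module as a parameter so the sketch itself needs no GPU (the function name is mine):

```python
def cuda_report(torch_mod):
    """Summarise the installed torch build vs. the CUDA runtime it sees."""
    return {
        "torch": torch_mod.__version__,
        "built_for_cuda": torch_mod.version.cuda,  # CUDA the wheel was compiled against
        "cuda_available": torch_mod.cuda.is_available(),
    }

# In a notebook: import torch; print(cuda_report(torch))
# "no kernel image" usually means the wheel was not built for your GPU's
# compute capability, so install a wheel matching your CUDA setup.
```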
Bro, could you put out a tutorial? Following you.
Bro, how do I use Azure's OpenAI? I swapped the key for the Azure one and it doesn't work. I tried setting it both in the config file under src and in the code, still no luck; the print output never even gets there. Is there a way?
Why does even http://localhost:7860 now give "localhost refused to connect"? I've also enabled the FTP service.