visual-chatgpt
Can we have a windows install guide?
I was wondering why it just did not work until I noticed that half of the install instructions are for Linux -.-
If you are a Windows user, you can enable the Windows feature called Windows Subsystem for Linux under Windows Features.
How do I install it on a Mac?
I successfully got it running on Win10. The key point is to set the Python version to 3.8.1 so that several of the requirements can be installed. Then find the right builds of torch 1.12.1+cu113, torchvision+cu113, etc. to use in place of torch... in the requirements. Next, run the commands from download.sh manually, rewritten in Windows PowerShell style. Set the OPENAI_API_KEY environment variable in the system configuration. Finally, edit the self.tool section in visual_chatgpt.py to make sure there is enough VRAM to run.
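A rough sketch of those steps in PowerShell, assuming a conda environment with CUDA 11.3; the environment name is arbitrary, and the checkpoint URL/filename are placeholders for whatever download.sh actually fetches:

```powershell
# Create and activate an environment pinned to Python 3.8.1 (assumed conda setup)
conda create -n visgpt python=3.8.1
conda activate visgpt

# Install the CUDA 11.3 builds explicitly (remove the plain torch/torchvision
# lines from requirements.txt first so pip does not overwrite them)
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
pip install -r requirements.txt

# download.sh relies on Linux tools, so run its downloads by hand with Invoke-WebRequest.
# <checkpoint-url> and <checkpoint-file> are placeholders; copy the real URLs from download.sh.
Invoke-WebRequest -Uri "<checkpoint-url>" -OutFile "<checkpoint-file>"
```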
I get an error, can you help me? AttributeError: partially initialized module 'cv2' has no attribute 'gapi_wip_gst_GStreamerPipeline' (most likely due to a circular import). It looks like something is wrong in cv2?
It seems like Python is importing your own file as 'cv2', and that file in turn tries to import the real 'cv2', which causes the circular import. I guess you have created a file named 'cv2' (e.g. cv2.py) in your current path, and it shadows the 'cv2' package.
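One quick way to check this from PowerShell (a diagnostic sketch, run from the project directory with the environment you created activated):

```powershell
# Show which file Python would load as 'cv2' without actually executing it;
# if the path points into your project folder instead of site-packages,
# a local cv2.py (or cv2/ directory) is shadowing the real OpenCV package.
python -c "import importlib.util; print(importlib.util.find_spec('cv2').origin)"

# Also look for anything named cv2 in the current directory
Get-ChildItem cv2*
```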
Could you please tell me what the system configuration is?
Well, I guess you're a Windows user. The "system configuration" means the Windows environment variables. You can set environment variables by referring to this article, or this one for Chinese readers.
The method above sets OPENAI_API_KEY permanently. Alternatively, if you don't want too much trouble, you can enter set OPENAI_API_KEY={Your_Private_Openai_Key} to set OPENAI_API_KEY temporarily for the current console session (reference).
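For reference, a short sketch of both options on Windows (replace the placeholder with your own key; setx writes to the user environment and only affects consoles opened afterwards):

```powershell
# Temporary: only for the current session
# - in cmd.exe:   set OPENAI_API_KEY={Your_Private_Openai_Key}
# - in PowerShell:
$env:OPENAI_API_KEY = "{Your_Private_Openai_Key}"

# Permanent (user-level): takes effect in newly opened consoles
setx OPENAI_API_KEY "{Your_Private_Openai_Key}"
```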
I managed to do everything on Win10 following your advice. I run the script through Anaconda3 and it goes smoothly, but it gets stuck and freezes after saying that it is running on localhost... Could it be a lack of VRAM? How much is needed? Thanks
Did it say 'Running on local URL: http://0.0.0.0:7860'? That means you have successfully run it. You just need to open 127.0.0.1:7860 or localhost:7860 in a web browser.
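If you do still suspect a VRAM shortage, one way to watch GPU memory while the models load (assuming NVIDIA drivers are installed, so nvidia-smi is available) is:

```powershell
# Print used vs. total GPU memory every 5 seconds; stop with Ctrl+C.
# If memory.used approaches memory.total during startup, trim the loaded tools.
nvidia-smi --query-gpu=memory.used,memory.total --format=csv -l 5
```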
Thanks! It worked, but I got a different issue now... I'll open a new thread.
QAQ