Torch not compiled with CUDA
conda install pytorch==1.12.1 torchvision==0.13.1 cudatoolkit=11.6 -c pytorch -c conda-forge
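If the error persists after installing, it is worth confirming that the environment actually picked up a CUDA build of torch rather than a CPU-only one. A minimal check with plain PyTorch (nothing here is specific to visual_chatgpt.py):
import torch

# Installed PyTorch version; CPU-only pip wheels are tagged e.g. "1.12.1+cpu"
print(torch.__version__)
# CUDA version this build was compiled against; None means a CPU-only build
print(torch.version.cuda)
# True only when a CUDA-enabled build can also see a working GPU and driver
print(torch.cuda.is_available())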
Already tried this, same assertion error:
AssertionError: Torch not compiled with CUDA enabled
I have CUDA 11.7 installed. Do I need to downgrade to 11.6?
EDIT: This does seem to be the case. Now, unless you have multiple GPUs on your system, you will most likely encounter a GPU ordinal error. This can be remedied by changing the following code in visual_chatgpt.py:
print("Initializing VisualChatGPT")
self.llm = OpenAI(temperature=0)
self.edit = ImageEditing(device="cuda:6")
self.i2t = ImageCaptioning(device="cuda:4")
self.t2i = T2I(device="cuda:1")
self.image2canny = image2canny()
self.canny2image = canny2image(device="cuda:1")
self.image2line = image2line()
self.line2image = line2image(device="cuda:1")
self.image2hed = image2hed()
self.hed2image = hed2image(device="cuda:2")
self.image2scribble = image2scribble()
self.scribble2image = scribble2image(device="cuda:3")
self.image2pose = image2pose()
self.pose2image = pose2image(device="cuda:3")
self.BLIPVQA = BLIPVQA(device="cuda:4")
self.image2seg = image2seg()
self.seg2image = seg2image(device="cuda:7")
self.image2depth = image2depth()
self.depth2image = depth2image(device="cuda:7")
self.image2normal = image2normal()
self.normal2image = normal2image(device="cuda:5")
self.pix2pix = Pix2Pix(device="cuda:3")
self.memory = ConversationBufferMemory(memory_key="chat_history", output_key='output')
to this:
print("Initializing VisualChatGPT")
self.llm = OpenAI(temperature=0)
self.edit = ImageEditing(device="cuda:0")
self.i2t = ImageCaptioning(device="cuda:0")
self.t2i = T2I(device="cuda:0")
self.image2canny = image2canny(device="cuda:0")
self.canny2image = canny2image(device="cuda:0")
self.image2line = image2line(device="cuda:0")
self.line2image = line2image(device="cuda:0")
self.image2hed = image2hed(device="cuda:0")
self.hed2image = hed2image(device="cuda:0")
self.image2scribble = image2scribble(device="cuda:0")
self.scribble2image = scribble2image(device="cuda:0")
self.image2pose = image2pose(device="cuda:0")
self.pose2image = pose2image(device="cuda:0")
self.BLIPVQA = BLIPVQA(device="cuda:0")
self.image2seg = image2seg(device="cuda:0")
self.seg2image = seg2image(device="cuda:0")
self.image2depth = image2depth(device="cuda:0")
self.depth2image = depth2image(device="cuda:0")
self.image2normal = image2normal(device="cuda:0")
self.normal2image = normal2image(device="cuda:0")
self.pix2pix = Pix2Pix(device="cuda:0")
self.memory = ConversationBufferMemory(memory_key="chat_history", output_key='output')
But this fix ONLY applies if, like me, you have a single GPU, and at that point you will probably run into an out-of-memory (OOM) error, as I just did.
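Before editing those ordinals, it may help to check how many devices PyTorch can actually see and how much memory each one has; any index at or above the count raises the ordinal error, and the memory total gives a rough idea of whether all of the models above can fit on one card. A small sketch using only standard PyTorch calls:
import torch

# Valid device ordinals are 0 .. device_count() - 1; "cuda:6" needs at least 7 GPUs
count = torch.cuda.device_count()
print(f"visible GPUs: {count}")

for i in range(count):
    props = torch.cuda.get_device_properties(i)
    # total_memory is reported in bytes
    print(f"cuda:{i} -> {props.name}, {props.total_memory / 1024 ** 3:.1f} GiB")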
Same here, on a MacBook Pro.
I think CUDA is not supported on Macs, because Macs typically don't have NVIDIA GPUs.
Can I use the CPU instead of the GPU?
EDIT: With this command I managed to run it on the CPU:
python visual_chatgpt.py --load ImageCaptioning_cpu,Text2Image_cpu
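For what it's worth, the usual way scripts avoid hard-coding "cuda" is to pick the device at runtime and fall back to the CPU; a generic sketch (the tiny nn.Linear model is just a stand-in, not code from this repo):
import torch
import torch.nn as nn

# Use the GPU when a CUDA-enabled build and device are available, otherwise the CPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = nn.Linear(4, 2).to(device)    # stand-in model, moved to the chosen device
x = torch.randn(1, 4, device=device)  # input created on the same device
print(model(x), device)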
Can I use the CPU instead of the GPU?
I don't think so. When PyTorch is built without CUDA support, any attempt to use the GPU raises that AssertionError.
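For context, that message is raised by PyTorch itself the moment a CPU-only build is asked for a CUDA tensor, so it is an installation issue rather than a bug in visual_chatgpt.py. A minimal reproduction (only errors on a CPU-only build):
import torch

try:
    torch.zeros(1).cuda()  # any attempt to move a tensor to the GPU
except AssertionError as err:
    # On a CPU-only build this prints: Torch not compiled with CUDA enabled
    print(err)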
ERROR: Could not find a version that satisfies the requirement cudatoolkit~=11.3 (from controlnet) (from versions: none)
ERROR: No matching distribution found for cudatoolkit~=11.3