CLIP using CPU backend
Although I have CUDA enabled and everything installed properly, JoyCaption runs on the CPU backend. Is there any way to fix this? I'm running on Windows with CUDA 12.8 and an RTX 3080 Ti.
JoyCaption GGUF: Processing image with JoyCaption Beta One (Q5_K_M) (GPU mode)
JoyCaption GGUF: Generating caption...
clip_model_loader: model name:   vit
clip_model_loader: description:  image encoder for LLaVA
clip_model_loader: GGUF version: 3
clip_model_loader: alignment:    32
clip_model_loader: n_tensors:    434
clip_model_loader: n_kv:         20
clip_model_loader: has vision encoder
clip_ctx: CLIP using CPU backend
load_hparams: projector:          mlp
load_hparams: n_embd:             1152
load_hparams: n_head:             16
load_hparams: n_ff:               4304
load_hparams: n_layer:            26
load_hparams: ffn_op:             gelu_quick
load_hparams: projection_dim:     0
--- vision hparams ---
load_hparams: image_size:         384
load_hparams: patch_size:         14
load_hparams: has_llava_proj:     1
load_hparams: minicpmv_version:   0
load_hparams: proj_scale_factor:  0
load_hparams: n_wa_pattern:       0
load_hparams: model size:         837.08 MiB
load_hparams: metadata size:      0.15 MiB
alloc_compute_meta: CPU compute buffer size = 46.94 MiB
encoding image slice...
image slice encoded in 6488 ms
decoding image batch 1/1, n_tokens_batch = 729
image decoded (batch 1/1) in 39066 ms
I have the same thing, cu130, 5070.
Update: the nightly version fixed it. Maybe I was missing requirements since I last used the node.
I'm also experiencing this problem. Could you tell me what the cause is? I've confirmed that the GPU is available, but it only uses the CPU.
can you specify where you got the nightly version from?
From within the ComfyUI Manager. Looking back, I also reinstalled llama-cpp-python manually, which could have been the main reason. Like I said, I was probably missing requirements.
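For anyone hitting the same "CLIP using CPU backend" line: prebuilt llama-cpp-python wheels are often CPU-only, so a manual reinstall compiled with CUDA is the usual fix. This is a sketch, not the node author's official instructions; run it inside the same Python environment ComfyUI uses (e.g. its embedded Python on Windows):

```shell
# Rebuild llama-cpp-python from source with CUDA (cuBLAS) support enabled.
# Requires the CUDA Toolkit and a C++ compiler to be installed.
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir
```

Afterwards you can check whether GPU offload is compiled in with `python -c "import llama_cpp; print(llama_cpp.llama_supports_gpu_offload())"` (should print `True`).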
Great, I will try this out.
Same problem here; it uses only the CPU. A similar node from another author works perfectly, but it doesn't have model unload from memory.
can you share the alternate node?