ComfyUI_VLM_nodes
LLaVA quant not staying loaded
Hey, LOVE these nodes. The Llava Optional Memory Free Advanced node doesn't seem to actually cache the model between runs. "unload" is false and "control after generate" is set to fixed. Am I missing something? I'm running the 13B Vicuna q6km quant with the associated projector. Thank you.
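For reference, here is a minimal sketch of the kind of caching behavior I'd expect with unload=False: a module-level cache keyed by the model and projector paths so a second run reuses the already-loaded model instead of reloading it. The names here (`get_cached_model`, `unload_model`, `loader`) are purely illustrative assumptions, not the node's actual implementation.

```python
# Hypothetical sketch of cross-run model caching (not VLM_nodes' actual code).
# A module-level dict persists between node executions within the same process,
# so a model loaded on the first run can be reused on later runs.

_MODEL_CACHE = {}


def get_cached_model(model_path, mm_proj_path, loader):
    """Return the cached model for this (model, projector) pair, loading it once if needed."""
    key = (model_path, mm_proj_path)
    if key not in _MODEL_CACHE:
        # Expensive load happens only on the first run (or after an unload).
        _MODEL_CACHE[key] = loader(model_path, mm_proj_path)
    return _MODEL_CACHE[key]


def unload_model(model_path, mm_proj_path):
    """Drop the cached entry so the next run reloads from disk (unload=True behavior)."""
    _MODEL_CACHE.pop((model_path, mm_proj_path), None)
```

With a pattern like this, setting unload to false and keeping the seed/inputs fixed should mean the second queue run skips the load step entirely, which is what I'm not seeing.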