BeanDAO
The gradio_app_sdxl_specific_id_low_vram script now supports CUDA, MPS, and CPU, and has been tested successfully on a Mac with an M1 chip. Note that enable_model_cpu_offload() is not supported on the Mac platform.
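The device-preference order described above (CUDA first, then Apple MPS, then CPU) can be sketched as plain selection logic. This is a minimal illustration, not the script's actual code; in practice the two booleans would come from `torch.cuda.is_available()` and `torch.backends.mps.is_available()`, and the `pipe` object below is a hypothetical diffusers pipeline:

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Prefer CUDA, fall back to Apple MPS, then CPU."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"


def place_pipeline(pipe, device: str):
    """Gate enable_model_cpu_offload() on CUDA, since it is not
    supported on Mac (MPS) or CPU, as noted above."""
    if device == "cuda":
        pipe.enable_model_cpu_offload()
    else:
        pipe.to(device)
```

On an M1 Mac, `pick_device(False, True)` returns `"mps"` and the pipeline is simply moved to that device without offloading.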
Int8-quantized model
I found an [ONNX int8 quantization script](https://github.com/LowinLi/stable-diffusion-streamlit/blob/main/src/stable-diffusion-streamlit/pages/model/quantization.py) targeting Stable Diffusion, along with a [pre-quantized model](https://drive.filen.io/f/2d917512-0566-4903-ab55-e3f3415ed18f#MqhCRo5kCbOcaloWuYKlaLTUaDRhUwMd). Can these be converted into an ncnn int8 model?
I am sure that the Ollama settings have been configured; I have already verified the same configuration. However, after sending a message in VOID, no response was received, and the ollama...