ctq
Thank you for your great work. We have released 4-bit GPTQ quantized LLaDA models on Hugging Face:

- [LLaDA-8B-Base-GPTQ-4bit](https://huggingface.co/FunAGI/LLaDA-8B-Base-gptqmodel-4bit)
- [LLaDA-8B-Instruct-GPTQ-4bit](https://huggingface.co/FunAGI/LLaDA-8B-Instruct-gptqmodel-4bit)

Based on the published evaluation code, we have...