Chi Kim

Results: 161 comments by Chi Kim

Basically, I generate HTML from the Markdown using mistletoe, then feed it to pdf.write_html. After a couple of hours of debugging, I finally got it down to minimal code: `from fpdf...`
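For context, here is a minimal runnable sketch of that workflow — not the exact snippet from the truncated comment, and it assumes a recent fpdf2 where `write_html` is available directly on `FPDF`:

```python
import mistletoe
from fpdf import FPDF

# Convert Markdown to HTML with mistletoe, then render that HTML with fpdf2.
html = mistletoe.markdown("# Title\n\nSome **bold** text and a [link](https://example.com).")

pdf = FPDF()
pdf.add_page()
pdf.write_html(html)   # fpdf2's built-in HTML renderer
pdf.output("out.pdf")
```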

Thanks so much! That worked!

There are abliterated versions of [recent vision models](https://huggingface.co/models?pipeline_tag=image-text-to-text&sort=created&search=abliterated) like llama-3.2-vision, qwen2-vl, qvq, etc. Any tips on how to replicate this? Thanks!

Where can I download the weights for MLX?

Thanks for your response! I used this repo: https://github.com/oobabooga/text-generation-webui Here's my quantized model: https://drive.google.com/drive/folders/1-njjlAXE8JD_UnccZ15geFIMMBU5PZKC After cloning, you need to put the model folder inside text-generation-webui/models, so that you have text-generation-webui/models/liuhaotian_llava-llama-2-13b-chat-lightning-preview/llava-llama-2-13b-chat-4bit-128g.safetensors. Here's...

Also I opened an issue on oobabooga/text-generation-webui. https://github.com/oobabooga/text-generation-webui/issues/3293

@AlvinKimata, my apologies! I copied the wrong line for launching the server. The exllama_hf loader throws the error you got; loading with exllama (without _hf) should work: `--loader exllama` `python server.py...`
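For reference, a plausible reconstruction of the launch command — the model folder name is taken from the earlier comment, and only `--loader exllama` is confirmed by the comment itself:

```
# Assumed invocation; adjust the --model value to your local folder name.
python server.py --model liuhaotian_llava-llama-2-13b-chat-lightning-preview --loader exllama
```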

For whatever it's worth, I got the quantized model produced with oobabooga/GPTQ-for-LLaMa to work with the AutoGPTQ loader. If you quantize using qwopqwop200/GPTQ-for-LLaMa, it doesn't work. Here's what I did: 1. Modified config.json as...
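As a related sketch, here is how such a GPTQ checkpoint can be loaded with the AutoGPTQ library directly (rather than through the webui's loader); the directory path below is an assumption based on the model shared earlier, not a verified value:

```python
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

# Assumed local path to the quantized checkpoint from the earlier comment.
model_dir = "models/liuhaotian_llava-llama-2-13b-chat-lightning-preview"

tokenizer = AutoTokenizer.from_pretrained(model_dir, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_dir,
    use_safetensors=True,  # the shared checkpoint is a .safetensors file
    device="cuda:0",
)

inputs = tokenizer("Hello", return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```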

I'm also interested. @jishengpeng, could you please help us understand what bandwidth_id does? Thanks so much!

+1 It would be good to have the ability to fine-tune for support of a different language.