Comments by Wang Xuejin

Here are some examples:

![image](https://github.com/xorbitsai/inference/assets/92132848/0daba1ef-971b-4f47-bf01-a83c9351d272)
![image](https://github.com/xorbitsai/inference/assets/92132848/c3377632-0383-4619-8c2e-c34815d80ac0)
![image](https://github.com/xorbitsai/inference/assets/92132848/54abf6de-6d21-4661-80fc-f67a42a28b3b)
![image](https://github.com/xorbitsai/inference/assets/92132848/c76b572a-2ecd-40f1-bf05-e8f27d2ecaca)
![image](https://github.com/xorbitsai/inference/assets/92132848/e654df7e-20cc-461e-ae27-08af7cf7bd7c)
![image](https://github.com/xorbitsai/inference/assets/92132848/ebb2c5ab-fc53-4fdb-a40f-af0423c230aa)
![image](https://github.com/xorbitsai/inference/assets/92132848/bb7c2795-ec6e-4005-b94d-ca0705669d6a)

Passing `quantization=None` does indeed allow the model to launch and run successfully. However, even when the model is loaded with `quantization=None`, the value is still converted to `quantization='none'` when the model is created...
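
To illustrate the behavior being described, here is a minimal sketch of that normalization step. The function name `launch_model` and the coercion logic are assumptions for illustration only, not the actual xinference implementation:

```python
# Illustrative sketch (not the real xinference source): a Python None passed
# for `quantization` gets normalized to the string 'none' before the model is
# created, so later checks against None no longer match.

def launch_model(model_name: str, quantization: str | None = None) -> str:
    # Hypothetical normalization: None is coerced to the string 'none'.
    quantization = quantization if quantization is not None else "none"
    print(f"Creating model {model_name!r} with quantization={quantization!r}")
    # ... model creation would happen here ...
    return quantization


if __name__ == "__main__":
    result = launch_model("my-model", quantization=None)
    assert result == "none"  # the original None has become the string 'none'
```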