ayttop

34 comments by ayttop

https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/llama_cpp_quickstart.md

Ollama can run on an Intel iGPU through IPEX-LLM; see https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/llama_cpp_quickstart.md
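A minimal sketch of talking to such a server from Python, assuming ollama was started per the linked IPEX-LLM quickstart, listens on the default port 11434, and a model (here "llama3", a placeholder) has already been pulled:

```python
# Minimal sketch: query a locally running ollama server (started on the Intel
# iGPU per the IPEX-LLM quickstart) through ollama's standard REST API.
# Assumptions: default port 11434; "llama3" is a placeholder model name.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",           # placeholder model name
        "prompt": "Why is the sky blue?",
        "stream": False,             # return the full answer as one JSON object
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])       # generated text
```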

Does AirLLM support Intel GPUs?

What is the name of the program in the image?

How do I run llama-cpp-python on an Intel GPU?
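One possible route is to build llama-cpp-python against llama.cpp's SYCL backend for Intel GPUs and then offload layers as usual; a minimal sketch follows. The build flag and the model path are assumptions (the exact CMake option depends on the bundled llama.cpp version), and the IPEX-LLM port linked above is an alternative approach.

```python
# Minimal sketch: llama-cpp-python with GPU offload on an Intel GPU.
# Assumption: the package was built with llama.cpp's SYCL backend, e.g.
#   CMAKE_ARGS="-DGGML_SYCL=on -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx" \
#       pip install --force-reinstall --no-cache-dir llama-cpp-python
# (flag name may differ in older llama.cpp releases). Model path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # placeholder GGUF path
    n_gpu_layers=-1,   # offload all layers to the GPU
    n_ctx=4096,        # context window
)

out = llm("Q: What is an iGPU?\nA:", max_tokens=128, stop=["\n"])
print(out["choices"][0]["text"])
```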

https://github.com/foldl/chatllm.cpp | [Supported Models](https://github.com/foldl/chatllm.cpp/blob/master/docs/models.md) | [Download Quantized Models](https://github.com/foldl/chatllm.cpp/blob/master/docs/quick_start.md#download-quantized-models) | What's New (2024-08-28): Phi-3.5 Mini & MoE. Inference of a range of models from under 1B to over 300B parameters, ...

https://huggingface.co/microsoft/Phi-3.5-MoE-instruct/discussions/4 (converting microsoft/Phi-3.5-MoE-instruct to GGUF)

Is there a GGUF of Phi-3.5-MoE-instruct that works with llama.cpp?
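The usual route would be llama.cpp's own conversion script followed by quantization, as in the sketch below. Assumptions: a llama.cpp checkout recent enough to recognize the PhiMoE architecture (which is what the linked discussion is about), its Python requirements installed, the HF model already downloaded locally, and placeholder paths throughout.

```python
# Minimal sketch: convert a local copy of microsoft/Phi-3.5-MoE-instruct to GGUF
# with llama.cpp's conversion script, then quantize it.
# All paths are placeholders; PhiMoE support depends on the llama.cpp version.
import subprocess

LLAMA_CPP = "llama.cpp"                        # path to the llama.cpp checkout
MODEL_DIR = "Phi-3.5-MoE-instruct"             # local copy of the HF repo
F16_GGUF = "phi-3.5-moe-instruct-f16.gguf"
Q4_GGUF = "phi-3.5-moe-instruct-q4_k_m.gguf"

# 1) HF safetensors -> unquantized GGUF
subprocess.run(
    ["python", f"{LLAMA_CPP}/convert_hf_to_gguf.py", MODEL_DIR,
     "--outfile", F16_GGUF, "--outtype", "f16"],
    check=True,
)

# 2) Quantize with the llama-quantize tool built from llama.cpp
#    (older builds name this binary "quantize" instead).
subprocess.run(
    [f"{LLAMA_CPP}/build/bin/llama-quantize", F16_GGUF, Q4_GGUF, "Q4_K_M"],
    check=True,
)
```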