
Results 104 comments of nullname

> [@chraac](https://github.com/chraac) Is it possible to build the llama.cpp QNN backend for a laptop? I have a Snapdragon X Elite laptop chip, which has an NPU. I'm currently checking the CMakeLists.txt in ggml-qnn...

> The inference speed on the CPU is optimized and very fast, so there is no noticeable difference even when using the GPU. Hmm, it depends: 1. usually those devices typically...

> What do you mean by modules? Sorry, typo: models.

> ReasoningCore-3B-T1_1.f16.gguf Not tested on this model yet, but from my experience with llama3-3b, it looks like there aren't many mulmat ops that can be offloaded for an F16 model, because the for...

> For sure, willing to help verify the functionality! I'm also deep-diving into llama.cpp QNN backend support, and I'm willing to help support more ops. Nice! Create a new issue for...

Hi @akshatshah17, were you able to run your model successfully? We've made many changes recently, so please give it another try!

> I don't know this Chinese programmer, I'm not a member of his team, and I'd like to see his team's success in this great community. Thanks. Yeah, just...

> I didn't provide any support to @chraac and his team. As I said before: I don't know this guy or his team, and I'd like to see their success...

> I have never dropped such a comment in others' PRs; this is my first time in this great tech community outside of mainland China. Sorry to waste resources and...

> I was already blocked in this community before 02/16/2025 because of **my stupid mistake last year**, part of the reasons for which came from this CN programmer in my first PR and...