Results: 7 comments of Lance

The main cause is a conflict between AdGuard's Russian filter and the banner_container field. Solution: **all of the following files are under site directory/wp-content/themes/argon/**. Change line 422 of header.php to `` and change line 228 of argontheme.js as shown: ![image](https://user-images.githubusercontent.com/106385654/210535529-bcc9f014-f0a1-4d39-ad2e-41665b941f7a.png) Both changes append the character a after banner_container. The result can be seen at https://blog.lance.fun/


Accessing it from a low-performance device is not recommended.

I found that the naming conventions in the phi-3 metadata (and in the tensors) are different from llama's, so we can't directly apply quantized-llama. Here is the from_gguf function, please check...
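(Not the from_gguf code itself — just a minimal sketch, assuming the `gguf` Python package that ships with llama.cpp and placeholder file names, of how the naming difference between the two layouts can be inspected.)

```python
# Sketch: dump metadata keys and tensor names from a GGUF file so the
# phi-3 and llama naming conventions can be compared side by side.
# Assumes the `gguf` Python package; the file paths are placeholders.
from gguf import GGUFReader

def dump_names(path: str) -> None:
    reader = GGUFReader(path)
    print(f"== {path} ==")
    print("metadata keys:")
    for key in reader.fields:          # e.g. phi3.* vs llama.* prefixes
        print(f"  {key}")
    print("tensor names:")
    for tensor in reader.tensors:      # e.g. fused qkv vs separate q/k/v
        print(f"  {tensor.name}")

dump_names("phi-3-mini.Q4_K_M.gguf")
dump_names("llama-2-7b.Q4_K_M.gguf")
```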

I think this is the real cause of the problem. First, there are two different conversion methods mentioned: - `convert.py` always converts a model to GGUF with the architecture `llama`...
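A quick way to confirm this is to read the declared architecture from each output file. This is only a sketch, assuming the `gguf` Python package and placeholder file names; the string decoding follows the ReaderField layout used by that package.

```python
# Sketch: print the `general.architecture` metadata field of a GGUF file.
# A file produced by convert.py should report "llama", while one produced
# by convert-hf-to-gguf.py reports the model's native architecture.
from gguf import GGUFReader

def read_architecture(path: str) -> str:
    reader = GGUFReader(path)
    field = reader.fields["general.architecture"]
    # a string field keeps its bytes in one of `parts`; `data` indexes it
    return bytes(field.parts[field.data[0]]).decode("utf-8")

print(read_architecture("model-from-convert-py.gguf"))         # expect: llama
print(read_architecture("model-from-convert-hf-to-gguf.gguf"))  # e.g.: phi3
```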

It works out fine, but it would be excellent if `quantized_llama` could run all models converted by `convert-hf-to-gguf.py`. There are many models to run, but I have to modify the architecture from...
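For reference, a sketch of why flipping `general.architecture` alone is not enough: every hyperparameter key prefixed with the declared architecture would also have to match the llama layout before a llama loader could find it. This again assumes the `gguf` Python package and a placeholder file name.

```python
# Sketch: list the architecture-prefixed metadata keys that a llama loader
# would not find under the names it expects.
from gguf import GGUFReader

path = "phi-3-mini.Q4_K_M.gguf"   # placeholder
reader = GGUFReader(path)
arch_field = reader.fields["general.architecture"]
arch = bytes(arch_field.parts[arch_field.data[0]]).decode("utf-8")

print(f"declared architecture: {arch}")
print("architecture-specific keys that would need renaming:")
for key in reader.fields:
    if key.startswith(f"{arch}."):
        print(f"  {key}  ->  llama.{key[len(arch) + 1:]}")
```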

Same problem here, but after installing the [Intel oneAPI Math Kernel Library](https://www.intel.com/content/www/us/en/developer/tools/oneapi/onemkl-download.html), it compiles normally: ![image](https://github.com/rust-math/intel-mkl-src/assets/106385654/575c5378-6463-4533-a11d-6bb5972f72b9)