fmo-mt

6 comments by fmo-mt

On an M1, open it with Qt, edit TexasSolverGui.pro to add the include path for your omp.h and the library path for libomp.dylib, then build directly.
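The `.pro` additions might look like the sketch below. The Homebrew paths are an assumption (a typical `brew install libomp` location); substitute whatever `brew --prefix libomp` or your own install reports.

```
# TexasSolverGui.pro -- example additions (assumed Homebrew libomp paths)
INCLUDEPATH += /opt/homebrew/opt/libomp/include   # where omp.h lives
LIBS += -L/opt/homebrew/opt/libomp/lib -lomp      # where libomp.dylib lives
```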

> > On an M1, open it with Qt, edit TexasSolverGui.pro to add the include path for your omp.h and the library path for libomp.dylib, then build directly.
>
> nice. Do you know why the OpenMP library can't be found?

If omp is installed but still can't be found, it is probably an environment-variable issue.
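One way the environment-variable issue shows up: the compiler's default search paths don't include the Homebrew libomp prefix. A minimal sketch, assuming the usual `/opt/homebrew/opt/libomp` install location (adjust to your own prefix):

```shell
# Make omp.h and libomp.dylib discoverable for this shell session.
# Paths are an assumption; check with: brew --prefix libomp
export CPATH="/opt/homebrew/opt/libomp/include:$CPATH"
export LIBRARY_PATH="/opt/homebrew/opt/libomp/lib:$LIBRARY_PATH"
```

Launching Qt Creator from a shell with these variables set lets qmake/clang pick them up without editing the `.pro` file.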

Still, I found that for OPT models, naive quantization does not cause an accuracy drop if we don't quantize the `fc1` and `fc2` layers, which means quantizing the self-attention layers alone is just fine...
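The selective scheme above can be sketched as follows. This is a toy illustration, not the actual OPT pipeline: the layer names mimic OPT's decoder modules, and `quantize_int8` is a hypothetical naive symmetric per-tensor quantizer.

```python
import numpy as np

def quantize_int8(w):
    """Naive symmetric per-tensor int8 quantization: w ~= scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Toy "model": layer name -> weight matrix (names mimic OPT decoder layers).
rng = np.random.default_rng(0)
layers = {
    "self_attn.q_proj": rng.normal(size=(8, 8)).astype(np.float32),
    "self_attn.out_proj": rng.normal(size=(8, 8)).astype(np.float32),
    "fc1": rng.normal(size=(8, 32)).astype(np.float32),
    "fc2": rng.normal(size=(32, 8)).astype(np.float32),
}

# Quantize everything except fc1/fc2, i.e. keep the FFN in float
# and fake-quantize only the self-attention projections.
for name, w in layers.items():
    if name in ("fc1", "fc2"):
        continue  # skip the MLP layers, as described above
    q, s = quantize_int8(w)
    layers[name] = dequantize(q, s)  # fake-quantized weights
```

Whether this preserves accuracy on a real model is an empirical question; the snippet only shows which layers are touched.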

> Can you use SmoothQuant to quantize LLaMA without an accuracy drop? I tried to quantize llama-7b, but accuracy also drops a lot. @fmo-mt
>
> ![image](https://user-images.githubusercontent.com/53092165/247493622-3f540fac-a117-44ab-8ca1-f868ac8b38c7.png)

As I...

> Yeah, just load the source code into Qt open source and it should run. The tool versions are in the readme.

I built the latest commit on my M1 Pro MacBook...

You can leave an email here, and I'll send it to you later.