KASR
Hello. Concerning the drilling degree-of-freedom stiffness, you could try to set the stiffness in the `k_m` routine using the minimum eigenvalue of the stress-strain matrix. I commented...
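The idea above can be sketched as follows. This is only a minimal illustration assuming an isotropic plane-stress material matrix; the function names (`plane_stress_matrix`, `min_eigenvalue`) and the closed-form eigenvalues are my own and specific to that isotropic case, and the actual `k_m` routine is not shown in the thread:

```python
def plane_stress_matrix(E, nu):
    """Isotropic plane-stress stress-strain matrix D (3x3),
    Voigt order (xx, yy, xy). Hypothetical helper for illustration."""
    c = E / (1.0 - nu * nu)
    return [[c, c * nu, 0.0],
            [c * nu, c, 0.0],
            [0.0, 0.0, c * (1.0 - nu) / 2.0]]

def min_eigenvalue(D):
    """Eigenvalues of THIS particular D in closed form: the coupled 2x2
    block gives c*(1 +/- nu), the decoupled shear term gives c*(1 - nu)/2.
    Only valid for the isotropic matrix built above."""
    c = D[0][0]
    nu = D[0][1] / c
    return min(c * (1.0 + nu), c * (1.0 - nu), c * (1.0 - nu) / 2.0)

# For this case the minimum eigenvalue reduces to the shear modulus
# G = E / (2 * (1 + nu)), a natural scale for a drilling-stiffness penalty.
E, nu = 210e9, 0.3
D = plane_stress_matrix(E, nu)
k_drill = min_eigenvalue(D)  # could be scaled by a small factor inside k_m
```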
> I would appreciate if someone with AVX-512-enabled hardware could also run the official microbenchmark to determine if this is worth merging. According to the [datasheet](https://ark.intel.com/content/www/us/en/ark/products/198011/intel-xeon-w2295-processor-24-75m-cache-3-00-ghz.html) my CPU has 2...
Might be the same issue as #735 and #677, and indeed probably related to #603.
Just as an FYI: I did some benchmark tests for another issue (see https://github.com/ggerganov/llama.cpp/issues/603#issuecomment-1490136086). This was done on a [Xeon W-2295](https://ark.intel.com/content/www/us/en/ark/products/198011/intel-xeon-w2295-processor-24-75m-cache-3-00-ghz.html), which has 18 physical cores. However, at...
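A thread-count sweep like the one described can be sketched roughly as follows. The binary path, model path, and flags (`-m`, `-t`, `-n`, `-p`) are assumptions based on llama.cpp's `main` example around that time, not taken from the linked comment; adjust them for your build:

```python
import subprocess
import time

def build_cmd(binary, model, threads, n_tokens=64, prompt="Hello"):
    """Assemble one llama.cpp invocation. Flag names are assumed
    (-m model, -t threads, -n tokens to generate, -p prompt)."""
    return [binary, "-m", model, "-t", str(threads),
            "-n", str(n_tokens), "-p", prompt]

def bench_threads(binary, model, thread_counts):
    """Time one short generation per thread count; returns {threads: seconds}."""
    results = {}
    for t in thread_counts:
        start = time.perf_counter()
        subprocess.run(build_cmd(binary, model, t),
                       check=True, capture_output=True)
        results[t] = time.perf_counter() - start
    return results

# Example sweep around the 18 physical cores of a Xeon W-2295:
# bench_threads("./main", "models/7B/ggml-model-q4_0.bin",
#               (8, 12, 16, 18, 20, 24, 36))
```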
> > the performance was best either a bit below the number of physical cores or a bit above it.
>
> The best performance is not the aim.
> ...
> @KASR what's the default install directory for the llama.cpp on windows? Is there a more OS-agnostic way to specify the binary in the command in your script? I don't...
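One way to specify the binary OS-agnostically is to derive the executable name from the platform and search a few candidate directories plus `PATH`. This is only a sketch of the idea, not necessarily what the script does; the function name and the candidate directories are assumptions:

```python
import os
import shutil

def find_llama_binary(search_dirs=(".", "build/bin", "bin")):
    """Locate the llama.cpp main binary without hard-coding an
    OS-specific path. The candidate directories are assumptions;
    adjust to your layout."""
    name = "main.exe" if os.name == "nt" else "main"
    for d in search_dirs:
        candidate = os.path.join(d, name)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    # Fall back to anything on PATH.
    return shutil.which(name)
```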
I've tried to do the same as @nicknitewolf. I have an [Intel Xeon W-2295](https://ark.intel.com/content/www/us/en/ark/products/198011/intel-xeon-w2295-processor-24-75m-cache-3-00-ghz.html), so I guess on my system there is little to no influence on the performance for...
> Huge thanks @nicknitewolf and @KASR for providing some statistics. 👍 🥳
>
> I've concluded that unfortunately, as my CPU is a dog and only has 4 threads total, I can't...
Have a look here --> https://github.com/ggerganov/llama.cpp/discussions/643
@sw I've made the suggested changes. It would be great if some other Windows user could test the script on their machine.