pelesh
> @pelesh As of now, fails for me with Ginkgo 1.7.0: [#674 (comment)](https://github.com/LLNL/hiop/issues/674#issuecomment-1858427644) #669 is still work in progress. We are not there yet :).
CC @fritzgoebel
@fritzgoebel, all you need to do is to set the GPU examples to use `gpu` mode. In the sparse HiOp examples that would look something like this:

```c++
if (use_ginkgo_cuda) {
  nlp.options->SetStringValue("compute_mode",...
```
> Set RAJA execution policy to sequential when using RAJA and no GPU backend Shouldn't this read "... when using RAJA and no GPU or OpenMP CPU backend"?
Fixed in #548 and #551.
I ran some tests with large ACOPF models with MA57 on this branch, and convergence was much worse than with the `develop` branch. I ran it as-is without...
CC @fritzgoebel
> @pelesh did you use the same option file in `develop` branch? I didn't change any setting in this PR (see the change log). I tried the latest `develop` branch...
I had a similar issue. In my case `spack config update config` didn't work. I got a message that my configuration does not need updating, but the problem with...