Chun Cai
> The ci test in intel is calculated in: https://bohrium.dp.tech/apps/abacustest/job?type=jobs&jobId=21991

@pxlxingliang I'm not able to see the result via the link you've provided. I wonder if there is still any...
@SyntaxSmith Would you try another compiler, or use the ABACUS toolchain?
@WHUweiqingzhou Please try `git submodule update --remote --rebase`, since LibComm is a git submodule.
@WHUweiqingzhou Any trouble here?
@AsTonyshment Please set the environment variable `CUDAFLAG="-allow-unsupported-compiler"`. Although these compiler versions are not officially supported, it is generally OK to do so as long as the program works.
Hi @well-zt, you can try setting the `gradient_max_norm` parameter in input.json. The related code is here: https://github.com/deepmodeling/deepmd-kit/blob/d162d0b47b51ea4aee56acbafb0c30fbc1dcdb82/deepmd/pt/train/training.py#L689-L694 [Ref](https://pytorch.org/docs/stable/generated/torch.nn.utils.clip_grad_norm_.html#torch.nn.utils.clip_grad_norm_)
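For illustration, here is a minimal sketch of the global-norm gradient clipping that `gradient_max_norm` enables in the PyTorch backend. The toy model, optimizer, and the value `5.0` are placeholders for this sketch, not deepmd-kit defaults:

```python
import torch

# Minimal sketch of global-norm gradient clipping
# (see torch.nn.utils.clip_grad_norm_). The toy model, data,
# and max_norm value below are illustrative only.
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

x = torch.randn(8, 4)
loss = model(x).pow(2).mean()
loss.backward()

# Rescale gradients so their global L2 norm does not exceed max_norm,
# then take the optimizer step on the clipped gradients.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
optimizer.step()
```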
> [@caic99](https://github.com/caic99) L1, L2 regularization is a common practice in ML training but not common in MLIP training, should we put `how (and why) to use regularization in loss function`...
Fixed by #4702
@scott-5 Would you try installing ABACUS from the latest branch? Sorry for missing this feedback.
> our code

@QuantumMisaka This seems to be a problem in the legacy ScaLAPACK code, so there's not much we can do here.