
Indeed, when using `ROCR_VISIBLE_DEVICES` the warning is gone ...
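
For illustration, a minimal sketch of how `ROCR_VISIBLE_DEVICES` is typically set per rank under SLURM; the wrapper name `select_gpu.sh` and the one-GPU-per-task mapping are assumptions for illustration, not something stated in the thread:

```
#!/usr/bin/env bash
# select_gpu.sh (hypothetical wrapper): restrict each SLURM task to the
# single AMD GPU matching its node-local task ID, so HIP enumerates
# exactly one device per rank.
export ROCR_VISIBLE_DEVICES="${SLURM_LOCALID}"
exec "$@"
```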

I am using the same script as proposed in the documentation:
```
# CUDA visible devices are ordered inverse to local task IDs
# Reference: nvidia-smi topo -m
srun --cpu-bind=cores...
```
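
As a hedged sketch of how such a wrapper is usually invoked on a node with 8 GPUs (the task count, executable, and input-file names are assumptions):

```
# One task per GPU, cores bound per task; each task launches through the
# hypothetical select_gpu.sh wrapper above so it sees exactly one device.
srun --ntasks-per-node=8 --cpu-bind=cores ./select_gpu.sh ./warpx inputs
```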

Another thing: in both cases, when using the
```
GPU_AWARE_MPI=amrex.use_gpu_aware_mpi=1
```
option, the job crashes with UCX errors:
```
[1704917108.050163] [lxbk1120:1097129:0] ib_md.c:309 UCX ERROR ibv_reg_mr(address=0x7f56dd327140, length=6528, access=0x10000f) failed: Invalid argument
[1704917108.050189]...
```
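
The `ibv_reg_mr(...) failed: Invalid argument` pattern usually means the InfiniBand layer cannot register GPU memory. A hedged diagnostic sketch to check whether the installed UCX was built with ROCm support (`ucx_info` is the standard UCX inspection tool; the interpretation of its output here is an assumption):

```
# Show the UCX build configuration; a ROCm-capable build should mention
# rocm among the configure flags.
ucx_info -v
# List transports/memory domains; if nothing matches, UCX cannot handle
# GPU buffers and GPU-aware MPI transfers will fail.
ucx_info -d | grep -i rocm
```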

Without `GPU_AWARE_MPI` it works perfectly. Does that mean GPU-aware MPI is not being used in AMReX?
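
If anything, the crash above suggests GPU-aware MPI *is* being attempted once the flag is set, and the failure sits in the UCX/InfiniBand layer rather than in AMReX. For context, a hedged sketch of how the flag reaches AMReX in the job-script convention quoted earlier (executable and input-file names are assumptions):

```
# GPU_AWARE_MPI is just an AMReX runtime parameter appended to the run
# line; with it set, communication buffers stay in device memory and
# device pointers are handed directly to MPI.
EXE=./warpx
GPU_AWARE_MPI="amrex.use_gpu_aware_mpi=1"
srun ${EXE} inputs ${GPU_AWARE_MPI}
```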

Our "HPC" system is very similar to [SPOCK](https://warpx.readthedocs.io/en/latest/install/hpc/spock.html). We just have 8 AMD GPUs instead of 4 / node. For the record, i investigated in details with the openUCX guys...