Peter Boyle

Results: 89 comments by Peter Boyle

What system is this on? The reason for asking is that IBM Spectrum MPI causes a segfault like this unless run under mpirun, even on a single node.

Ran develop tonight on Summit V100s (merged SyCL back to develop); getting poorer performance than expected, but it runs. Grid : Message : 0.343734 s : ==================================================================================================== Grid : Message...

Different CUDA-aware MPIs can definitely behave differently depending on whether you initialise MPI or CUDA first. Are you sure you've got a CUDA-aware MPI?
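One quick way to check is to query the MPI build itself. A minimal sketch, assuming Open MPI (the `ompi_info` tool and its `mpi_built_with_cuda_support` parameter are Open MPI-specific; Spectrum MPI and MPICH derivatives need their own checks):

```shell
# Ask the Open MPI installation in PATH whether it was built with CUDA support.
# A CUDA-aware build reports ...mpi_built_with_cuda_support:value:true
ompi_info --parsable --all | grep mpi_built_with_cuda_support:value
```

If this reports false (or the parameter is absent), passing device pointers to MPI calls will fault regardless of initialisation order.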

Hi there, I got fed up with dynamic XML for HMC control. I will also have to deprecate JSON because NVCC doesn't like it in the GPU port, and I'm...

Yeah... I heard GCC >= 9 was behaving badly. See issue 100; it's a bit of a GCC graveyard. You might add a comment there.

Hi, I intended you to ask Camilo or Patrick for it; they have access. But there's nothing secret, as this API is on GitHub, so I will attach it.

https://raw.githubusercontent.com/paboyle/ComputationalCourseQCD/master/reproducer_IPC_bandwidth.cpp

Sorry for the runaround... all sorts of things forced me to bounce the file through a GitHub repository.

Use CXXFLAGS; there are other NEON compilers that are happy with Eigen. Or use nvcc's -ccbin flag to specify a host compiler that actually works on Grace.
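For example, a sketch of the two routes (the compiler paths and flags here are assumptions; substitute whichever host compiler handles Eigen's NEON intrinsics on your Grace system):

```shell
# Route 1: override the host compiler flags at configure time
# (hypothetical flags; pick ones your working compiler accepts)
./configure CXXFLAGS="-O3 -march=armv8-a"

# Route 2: keep nvcc as the driver but hand host-side compilation
# to a different compiler via -ccbin (g++-12 path is an assumption)
nvcc -ccbin /usr/bin/g++-12 -O3 -o test test.cu
```

Either way, the point is that nvcc itself only drives device compilation; the Eigen/NEON trouble lives in the host compiler, which both routes let you swap.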