Erik Schnetter
I attach the complete configure and build output from Spack:
- [spack-build-env.txt](https://github.com/ornladios/ADIOS2/files/8920152/spack-build-env.txt)
- [spack-configure-args.txt](https://github.com/ornladios/ADIOS2/files/8920156/spack-configure-args.txt)
- [spack-build-01-cmake-out.txt](https://github.com/ornladios/ADIOS2/files/8920148/spack-build-01-cmake-out.txt)
- [spack-build-02-build-out.txt](https://github.com/ornladios/ADIOS2/files/8920149/spack-build-02-build-out.txt)
I have since encountered a very similar problem in `superlu_dist`. This also points to a CMake problem. But what about this line in your `CMakeLists.txt`:

```cmake
target_compile_definitions(adios2_fortran PRIVATE "$")
```
...
I implemented Julia bindings in the Julia package https://github.com/eschnett/ADIOS2.jl, but they call the C API.
Here is code to reproduce the behaviour I see:

```c
#include
#include
#include

void bpwrite() {
  const size_t izero = 0;
  const size_t ione = 1;
  const size_t ilocal =...
```
I am building Boost 1.75.0. This is how Spack calls Boost:

```
==> boost: Executing phase: 'install'
==> [2021-01-09-17:59:50.908972] './bootstrap.sh' '--prefix=/autofs/nccs-svm1_home1/schnett/src/CarpetX/spack/opt/spack/linux-rhel7-power9le/gcc-10.2.0/boost-1.75.0-fynzz7kaginesrsesqd44dlbgsil7ege' '--with-toolset=gcc' '--with-libraries=locale,random,wave,atomic,serialization,mpi,iostreams,regex,date_time,exception,log,thread,chrono,graph,system,timer,program_options,filesystem,test,math,graph_parallel'
Building B2 engine..
###
###
###
Using...
```
See also https://github.com/JuliaInterop/libcxxwrap-julia/issues/88.
Thank you, `qt_loop` seems to be doing exactly what I need.
Are you sure that `qt_loop` spreads out the work across all workers? I obtained this output:

```
$ env FUNHPC_NUM_NODES=1 FUNHPC_NUM_PROCS=1 FUNHPC_NUM_THREADS=8 ./hello
FunHPC: Using 1 nodes, 1 processes per...
```
What performance implications does it have to use fewer shepherds? If tasks don't move, how do the shepherds pick up work?
`qt_loop` did not work for me. I am still using my original work-around, which is to start a set of threads, each of which blocks until all threads are running.