GPU-Aware Communication
Goal: Use UCX for both intra-node and inter-node GPU communication.
Building and Running on OLCF Summit
Prerequisites: OpenPMIx, CUDA-enabled UCX
Building OpenPMIx
- Requires libevent
- wget https://github.com/libevent/libevent/releases/download/release-2.1.12-stable/libevent-2.1.12-stable.tar.gz
- tar -xf libevent-2.1.12-stable.tar.gz
- cd libevent-2.1.12-stable
- mkdir build && mkdir install
- cd build
- ../configure --prefix=$HOME/libevent-2.1.12-stable/install
- make -j && make install
Then build OpenPMIx itself (the prefix below assumes the source is extracted under $HOME/work):
- wget https://github.com/openpmix/openpmix/releases/download/v3.1.5/pmix-3.1.5.tar.gz
- tar -xf pmix-3.1.5.tar.gz
- cd pmix-3.1.5
- mkdir build install
- cd build
- ../configure --prefix=$HOME/work/pmix-3.1.5/install --with-libevent=$HOME/libevent-2.1.12-stable/install
- make -j && make install
Building CUDA-enabled UCX
Commit 971aad12d142341770c8f918cb91727cd180cb31 of the master branch is recommended: v1.9.0 has issues with ucx_perftest on Summit, and the latest master commit breaks CUDA linkage.
- git clone [email protected]:openucx/ucx.git
- cd ucx
- git checkout 971aad12d142341770c8f918cb91727cd180cb31
- ./autogen.sh
- mkdir build install
- cd build
- ../contrib/configure-release --prefix=$HOME/ucx/install --with-cuda=$CUDA_DIR --with-gdrcopy=/sw/summit/gdrcopy/2.0
- make -j
- make install
Building Charm4Py with UCX
The following diff should be applied to the Charm++ repository, with the paths changed to your local installation locations:
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 724e6d8d7..70703c450 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -865,7 +865,7 @@ if(${TARGET} STREQUAL "charm4py")
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/lib)
add_library(charm SHARED empty.cpp)
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/lib_so)
- target_link_libraries(charm ck converse memory-default threads-default ldb-rand "-Llib/ -standalone -whole-archive -c++stl -shared")
+ target_link_libraries(charm ck converse memory-default threads-default ldb-rand "-L/ccs/home/jchoi/work/ucx-1.9.0/install/lib -L/ccs/home/jchoi/work/pmix-3.1.5/install/lib -L/sw/summit/gdrcopy/2.0/lib64 -Llib/ -lpmix -lucp -lucm -lucs -luct -lgdrapi -standalone -whole-archive -c++stl -shared")
add_dependencies(charm hwloc)
endif()
The install directories of OpenPMIx and UCX should be passed to the Charm++ build command with --basedir (adjust the paths below to your local installs):
./build charm4py ucx-linux-ppc64le cuda openpmix -j -g --with-production --basedir=$HOME/work/pmix-3.1.5/install --basedir=$HOME/work/ucx-master/install
Then, Charm4Py can be installed normally:
python3 -m pip install --user .
Running Charm4Py with UCX
You can check whether UCX is picking up the CUDA and GDRCOPY modules properly on the compute nodes by running jsrun -n1 ucx_info -d | grep cuda and jsrun -n1 ucx_info -d | grep gdr.
You may need to pass --smpiargs="-disable_gpu_hooks" to jsrun if you observe any CUDA hook library failure messages.
Running the Charm4Py GPU latency benchmark (between 2 GPUs, intra-socket): jsrun -n2 -a1 -c2 -g1 -K2 -r2 --smpiargs="-disable_gpu_hooks" ./latency +ppn 1 +pemap L0,8 +commap L4,12
You can change the rendezvous threshold with the UCX_RNDV_THRESH environment variable. The values that I found to work best for the OSU benchmarks are 131072 for intra-socket, 65536 for inter-socket, and 524288 for inter-node. Note that too small a value (less than 64 in my tests) will cause hangs, probably due to the UCX layer implementation in Charm++.
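For reference, here is a minimal sketch of what such a ping-pong latency benchmark could look like using the Channels API described in the next section. This is a sketch only: it assumes CuPy provides the device buffers, and NITER/MSG_BYTES are illustrative values, not parameters of the actual benchmark.
from charm4py import charm, Chare, Array, coro, Channel
import cupy as cp
import time

NITER = 1000      # hypothetical iteration count
MSG_BYTES = 1024  # hypothetical message size

class PingPong(Chare):

    @coro
    def run(self, done_fut):
        # Set up a channel to the other chare (2-element 1D chare array)
        partner = self.thisProxy[1 - self.thisIndex[0]]
        ch = Channel(self, remote=partner)
        d_send = cp.zeros(MSG_BYTES, dtype=cp.uint8)  # device buffers
        d_recv = cp.zeros(MSG_BYTES, dtype=cp.uint8)
        t0 = time.time()
        for _ in range(NITER):
            if self.thisIndex[0] == 0:
                ch.send(d_send)  # GPU-direct send (see Direct Access below)
                ch.recv(d_recv)  # receive into a posted device buffer
            else:
                ch.recv(d_recv)
                ch.send(d_send)
        if self.thisIndex[0] == 0:
            lat_us = (time.time() - t0) / (2 * NITER) * 1e6
            print(f'{MSG_BYTES} B one-way latency: {lat_us:.2f} us')
        self.reduce(done_fut)  # empty reduction to signal completion

def main(args):
    chares = Array(PingPong, 2)
    done = charm.Future()
    chares.run(done)
    done.get()
    charm.exit()

charm.start(main)
With the jsrun invocation shown above, each of the two chares maps to its own GPU.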
Charm4Py API
The Charm4Py implementation uses the Channels API. Once a channel has been created between chares, there are two options for sending GPU-direct messages: passing the buffers themselves, or passing arrays containing the pointers and sizes of the buffers. The latter is an optimization for when the same buffers are used for communication multiple times, as the cost of determining the address and size of each buffer is paid only once; this optimization saves ~20us per message.
Direct Access
Assume that partner_channel is a channel between two chares, and that d_data_send and d_data_recv are arrays implementing the CUDA Array Interface. To send these arrays through the channel, the following can be used:
# Called on the sender:
partner_channel.send(d_data_send)
# Called on the receiver:
partner_channel.recv(d_data_recv)
Note that multiple arrays can be sent, and that combinations of GPU and host parameters are allowed.
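For example, to exchange two device arrays in a single message (a sketch: d_extra_send and d_extra_recv are hypothetical additional buffers implementing the CUDA Array Interface, and the receiver presumably posts one destination buffer per incoming array, in order):
# Sender: multiple device arrays can be passed in one send
partner_channel.send(d_data_send, d_extra_send)
# Receiver: post one destination buffer per incoming array, in order
partner_channel.recv(d_data_recv, d_extra_recv)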
Persistent Communication Optimization
The Direct Access method extracts the address and size of each array using the CUDA Array Interface. Many applications use the same buffer for communication many times, and with Direct Access the address and size must be extracted every time the array is used. While we plan to implement a cache to optimize for these situations, we currently offer a workaround that allows this information to be provided to the runtime system directly.
import array

# Address/size arrays are allocated once and reused for every message
# ('L' = unsigned long for pointers, 'i' = int for sizes)
d_data_recv_addr = array.array('L', [0])
d_data_recv_size = array.array('i', [0])
d_data_send_addr = array.array('L', [0])
d_data_send_size = array.array('i', [0])

partner_channel.send(src_ptrs=d_data_send_addr, src_sizes=d_data_send_size)
partner_channel.recv(post_addresses=d_data_recv_addr,
                     post_sizes=d_data_recv_size)
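The address and size entries must be filled in before the first use. A minimal sketch of doing so (this uses the standard __cuda_array_interface__ protocol and the nbytes attribute of the buffers, not a Charm4Py API):
# Fill the persistent address/size entries once, before the first send/recv.
# __cuda_array_interface__['data'][0] is the device pointer; nbytes is the
# buffer size in bytes.
d_data_send_addr[0] = d_data_send.__cuda_array_interface__['data'][0]
d_data_send_size[0] = d_data_send.nbytes
d_data_recv_addr[0] = d_data_recv.__cuda_array_interface__['data'][0]
d_data_recv_size[0] = d_data_recv.nbytes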
References
https://github.com/openucx/ucx/wiki/NVIDIA-GPU-Support
https://openucx.readthedocs.io/en/master/faq.html#working-with-gpu