NCCL SHARP plugin
- Mellanox ConnectX 6 HCA and Mellanox Quantum IB switch with SHARP support
- Nvidia CUDA Toolkit
- Mellanox OFED >= 5.0-2.x
- Nvidia NCCL >= 2.7.3
- Mellanox HPC-X >= 2.6.2 (Note: HPC-X contains the Mellanox SHARP library and the latest stable NCCL SHARP plugin)
- GPUDirect RDMA driver (more details on GPUDirect RDMA: https://www.mellanox.com/products/GPUDirect-RDMA)
Please make sure that all requirements are satisfied.
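A quick sanity check of these requirements on a node might look like the sketch below; the GPUDirect RDMA kernel module name depends on the driver generation, so both common names are checked:
% ofed_info -s                                  # MOFED version, expect >= 5.0-2.x
% nvcc --version                                # CUDA toolkit
% nvidia-smi                                    # GPU driver and visible GPUs
% lsmod | grep -E 'nv_peer_mem|nvidia_peermem'  # GPUDirect RDMA kernel module
% ibstat                                        # HCA model, firmware and port state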
- Download and load HPC-X for your OS and MOFED versions from https://www.mellanox.com/products/hpc-x-toolkit
OS_DISTRO=ubuntu18.04-x86_64
MLNX_OFED=MLNX_OFED_LINUX-5.0-1.0.0.0
wget http://content.mellanox.com/hpc/hpc-x/v2.6/hpcx-v2.6.0-gcc-${MLNX_OFED}-${OS_DISTRO}.tbz -O hpcx.tbz
tar xjf hpcx.tbz
module use hpcx-v2.6.0-gcc-${MLNX_OFED}-${OS_DISTRO}/modulefiles
module load hpcx
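After loading the module, the HPC-X component paths used in the build steps below should be present in the environment. A minimal check, assuming the usual HPC-X modulefile variable names (the configure line below uses $UCX_DIR, which should point at the same UCX installation as HPCX_UCX_DIR):
% module list
% echo $HPCX_DIR
% echo $HPCX_SHARP_DIR
% echo $HPCX_UCX_DIR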
- Build NCCL-SHARP plugin
- With UCX support
% git clone https://github.com/Mellanox/nccl-rdma-sharp-plugins
% cd nccl-rdma-sharp-plugins
% ./autogen.sh
% ./configure --with-ucx=$UCX_DIR --with-sharp=$HPCX_SHARP_DIR
% make
% make install
See the NCCL UCX plugin wiki page for details on UCX support.
- Without UCX support
% git clone https://github.com/Mellanox/nccl-rdma-sharp-plugins
% cd nccl-rdma-sharp-plugins
% ./autogen.sh
% ./configure --with-sharp=$HPCX_SHARP_DIR --without-ucx
% make
% make install
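After make install, it is worth confirming that the plugin library is present under the install prefix and visible to the dynamic loader; <plugin_install_dir> below stands for the configure prefix, as in the run example further down:
% ls <plugin_install_dir>/lib
% export LD_LIBRARY_PATH=<plugin_install_dir>/lib:$LD_LIBRARY_PATH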
Before starting, follow the SHARP deployment guide to set up the SHARP environment.
NCCL automatically picks up the network plugin when it is available in the library search path. Note that HPC-X 2.6 already contains the latest stable NCCL plugin and sets LD_LIBRARY_PATH when loaded. Additionally, to enable collnet support in NCCL, set the NCCL_COLLNET_ENABLE=1 environment variable.
# libnccl_net.so is in <plugin_install_dir>/lib
% module load hpcx
% export LD_LIBRARY_PATH=<plugin_install_dir>/lib:$LD_LIBRARY_PATH
% mpirun -x LD_LIBRARY_PATH -x NCCL_COLLNET_ENABLE=1 nccl-tests/build/all_reduce_perf -b 128 -e 512M
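To confirm at run time that the plugin is actually being used, NCCL debug output can be enabled. The exact log lines vary between NCCL versions, but plugin and collnet initialization messages should appear:
% mpirun -x LD_LIBRARY_PATH -x NCCL_COLLNET_ENABLE=1 \
         -x NCCL_DEBUG=INFO -x NCCL_DEBUG_SUBSYS=INIT,NET \
         nccl-tests/build/all_reduce_perf -b 128 -e 512M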
The following table holds the complete list of variables that control plugin behavior:
Variable Name | Possible Values | Description | Default |
---|---|---|---|
NCCL_COLLNET_ENABLE | 0, 1 | Disables/enables collnet support in NCCL | 0 |
NCCL_PLUGIN_P2P | ib, ucx | Specifies which point-to-point layer is used | ib |
NCCL_SHARP_MAX_COMMS | 0, 1, 2 | Maximum number of NCCL communicators that use SHARP. The first NCCL_SHARP_MAX_COMMS communicators are created with SHARP support | 1 |
NCCL_IB_PCI_RELAXED_ORDERING | 0, 1 | Disables/enables PCIe relaxed ordering support for IB. Useful when running in a virtual environment, as it allows GPUDirect RDMA to be used when PCI ACS is enabled | 0 |
Refer to the Mellanox SHARP documentation for SHARP environment variables.
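For example, to run the first two communicators over SHARP with UCX as the point-to-point layer and PCIe relaxed ordering enabled (a sketch built from the variables in the table above; adjust to your setup):
% mpirun -x LD_LIBRARY_PATH \
         -x NCCL_COLLNET_ENABLE=1 \
         -x NCCL_PLUGIN_P2P=ucx \
         -x NCCL_SHARP_MAX_COMMS=2 \
         -x NCCL_IB_PCI_RELAXED_ORDERING=1 \
         nccl-tests/build/all_reduce_perf -b 128 -e 512M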
Installing and using the GPUDirect RDMA driver together with the NCCL SHARP plugin is highly recommended, as it allows the best possible performance to be achieved. However, the benefit of GPUDirect RDMA depends on the server hardware configuration; in particular, the devices on the path from the GPU to the HCA, and the number of such devices, are of great importance.
- A configuration in which the GPU and the HCA are connected to the same PCIe switch is optimal and yields the best performance.
- If there is a CPU/IOH on the path between the GPU and the HCA, GPUDirect RDMA can still be used, but performance degradation is possible; see the GPUDirect RDMA description for further details. NCCL disables GPUDirect RDMA by default in this case; use the NCCL_NET_GDR_LEVEL variable to control this behavior (see the example after this list).
- If the path between the GPU and the HCA traverses QPI, GPUDirect RDMA is not guaranteed to work reliably.
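For instance, to let NCCL use GPUDirect RDMA even when the GPU and the HCA only share a PCIe host bridge (recent NCCL versions accept the distance levels LOC, PIX, PXB, PHB and SYS for this variable):
% export NCCL_NET_GDR_LEVEL=PHB
% mpirun -x LD_LIBRARY_PATH -x NCCL_COLLNET_ENABLE=1 -x NCCL_NET_GDR_LEVEL \
         nccl-tests/build/all_reduce_perf -b 128 -e 512M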
To see your server configuration, check the lspci and nvidia-smi topo -m output:
% nvidia-smi topo -m
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 mlx5_0 mlx5_1 mlx5_2 mlx5_3 CPU Affinity
GPU0 X NV1 NV1 NV2 NV2 SYS SYS SYS PIX PHB SYS SYS 0-19,40-59
GPU1 NV1 X NV2 NV1 SYS NV2 SYS SYS PIX PHB SYS SYS 0-19,40-59
GPU2 NV1 NV2 X NV2 SYS SYS NV1 SYS PHB PIX SYS SYS 0-19,40-59
GPU3 NV2 NV1 NV2 X SYS SYS SYS NV1 PHB PIX SYS SYS 0-19,40-59
GPU4 NV2 SYS SYS SYS X NV1 NV1 NV2 SYS SYS PIX PHB 20-39,60-79
GPU5 SYS NV2 SYS SYS NV1 X NV2 NV1 SYS SYS PIX PHB 20-39,60-79
GPU6 SYS SYS NV1 SYS NV1 NV2 X NV2 SYS SYS PHB PIX 20-39,60-79
GPU7 SYS SYS SYS NV1 NV2 NV1 NV2 X SYS SYS PHB PIX 20-39,60-79
mlx5_0 PIX PIX PHB PHB SYS SYS SYS SYS X PHB SYS SYS
mlx5_1 PHB PHB PIX PIX SYS SYS SYS SYS PHB X SYS SYS
mlx5_2 SYS SYS SYS SYS PIX PIX PHB PHB SYS SYS X PHB
mlx5_3 SYS SYS SYS SYS PHB PHB PIX PIX SYS SYS PHB X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
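The PCIe side of the same picture can be inspected with lspci, for example:
% lspci | grep -i -e nvidia -e mellanox   # GPUs and HCAs with their PCIe bus addresses
% lspci -tv                               # full PCIe tree: shows whether a GPU and an HCA share a PCIe switch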
For NCCL SHARP performance benchmarking, NCCL Tests can be used:
% module load hpcx
% mpirun \
-np 16 \
--bind-to none \
-x LD_LIBRARY_PATH \
-x NCCL_COLLNET_ENABLE \
-x NCCL_IB_HCA=mlx5_0:1 \
-x SHARP_COLL_ENABLE_SAT=1 \
$NCCL_TEST_HOME/build/all_reduce_perf -b 128 -e 512M -f 2 -g 1 -n 50 -w 50 -p 0 -z 0 -t 1 -c 1
#
# out-of-place in-place
# size count type redop time algbw busbw error time algbw busbw error
# (B) (elements) (us) (GB/s) (GB/s) (us) (GB/s) (GB/s)
[host01:0:12361 - comm.c:391] INFO [group#:0] group id:a6 tree idx:0 tree_type:LLT rail_idx:0 group size:4 quota: (osts:64 user_data_per_ost:1024) mgid: (subnet prefix:0x0 interface id:0x0) mlid:0
[host01:0:12361 - comm.c:391] INFO [group#:1] group id:a6 tree idx:1 tree_type:SAT rail_idx:0 group size:4 quota: (osts:64 user_data_per_ost:0) mgid: (subnet prefix:0x0 interface id:0x0) mlid:0
128 32 float sum 23.12 0.01 0.01 2e-07 21.84 0.01 0.01 2e-07
256 64 float sum 23.60 0.01 0.02 2e-07 22.73 0.01 0.02 2e-07
512 128 float sum 23.45 0.02 0.04 2e-07 23.47 0.02 0.04 2e-07
1024 256 float sum 25.00 0.04 0.08 7e-07 25.29 0.04 0.08 7e-07
2048 512 float sum 26.65 0.08 0.15 7e-07 26.80 0.08 0.15 7e-07
4096 1024 float sum 30.69 0.13 0.26 7e-07 29.47 0.14 0.27 7e-07
8192 2048 float sum 37.71 0.22 0.42 7e-07 35.07 0.23 0.45 7e-07
16384 4096 float sum 45.70 0.36 0.69 7e-07 42.67 0.38 0.74 7e-07
32768 8192 float sum 61.59 0.53 1.03 7e-07 57.53 0.57 1.10 7e-07
65536 16384 float sum 77.87 0.84 1.63 7e-07 77.27 0.85 1.64 7e-07
131072 32768 float sum 137.3 0.95 1.85 7e-07 134.8 0.97 1.88 7e-07
262144 65536 float sum 161.4 1.62 3.15 7e-07 159.8 1.64 3.18 7e-07
524288 131072 float sum 213.9 2.45 4.75 7e-07 209.1 2.51 4.86 7e-07
1048576 262144 float sum 282.5 3.71 7.19 7e-07 284.2 3.69 7.15 7e-07
2097152 524288 float sum 419.6 5.00 9.68 7e-07 420.7 4.99 9.66 7e-07
4194304 1048576 float sum 624.1 6.72 13.02 7e-07 622.6 6.74 13.05 7e-07
8388608 2097152 float sum 1035.9 8.10 15.69 7e-07 1038.4 8.08 15.65 7e-07
16777216 4194304 float sum 1884.0 8.90 17.25 7e-07 1871.2 8.97 17.37 7e-07
33554432 8388608 float sum 3488.1 9.62 18.64 7e-07 3474.6 9.66 18.71 7e-07
67108864 16777216 float sum 6545.1 10.25 19.87 7e-07 6560.9 10.23 19.82 7e-07
134217728 33554432 float sum 12603 10.65 20.63 7e-07 12627 10.63 20.59 7e-07
268435456 67108864 float sum 24741 10.85 21.02 7e-07 24766 10.84 21.00 7e-07
536870912 134217728 float sum 49014 10.95 21.22 7e-07 49032 10.95 21.21 7e-07
# Out of bounds values : 0 OK
# Avg bus bandwidth : 7.7601
#