BenchNBC

MPI Nonblocking Collectives Overlap Benchmark

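The benchmark measures how well an MPI implementation overlaps the progression of nonblocking collectives with application computation. The sketch below illustrates the post/compute/wait pattern that such a measurement exercises. It is an illustration only, not BenchNBC's actual code: the collective (MPI_Iallreduce), the buffer size and the compute kernel are arbitrary placeholders.

    /* Sketch of the post/compute/wait overlap pattern (illustration only). */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Placeholder compute kernel: busy work independent of the buffers. */
    static double compute(long iters)
    {
        volatile double x = 0.0;
        for (long i = 0; i < iters; i++)
            x += 1e-9 * (double)i;
        return x;
    }

    int main(int argc, char **argv)
    {
        const int count = 1 << 20;                 /* placeholder size */
        MPI_Init(&argc, &argv);

        double *sendbuf = malloc(count * sizeof(double));
        double *recvbuf = malloc(count * sizeof(double));
        for (int i = 0; i < count; i++) sendbuf[i] = 1.0;

        MPI_Barrier(MPI_COMM_WORLD);

        /* Post the nonblocking collective, compute while it (ideally)
         * progresses in the background, then wait.  With good asynchronous
         * progression the elapsed time approaches max(t_compute, t_coll);
         * without it, it approaches t_compute + t_coll. */
        double t0 = MPI_Wtime();
        MPI_Request req;
        MPI_Iallreduce(sendbuf, recvbuf, count, MPI_DOUBLE, MPI_SUM,
                       MPI_COMM_WORLD, &req);
        compute(50L * 1000 * 1000);                /* placeholder workload */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        double elapsed = MPI_Wtime() - t0;

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)
            printf("overlapped time: %g s\n", elapsed);

        free(sendbuf);
        free(recvbuf);
        MPI_Finalize();
        return 0;
    }

The configurations below vary the MPI library, the network, and the progression mode: default library behaviour, one core dedicated to communication progression, or an explicit asynchronous-progress thread.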

All Results

» madmpi: MadMPI default configuration, on InfiniBand

» madmpi1DED: MadMPI with 1 dedicated core, on InfiniBand

» mpc: MPC, on InfiniBand

» mpc1DED: MPC with 1 dedicated core, on InfiniBand

» mpcASYNC: MPC with asynchronous progression, on InfiniBand

» mpich: MPICH 3.3, default configuration, using InfiniBand network

» mpich1DED: MPICH 3.3 with MPICH_ASYNC_PROGRESS=1 and the application restricted to n-1 cores (one core left free for the progress thread), using InfiniBand network (see the setup sketch after this list)

» mpichASYNC: MPICH 3.3 with MPICH_ASYNC_PROGRESS=1, using InfiniBand network

» mvapich: MVAPICH 3.2, using InfiniBand network

» openmpi: OpenMPI 4.0, on InfiniBand, default configuration

» openmpi-bxi: Bull OpenMPI 2.0, on BXI network

» openmpi-omnipath: OpenMPI 4.0, using Intel Omni-Path HFI Silicon 100 Series

» openmpiTCP: OpenMPI 4.0 on TCP

» Mosaic of all results!
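
The mpichASYNC and mpich1DED configurations rely on MPICH's MPICH_ASYNC_PROGRESS=1 environment variable, which enables a software progress thread; mpich1DED additionally runs the application on n-1 cores so that one core stays free for that thread. The snippet below is a hypothetical setup check, not part of BenchNBC or MPICH: it only verifies that MPI_THREAD_MULTIPLE was granted, which MPICH's asynchronous progress typically requires, and reports whether the variable was set.

    /* Hypothetical setup check for the MPICH_ASYNC_PROGRESS configurations
     * (illustration only, not BenchNBC code). */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            const char *async = getenv("MPICH_ASYNC_PROGRESS");
            printf("MPICH_ASYNC_PROGRESS=%s, provided thread level=%d\n",
                   async ? async : "(unset)", provided);
            /* MPICH's progress thread typically needs MPI_THREAD_MULTIPLE. */
            if (async && async[0] == '1' && provided < MPI_THREAD_MULTIPLE)
                printf("warning: asynchronous progress requested but "
                       "MPI_THREAD_MULTIPLE was not granted\n");
        }

        MPI_Finalize();
        return 0;
    }

The dedicated-core variants (madmpi1DED, mpc1DED, mpich1DED) follow the same idea at the placement level: the application uses n-1 cores and the remaining core is left to communication progression.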

madmpi

MadMPI default configuration, on InfiniBand

madmpi1DED

MadMPI with 1 dedicated core, on InfiniBand

mpc

MPC, on InfiniBand

mpc1DED

MPC with 1 dedicated core, on InfiniBand

mpcASYNC

MPC with asynchronous progression, on InfiniBand

mpich

MPICH 3.3, default configuration, using InfiniBand network

mpich1DED

MPICH 3.3 with MPICH_ASYNC_PROGRESS=1 and the application restricted to n-1 cores (one core left free for the progress thread), using InfiniBand network

mpichASYNC

MPICH 3.3 with MPICH_ASYNC_PROGRESS=1, using InfiniBand network

mvapich

MVAPICH 3.2, using InfiniBand network

openmpi-bxi

Bull OpenMPI 2.0, on BXI network

openmpi-omnipath

OpenMPI 4.0, using Intel Omni-Path HFI Silicon 100 Series

openmpi

OpenMPI 4.0, on InfiniBand, default configuration

openmpiTCP

OpenMPI 4.0 on TCP
