BenchNBC

MPI Nonblocking Collectives Overlap Benchmark

Results

To interpret results, see the reference article.

» madmpi: MadMPI default configuration, on InfiniBand

» madmpi1DED: MadMPI with 1 dedicated core, on InfiniBand

» mpc: MPC, on InfiniBand

» mpc1DED: MPC with 1 dedicated core, on InfiniBand

» mpcASYNC: MPC with asynchronous progression, on InfiniBand

» mpich: MPICH 3.3, default configuration, on InfiniBand

» mpich1DED: MPICH 3.3 with MPICH_ASYNC_PROGRESS=1 and the application using n-1 cores, on InfiniBand

» mpichASYNC: MPICH 3.3 with MPICH_ASYNC_PROGRESS=1, on InfiniBand

» mvapich: MVAPICH 3.2, on InfiniBand

» openmpi: OpenMPI 4.0, default configuration, on InfiniBand

» openmpi-bxi: Bull OpenMPI 2.0, on the BXI network

» openmpi-omnipath: OpenMPI 4.0, on Intel Omni-Path (HFI Silicon 100 Series)

» openmpiTCP: OpenMPI 4.0, on TCP

» All results on the same page.
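As a sketch of how the asynchronous-progression configurations above (mpichASYNC, mpich1DED) are obtained with MPICH: the behavior is controlled by the MPICH_ASYNC_PROGRESS environment variable. The benchmark binary name below is an assumption, not the actual BenchNBC executable name.

```shell
# Enable MPICH's asynchronous progress: each rank spawns an extra
# progress thread that advances nonblocking communication in the
# background (the mpichASYNC configuration above).
export MPICH_ASYNC_PROGRESS=1

# Typical launch; "./bench_nbc" is a placeholder binary name.
# For the mpich1DED configuration, the application is additionally
# restricted to n-1 cores so one core stays free for progress threads:
#   mpiexec -n 4 ./bench_nbc
```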

Run the benchmark on your machine, and send us your own results!

Contact

For any questions regarding BenchNBC, please contact:
	alexandre.denis@inria.fr
	florian.reynier@inria.fr