[maker-devel] MPI selection

admin at genome.arizona.edu
Tue Jan 30 10:24:05 MST 2018


Carson Holt wrote on 01/30/2018 09:47 AM:
> The libraries used by MVAPICH2, Intel MPI, and OpenMPI to access
> InfiniBand have a known bug. For performance reasons, InfiniBand
> libraries use registered memory in a way that makes it impossible to
> make system() calls to external programs under MPI (doing so results
> in segfaults). MAKER has to call out to external programs like BLAST,
> exonerate, etc., so it triggers this bug.
>
> The InfiniBand bug is well known, and unfortunately it will not be
> fixed, because fixing it would cause InfiniBand to lose some
> advertised features like remote direct memory access (RDMA).
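
If I understand the failure mode, it boils down to roughly this C
sketch; the blastn command line is just a stand-in for whatever MAKER
shells out to:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Each MPI rank shells out to an external program, the way MAKER does
 * for BLAST, exonerate, etc.  Over an InfiniBand transport, the
 * fork()/exec() behind system() interacts badly with registered
 * memory and can segfault. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char cmd[256];
    snprintf(cmd, sizeof(cmd), "blastn -query chunk_%d.fa -db nt", rank);
    int status = system(cmd);   /* forks a child; unsafe over IB */
    printf("rank %d: child exit status %d\n", rank, status);

    MPI_Finalize();
    return 0;
}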


Well, that stinks!  Maybe that's why we got such a good deal on 
new-old-stock InfiniBand equipment!  Still, it has let us use the full 
speed of our NFS RAIDs, which has been nice.  I will try using ib0; the 
speed is still about 10 Gb/s, but I was under the impression that IPoIB 
would cause packet loss or other problems...
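
Assuming OpenMPI here, its MCA parameters should let me force the TCP 
transport onto the IPoIB interface instead of the native InfiniBand 
path, something like (the rank count is just an example):

mpiexec --mca btl tcp,self --mca btl_tcp_if_include ib0 -n 16 maker

That should sidestep the registered-memory code path entirely, at the 
cost of running the interconnect as plain TCP over IPoIB.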

Thanks for clearing that up.  So is there a fabric/protocol you would 
recommend for clusters running MAKER?
