We run on a standard cluster. We have traditional NFS as well as more advanced Lustre options for shared storage. Each node has both locally mounted disk and in-memory storage available (I never use the in-memory storage though, because MAKER requires a lot of temporary storage).

I run using OpenMPI (it scales better than MPICH2; also, MAKER is incompatible with MVAPICH2 because of a known registered-memory defect in that MPI flavor). We use the SLURM scheduler, although previously we had PBS. I usually run job sizes of between 100 and 200 CPU cores (10 to 20 nodes). We have mixed node types of 12, 16, 20, and 24 cores.

I always set TMP= to a locally mounted disk (never NFS or a RAM disk). The working directory is always NFS or Lustre.

I've also run under a similar configuration on the TACC and XSEDE clusters (https://www.xsede.org). They use SLURM, and previously SGE, for their scheduler. I've been able to run on 600-plus CPU cores per job there, but I get better efficiency with multiple jobs at ~200 CPU cores each (communication overhead gets too high for a single root process to handle effectively above 200 cores).

MAKER will need ~2 GB of RAM for every core you give it with MPI.

—Carson
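To make that concrete, a SLURM submission script for this kind of setup might look roughly like the sketch below. The module names, partition, walltime, and paths are placeholders for your own site; the MAKER-specific points are simply launching under OpenMPI, budgeting ~2 GB of RAM per core, pointing TMP= at node-local disk, and keeping the working directory on shared storage.

  #!/bin/bash
  #SBATCH --job-name=maker
  #SBATCH --ntasks=100             # ~100-200 cores per job works well
  #SBATCH --mem-per-cpu=2G         # MAKER needs ~2 GB of RAM per MPI core
  #SBATCH --partition=batch        # placeholder; use your site's partition
  #SBATCH --time=72:00:00          # placeholder walltime

  # Placeholder module names; load whatever provides OpenMPI and MAKER at your site.
  module load openmpi
  module load maker

  # In maker_opts.ctl, TMP= should point at node-local disk (never NFS or a RAM disk), e.g.
  #   TMP=/scratch/local/$USER     <- placeholder path
  # The working directory itself should be on shared storage (NFS or Lustre)
  # so every node can reach the output directory.
  cd /lustre/projects/my_genome    # placeholder shared-storage path

  mpiexec -n "$SLURM_NTASKS" maker -base my_genome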
On Mar 3, 2016, at 4:01 AM, Florian <fdolze@students.uni-mainz.de> wrote:

> Hello Carson,
>
> May I ask on what kind of hardware setup you guys are running MAKER?
>
> I can't seem to get this running performantly on our cluster. There are usually only 2-3 cores running at 100% and the rest are idle, waiting (I THINK due to I/O blockage, but I'm not sure). Any ideas how I could find the cause of this problem?
>
> I attached a screenshot of the node status for the first hour of the last MAKER run, if that is any help.
>
> On 29.02.2016 20:09, Carson Holt wrote:
>> You can try setting TMP= in the control files to a RAM disk location (you will need a lot of RAM though, perhaps 500 GB). Even then, some components used by MAKER may not function properly with tmpfs, but you can try. If it doesn't work, you'll get an error. The main output directory, on the other hand, must be globally accessible to all nodes if working with MPI, and a RAM disk will only exist and be accessible on a single node (even though a directory with the same name, e.g. /dev/shm, may exist on multiple nodes, they will actually be separate and distinct locations).
>>
>> —Carson
>>
>>> On Feb 26, 2016, at 7:16 AM, Florian <fdolze@students.uni-mainz.de> wrote:
>>>
>>> Hi all,
>>>
>>> I am trying to run MAKER on a cluster (2 nodes with 64 cores each). To speed things up, I copied all input files to a ramdisk to reduce I/O time, but all subsequent results are still written to HDD.
>>>
>>> Is there a way I can tell MAKER to write the maker.results files to a ramdisk (or generally any directory other than the current working dir) too? (Are they actually used for the current run, or are only files in the temp-files location used?)
>>>
>>> Is anybody experienced with running MAKER on a similar setup who could tell me how you are handling this?
>>>
>>> thanks,
>>> Florian
>>>
>>> _______________________________________________
>>> maker-devel mailing list
>>> maker-devel@box290.bluehost.com
>>> http://box290.bluehost.com/mailman/listinfo/maker-devel_yandell-lab.org
>
> <Screenshot from 2016-03-03 11:35:41.png>