<p>Oh, and (1) it will work as long as the evidence etc. is kept in sync across the runs, (2) it will be really inefficient - be glad EBI doesn't use a per-group compute-time fair-share policy ;) </p>
<p>Dan</p>
<p>from me phone...</p>
<div class="gmail_quote">On Mar 19, 2013 12:13 PM, "Daniel Hughes" <<a href="mailto:dsth@ebi.ac.uk">dsth@ebi.ac.uk</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<p>You really don't need to know anything about MPI. While MPI is itself pretty complex, I seem to recall MAKER uses the point-to-point subset alone, mainly to send serialised Perl objects as C strings for IPC across ad hoc infrastructure - but none of that is relevant, as Carson has done all the IPC debugging for you and its use should be transparent. If it's failing, it's almost certainly because you've got discrepancies between the MPI libraries visible at compile time vs. run time, and you may need to force the dynamic linker to behave itself. The only other caveat on EBI infrastructure I can think of off the top of my head relates to cross-node MPI usage when going into the hundreds of processes, but I'm assuming you're not doing that? You need to be more specific about how it's failing.</p>
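<p>Off the top of my head, the kind of sanity check I mean is roughly this - paths are placeholders, it's untested, and you'd adjust it to however your MAKER was actually built:</p>
<pre>
# which MPI toolchain is on the PATH, and which version?
which mpicc mpiexec
mpiexec --version

# check which MPI shared libraries MAKER's compiled MPI layer actually resolves
# at run time (the exact path of the .so depends on your install)
ldd /path/to/maker/perl/MPI/module.so

# if compile-time and run-time libraries disagree, point the dynamic linker at
# the matching MPI installation before launching anything
export LD_LIBRARY_PATH=/path/to/matching/mpi/lib:$LD_LIBRARY_PATH

# then launch a single MPI-aware MAKER job, e.g. 40-way
mpiexec -n 40 maker
</pre>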
<p>dan</p>
<p>from me phone...</p>
<div class="gmail_quote">On Mar 19, 2013 11:55 AM, "Michael Nuhn" <<a href="mailto:mnuhn@ebi.ac.uk" target="_blank">mnuhn@ebi.ac.uk</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hello Carson!<br>
<br>
On 03/19/2013 02:27 PM, Carson Holt wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Yes. If at all possible, use MPI. It removes the overhead of locks, which<br>
occur per primary instance of MAKER. So one MAKER job using 1000<br>
CPUs via MPI will have one shared set of locks; 1000 serial instances<br>
of MAKER, on the other hand, would have 1000x the locks.<br>
</blockquote>
<br>
I don't know a thing about MPI.<br>
<br>
I tried installing MAKER (2.2.7) with mpich-3.0.2, mpich2-1.4.1 and Open MPI, and none of them worked for me. I also tried the automatic installation that comes with MAKER, but it didn't work for me either.<br>
<br>
If need be, I could spend time getting to the bottom of this, but there is no telling how long that would take me, so I'd rather not if there is an alternative.<br>
<br>
Would the approach I outlined before work? (Treating the split files as separate genomes to annotate and then combining the GFFs afterwards.)<br>
<br>
I also like this approach because I would select a few contigs at the beginning and run them on their own. They would complete early, and that way I would get a preview of the results of the run instead of having to wait for everything to complete.<br>
<br>
It might also be more robust, because file-locking issues would be confined to the instances working on one sequence chunk, while the rest of the instances could continue working.<br>
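<br>
Roughly, this is what I have in mind (untested, flags from memory, and I am assuming MAKER's gff3_merge accessory script can be used for the per-chunk merge):<br>
<pre>
# split the assembly into chunks first, e.g. chunks/chunk_000.fasta, chunks/chunk_001.fasta, ...

# run each chunk as if it were its own genome, each in its own working directory
for f in chunks/chunk_*.fasta; do
  d="run_$(basename "$f" .fasta)"
  mkdir -p "$d" && cp maker_*.ctl "$d"/
  # submit however is convenient (bsub etc.); -g overrides the genome= setting in maker_opts.ctl
  ( cd "$d" && maker -g "../$f" )
done

# afterwards, collapse each chunk's datastore into a single GFF3
# (gff3_merge is one of MAKER's accessory scripts)
for d in run_chunk_*; do
  ( cd "$d"/*.maker.output && gff3_merge -d *_master_datastore_index.log )
done

# then concatenate the per-chunk *.all.gff files into the combined annotation
</pre>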
<br>
Cheers,<br>
Michael.<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Alternatively, if you do need to continue without MPI for some reason, I<br>
just finished a devel version of MAKER that has a --no_locks option.<br>
You can never start two instances using the same input FASTA when<br>
--no_locks is specified, but the splitting to use different input FASTAs<br>
that I mentioned before in the example will still work fine.<br>
<br>
I have also updated the indexing/reindexing, so if indexing failures<br>
happen, MAKER will switch between the current working directory and the<br>
TMP= directory from the maker_opts.ctl file so as to try different IO<br>
locations (i.e. NFS and non-NFS). Note that you should never set TMP= in<br>
the control files to an NFS-mounted location (it not only makes things a<br>
lot slower, but BerkeleyDB and SQLite will get frequent errors on NFS).<br>
TMP= defaults to /tmp when not specified.<br>
<br>
I'll send you download information in a separate e-mail. Try a regular<br>
MAKER run to see if the indexing/reindexing changes are sufficient<br>
before attempting the --no_locks option.<br>
<br>
Thanks,<br>
Carson<br>
</blockquote>
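<br>
For reference, my reading of the settings described in the quoted message, roughly (placeholder values; --no_locks only exists in the devel version mentioned above):<br>
<pre>
# maker_opts.ctl: keep TMP= off NFS; leaving it unset falls back to /tmp
TMP=/tmp

# first try a regular run, to see if the indexing/reindexing changes suffice
maker

# only if still needed, and never with two instances sharing the same input fasta:
maker --no_locks
</pre>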
<br>
</blockquote></div>
</blockquote></div>