[maker-devel] master_datastore_index.log file shrinks.
mnuhn
mnuhn at ebi.ac.uk
Mon Mar 25 06:18:11 MDT 2013
Thanks, this works and MAKER is now running under MPI.
Cheers,
Michael.
P.S.:
If anyone is trying to reproduce this, I only had one directory in
LD_PRELOAD and it didn't like the trailing colon, so I removed it to
make it work:
export LD_PRELOAD=/software/openmpi-1.4.3/lib/libmpi.so
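
(If LD_PRELOAD may already be set on some machines and you want to keep
its previous value, a standard shell idiom avoids the trailing colon
when the variable is empty:

export LD_PRELOAD=/software/openmpi-1.4.3/lib/libmpi.so${LD_PRELOAD:+:$LD_PRELOAD}

The ${VAR:+...} expansion only appends ":$LD_PRELOAD" when LD_PRELOAD
is already non-empty.)
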
On 2013-03-19 15:22, Carson Holt wrote:
> I have MAKER working under OpenMPI 1.4.3 (intel compiled).
>
> I had to set a couple of environment variables prior to setup. You
> would probably need to set these values as well. If your OpenMPI path
> was, for example, /software/openmpi-1.4.3/, you would run the
> following commands (with the path set accordingly) before even
> attempting MAKER setup.
>
> export OMPI_MCA_mpi_warn_on_fork=0
> export LD_PRELOAD=/software/openmpi-1.4.3/lib/libmpi.so:$LD_PRELOAD
>
> These not only need to be set before compilation, but also before any
> run (so add them to your ~/.bashrc, ~/.bash_profile, or any module
> load scripts). The LD_PRELOAD statement needs to be set for any
> program using OpenMPI's shared libraries and not just MAKER, so it's
> normally a good idea to have it set system wide for all users. The
> details can be found in the OpenMPI documentation. Note that sometimes
> system library updates can break OpenMPI's shared libraries while not
> breaking OpenMPI itself, so you might also need to recompile OpenMPI
> if it has broken shared libraries.
>
> Once you have those commands in place, run the perl Build.PL step.
> Say yes to install with MPI. Then run ./Build install.
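>
> That is, with the two exports above already in the environment:
>
>   perl Build.PL      # answer "yes" when asked to install with MPI support
>   ./Build install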
>
> Thanks,
> Carson
>
>
>
> On 13-03-19 11:02 AM, "Carson Holt" <carsonhh at gmail.com> wrote:
>
>>Try it with the no_locks option then. Make sure to let one instance
>>finish populating the mpi_blastdb directory before running other
>>instances, as that is where most initial locking occurs.
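>>
>>A rough sketch of that ordering, assuming the split input fastas from
>>the earlier suggestion and MAKER's -g/-genome command-line option
>>(chunk names are illustrative; --no_locks is the flag from the devel
>>build mentioned below):
>>
>>  # let the first instance populate mpi_blastdb/ on its own
>>  maker --no_locks -g chunk_01.fasta
>>
>>  # after the databases are formatted, start the remaining chunks
>>  for f in chunk_02.fasta chunk_03.fasta chunk_04.fasta; do
>>      maker --no_locks -g "$f" &
>>  done
>>  wait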
>>
>>I'll send you more details on how to install with OpenMPI, so you can
>>give that a shot while your jobs are also running serially (so you
>>don't lose time). Also, instead of 50 serial instances, you could try
>>10 with -cpus set to 5.
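>>
>>For instance, each of the 10 instances would be started as something
>>like:
>>
>>  maker -cpus 5 -g chunk_01.fasta
>>
>>so the same 50 cpus are used with far fewer lock-holding instances.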
>>
>>Thanks,
>>Carson
>>
>>
>>
>>On 13-03-19 11:19 AM, "Michael Nuhn" <mnuhn at ebi.ac.uk> wrote:
>>
>>>Hello Carson!
>>>
>>>On 03/19/2013 02:27 PM, Carson Holt wrote:
>>>> Yes. If at all possible use MPI. It removes the overhead of locks
>>>> which happen per primary instance of MAKER. So one MAKER job using
>>>> 1000 cpus via MPI will have one shared set of locks; 1000 serial
>>>> instances of MAKER, on the other hand, would have 1000x the locks.
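>>>>
>>>> (With OpenMPI that is a single launch along the lines of
>>>>
>>>>   mpiexec -n 1000 maker
>>>>
>>>> i.e. one MPI job sharing one set of locks rather than 1000
>>>> independent processes.)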
>>>
>>>I don't know a thing about MPI.
>>>
>>>I tried installing MAKER (2.2.7) with mpich-3.0.2, mpich2-1.4.1 and
>>>OpenMPI, and none of them worked for me. I also tried the automatic
>>>installation that comes with MAKER, but it didn't work for me either.
>>>
>>>If need be, I could spend time getting to the bottom of this, but
>>>there is no telling how long this would take me, so I'd rather not
>>>if there is an alternative.
>>>
>>>Would the approach I outlined before work? (Treating the split files
>>>as separate genomes to annotate and then combining the GFFs
>>>afterwards.)
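>>>
>>>A sketch of that workflow (gff3_merge is one of MAKER's accessory
>>>scripts; the chunk names, -g usage, and the default <name>.all.gff
>>>output names are illustrative assumptions):
>>>
>>>  maker -g chunk_01.fasta   # each chunk annotated as its own "genome"
>>>  maker -g chunk_02.fasta
>>>
>>>  # merge each run's output via its master_datastore_index.log,
>>>  # then combine the per-chunk GFF3 files
>>>  gff3_merge -d chunk_01.maker.output/chunk_01_master_datastore_index.log
>>>  gff3_merge -d chunk_02.maker.output/chunk_02_master_datastore_index.log
>>>  gff3_merge -o combined.gff chunk_01.all.gff chunk_02.all.gff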
>>>
>>>I also like this approach, because I would select a few contigs at
>>>the beginning to run on their own. They would complete early, and
>>>this way I would get a preview of the results of the run instead of
>>>having to wait for everything to complete.
>>>
>>>It might also be more robust, because file locking issues would be
>>>confined to the instances working on a sequence chunk, while the
>>>rest of the instances could continue working.
>>>
>>>Cheers,
>>>Michael.
>>>
>>>> Alternatively, if you do need to continue without MPI for some
>>>> reason, I just finished a devel version of MAKER that has a
>>>> --no_locks option. You can never start two instances using the
>>>> same input fasta when --no_locks is specified, but the splitting
>>>> to use different input fastas I mentioned before in the example
>>>> will still work fine.
>>>>
>>>> I have also updated the indexing/reindexing, so if indexing
>>>> failures happen, MAKER will switch between the current working
>>>> directory and the TMP= directory from the maker_opts.ctl file so
>>>> as to try different IO locations (i.e. NFS and non-NFS). Note that
>>>> you should never set TMP= in the control files to an NFS mounted
>>>> location (it not only makes things a lot slower, but BerkeleyDB
>>>> and SQLite will get frequent errors on NFS). TMP= defaults to /tmp
>>>> when not specified.
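>>>>
>>>> For example, in maker_opts.ctl point TMP= at a node-local disk
>>>> (the path here is just an example):
>>>>
>>>>   TMP=/scratch/local/maker_tmp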
>>>>
>>>> I'll send you download information in a separate e-mail. Try a
>>>> regular MAKER run to see if the indexing/reindexing changes are
>>>> sufficient before attempting the --no_locks option.
>>>>
>>>> Thanks,
>>>> Carson