[maker-devel] Fwd: ERROR: MPI_Recv(186), dequeue_and_set_error(596)
Yunfei Guo
guoyunfei1989 at gmail.com
Thu Jul 26 09:10:42 MDT 2012
---------- Forwarded message ----------
From: Yunfei Guo <guoyunfei1989 at gmail.com>
Date: Thu, Jul 26, 2012 at 8:10 AM
Subject: Re: [maker-devel] ERROR: MPI_Recv(186), dequeue_and_set_error(596)
To: Carson Holt <carsonhh at gmail.com>
Hi Carson, the same error occurred again. How can I check whether it was
caused by the same node? Also, if I run maker on a single node instead of
two nodes, will the same error appear again? Thank you.
#-------------------------------#
SIGCHLD handler "DEFAULT" not defined.
Fatal error in MPI_Recv: Other MPI error, error stack:
MPI_Recv(186).............: MPI_Recv(buf=0x7fff1c3dd3b0, count=2, MPI_INT,
src=MPI_ANY_SOURCE, tag=1111, MPI_COMM_WORLD, status=0x7fff1c3dd390) failed
dequeue_and_set_error(596): Communication error with rank 21
running exonerate search.
#--------- command -------------#
Widget::exonerate::protein2genome:
/home/username/usr/bin/exonerate -q
/home/yunfeiguo/projects/fish/Nigro/run/dir_Nigro-53k00/Nigro-53k_part.maker.output/Nigro-53k_part_datastore/81/43/scaffold5780//theVoid.scaffold5780/tr%7CG3N4L5%7CG3N4L5_GASAC.for.6527-8832.2.fasta
-t
/home/yunfeiguo/projects/fish/Nigro/run/dir_Nigro-53k00/Nigro-53k_part.maker.output/Nigro-53k_part_datastore/81/43/scaffold5780//theVoid.scaffold5780/scaffold5780.6527-8832.2.fasta
-Q protein -T dna -m protein2genome --softmasktarget --percent 20
--showcigar >
/home/yunfeiguo/projects/fish/Nigro/run/dir_Nigro-53k00/Nigro-53k_part.maker.output/Nigro-53k_part_datastore/81/43/scaffold5780//theVoid.scaffold5780/scaffold5780.6527-8832.tr%7CG3N4L5%7CG3N4L5_GASAC.p_exonerate.2
#-------------------------------#
Perl exited with active threads:
1 running and unjoined
0 finished and unjoined
0 running and detached
...
Yunfei
On Wed, Jul 25, 2012 at 1:43 PM, Yunfei Guo <guoyunfei1989 at gmail.com> wrote:
> Thanks, Carson. Actually I already set clean_try=1.
>
>
> On Wed, Jul 25, 2012 at 1:34 PM, Carson Holt <carsonhh at gmail.com> wrote:
>
>> That second error from 2.25 seems to be thrown by Perl's Storable
>> module. There may be some weird partial serialization of the data that
>> occurred on the first failure and is now causing failures on retry. You
>> can set clean_try=1 in MAKER to let it wipe out the data for a failed contig
>> before retrying. That can sometimes help get around weird hard failures.
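>>
>> (A minimal sketch of where that flag lives, assuming a standard MAKER
>> setup: clean_try is a line in the maker_opts.ctl control file. The comment
>> text below is paraphrased, not copied from a specific release.)
>>
>>     clean_try=1 #wipe data for a failed contig before retrying (0|1)
>>
>> If you prefer to flip it from the shell rather than an editor, something
>> like this should work:
>>
>>     sed -i 's/^clean_try=0/clean_try=1/' maker_opts.ctl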
>>
>> Thanks,
>> Carson
>>
>>
>> From: Yunfei Guo <guoyunfei1989 at gmail.com>
>> Date: Wednesday, 25 July, 2012 4:26 PM
>> To: Carson Holt <carsonhh at gmail.com>
>> Subject: Re: [maker-devel] ERROR: MPI_Recv(186),
>> dequeue_and_set_error(596)
>>
>> Thank you, Carson. I'm rerunning maker2.26 now. I also just tried maker2.25,
>> and it failed this time with similar errors (below). I guess it might be
>> caused by the cluster (or a node) itself, like you said, because we just
>> added a few nodes and more memory. I'll ask the admin to see whether he
>> can explain this.
>> #-------------------------------#
>> Thread 1 terminated abnormally:
>> ------------- EXCEPTION: Bio::Root::Exception -------------
>> MSG: no data for midline Sequence with id BL_ORD_ID:126195 no longer
>> exists in database...alignment skipped
>> STACK: Error::throw
>> STACK: Bio::Root::Root::throw
>> /home/yunfeiguo/perl5/lib/perl5/Bio/Root/Root.pm:472
>> STACK: Bio::SearchIO::blast::next_result
>> /home/yunfeiguo/perl5/lib/perl5/Bio/SearchIO/blast.pm:1888
>> STACK: Widget::tblastx::keepers
>> /home/yunfeiguo/Downloads/maker/bin/../lib/Widget/tblastx.pm:114
>> STACK: Widget::tblastx::parse
>> /home/yunfeiguo/Downloads/maker/bin/../lib/Widget/tblastx.pm:95
>> STACK: GI::tblastx_as_chunks
>> /home/yunfeiguo/Downloads/maker/bin/../lib/GI.pm:2612
>> STACK: Process::MpiChunk::_go
>> /home/yunfeiguo/Downloads/maker/bin/../lib/Process/MpiChunk.pm:1829
>> STACK: Process::MpiChunk::run
>> /home/yunfeiguo/Downloads/maker/bin/../lib/Process/MpiChunk.pm:331
>> STACK: main::node_thread /home/yunfeiguo/Downloads/maker/bin/maker:1308
>> STACK: threads::new
>> /home/yunfeiguo/perl5/lib/perl5/x86_64-linux-thread-multi/forks.pm:799
>> STACK: /home/yunfeiguo/Downloads/maker/bin/maker:804
>> -----------------------------------------------------------
>> Cannot restore overloading on HASH(0x1b7f9b60) (package
>> Bio::Root::Exception) (even after a "require Bio::Root::Exception;") at
>> /home/yunfeiguo/perl5/lib/perl5/x86_64-linux-thread-multi/Storable.pm line
>> 416, at /home/yunfeiguo/perl5/lib/perl5/x86_64-linux-thread-multi/
>> forks.pm line 2256.
>> Compilation failed in require at
>> /home/yunfeiguo/Downloads/maker/bin/maker line 11.
>> BEGIN failed--compilation aborted at
>> /home/yunfeiguo/Downloads/maker/bin/maker line 11.
>> Perl exited with active threads:
>> 1 running and unjoined
>> 0 finished and unjoined
>> 0 running and detached
>> deleted:0 hits
>> Perl exited with active threads:
>> 1 running and unjoined
>> 0 finished and unjoined
>> 0 running and detached
>> Perl exited with active threads:
>> 1 running and unjoined
>> 0 finished and unjoined
>> 0 running and detached
>> Perl exited with active threads:
>> 1 running and unjoined
>> 0 finished and unjoined
>> 0 running and detached
>> Perl exited with active threads:
>> 1 running and unjoined
>> 0 finished and unjoined
>> 0 running and detached
>> Fatal error in MPI_Recv: Other MPI error, error stack:
>> MPI_Recv(186).............: MPI_Recv(buf=0x7fff05c78630, count=2,
>> MPI_INT, src=0, tag=5555, MPI_COMM_WORLD, status=0x7fff05c78610) failed
>> dequeue_and_set_error(596): Communication error with rank 0
>> Perl exited with active threads:
>> 1 running and unjoined
>> 0 finished and unjoined
>> 0 running and detached
>> Perl exited with active threads:
>> 1 running and unjoined
>> 0 finished and unjoined
>> 0 running and detached
>> Perl exited with active threads:
>> 1 running and unjoined
>> 0 finished and unjoined
>> 0 running and detached
>> Perl exited with active threads:
>> 1 running and unjoined
>> 0 finished and unjoined
>> 0 running and detached
>> Perl exited with active threads:
>> 1 running and unjoined
>> 0 finished and unjoined
>> 0 running and detached
>> Perl exited with active threads:
>> 1 running and unjoined
>> 0 finished and unjoined
>> 0 running and detached
>> Perl exited with active threads:
>> 1 running and unjoined
>> 0 finished and unjoined
>> 0 running and detached
>> Fatal error in MPI_Recv: Other MPI error, error stack:
>> MPI_Recv(186).............: MPI_Recv(buf=0x7fff1414fb00, count=2,
>> MPI_INT, src=0, tag=5555, MPI_COMM_WORLD, status=0x7fff1414fae0) failed
>> dequeue_and_set_error(596): Communication error with rank 0
>> Perl exited with active threads:
>> 1 running and unjoined
>> 0 finished and unjoined
>> 0 running and detached
>> Fatal error in MPI_Recv: Other MPI error, error stack:
>> MPI_Recv(186).............: MPI_Recv(buf=0x7fff25d86c00, count=2,
>> MPI_INT, src=0, tag=5555, MPI_COMM_WORLD, status=0x7fff25d86be0) failed
>> dequeue_and_set_error(596): Communication error with rank 0
>> Perl exited with active threads:
>> 1 running and unjoined
>> 0 finished and unjoined
>> 0 running and detached
>> cleaning tblastx...
>> cleaning clusters....
>> Fatal error in MPI_Recv: Other MPI error, error stack:
>> MPI_Recv(186).............: MPI_Recv(buf=0x7fff1b71f1f0, count=2,
>> MPI_INT, src=0, tag=5555, MPI_COMM_WORLD, status=0x7fff1b71f1d0) failed
>> dequeue_and_set_error(596): Communication error with rank 0
>> Perl exited with active threads:
>> 1 running and unjoined
>> 0 finished and unjoined
>> 0 running and detached
>> Fatal error in MPI_Recv: Other MPI error, error stack:
>> MPI_Recv(186).............: MPI_Recv(buf=0x7fffc99d29c0, count=2,
>> MPI_INT, src=0, tag=5555, MPI_COMM_WORLD, status=0x7fffc99d29a0) failed
>> dequeue_and_set_error(596): Communication error with rank 0
>> Perl exited with active threads:
>> 1 running and unjoined
>> 0 finished and unjoined
>> 0 running and detached
>> Fatal error in MPI_Recv: Other MPI error, error stack:
>> MPI_Recv(186).............: MPI_Recv(buf=0x7fffc4aaf720, count=2,
>> MPI_INT, src=0, tag=5555, MPI_COMM_WORLD, status=0x7fffc4aaf700) failed
>> dequeue_and_set_error(596): Communication error with rank 0
>> Perl exited with active threads:
>> 1 running and unjoined
>> 0 finished and unjoined
>> 0 running and detached
>> in cluster::shadow_cluster...
>> Fatal error in MPI_Recv: Other MPI error, error stack:
>> MPI_Recv(186).............: MPI_Recv(buf=0x7fff317862a0, count=2,
>> MPI_INT, src=0, tag=5555, MPI_COMM_WORLD, status=0x7fff31786280) failed
>> dequeue_and_set_error(596): Communication error with rank 0
>> Perl exited with active threads:
>> 1 running and unjoined
>> 0 finished and unjoined
>> 0 running and detached
>> ...finished clustering.
>> Fatal error in MPI_Recv: Other MPI error, error stack:
>> MPI_Recv(186).............: MPI_Recv(buf=0x7fff8abb8e50, count=2,
>> MPI_INT, src=0, tag=5555, MPI_COMM_WORLD, status=0x7fff8abb8e30) failed
>> dequeue_and_set_error(596): Communication error with rank 0
>> Perl exited with active threads:
>> 1 running and unjoined
>> 0 finished and unjoined
>> 0 running and detached
>> Fatal error in MPI_Recv: Other MPI error, error stack:
>> MPI_Recv(186).............: MPI_Recv(buf=0x7fff1d1ff180, count=2,
>> MPI_INT, src=0, tag=5555, MPI_COMM_WORLD, status=0x7fff1d1ff160) failed
>> dequeue_and_set_error(596): Communication error with rank 0
>> Perl exited with active threads:
>> 1 running and unjoined
>> 0 finished and unjoined
>> 0 running and detached
>> Fatal error in MPI_Recv: Other MPI error, error stack:
>> MPI_Recv(186).............: MPI_Recv(buf=0x7fff4d865850, count=2,
>> MPI_INT, src=0, tag=5555, MPI_COMM_WORLD, status=0x7fff4d865830) failed
>> dequeue_and_set_error(596): Communication error with rank 0
>> Perl exited with active threads:
>> 1 running and unjoined
>> 0 finished and unjoined
>> 0 running and detached
>> Fatal error in MPI_Recv: Other MPI error, error stack:
>> MPI_Recv(186).............: MPI_Recv(buf=0x7fffbec98150, count=2,
>> MPI_INT, src=0, tag=5555, MPI_COMM_WORLD, status=0x7fffbec98130) failed
>> dequeue_and_set_error(596): Communication error with rank 0
>> Fatal error in MPI_Recv: Other MPI error, error stack:
>> MPI_Recv(186).............: MPI_Recv(buf=0x7fffa4ead990, count=2,
>> MPI_INT, src=0, tag=5555, MPI_COMM_WORLD, status=0x7fffa4ead970) failed
>> dequeue_and_set_error(596): Communication error with rank 0
>> Perl exited with active threads:
>> 1 running and unjoined
>> 0 finished and unjoined
>> 0 running and detached
>> Perl exited with active threads:
>> 1 running and unjoined
>> 0 finished and unjoined
>> 0 running and detached
>>
>> On Wed, Jul 25, 2012 at 12:46 PM, Carson Holt <carsonhh at gmail.com> wrote:
>>
>>> MPI is notorious for inexplicable communication errors, so first I would
>>> suggest just restarting and seeing if it happens again (MAKER will pick up
>>> where it left off on restart, so there is no need to alter settings or files).
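>>>
>>> (For reference, restarting really is just re-running the same command in
>>> the same working directory; a hedged example with MPICH2's mpiexec, where
>>> the core count and install path are placeholders, not your actual settings:)
>>>
>>>     cd /path/to/run/dir
>>>     mpiexec -n 24 /path/to/maker/bin/maker
>>>
>>> MAKER sees the existing datastore and skips contigs that already finished.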
>>>
>>> If it happens again, we can look into it, but no component of the MPI
>>> communication framework changed between 2.25 and 2.26 (100% identical), so
>>> my first instinct is that this was just what the message said,
>>> a "Communication error with rank 18". If it happens again, I can try adding
>>> some extra messages so we can see the hostname of rank 18. That way we can
>>> identify whether it's consistently a specific node on your cluster.
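>>>
>>> (In the meantime, a quick way to see the rank-to-host mapping yourself,
>>> assuming MPICH2's hydra launcher exports PMI_RANK to each process (that
>>> variable is an MPICH detail, not something MAKER prints), is something like:)
>>>
>>>     mpiexec -n 24 sh -c 'echo "rank $PMI_RANK -> $(hostname)"' | sort -n -k2
>>>
>>> Run it under the same SGE allocation as the MAKER job so the mapping matches.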
>>>
>>> Let me know if you see it again.
>>>
>>> Thanks,
>>> Carson
>>>
>>>
>>>
>>> From: Yunfei Guo <guoyunfei1989 at gmail.com>
>>> Date: Wednesday, 25 July, 2012 3:15 PM
>>> To: <maker-devel at yandell-lab.org>
>>> Subject: [maker-devel] ERROR: MPI_Recv(186), dequeue_and_set_error(596)
>>>
>>> Hi everyone,
>>>
>>> I ran maker2.25 without a problem, but with maker2.26 I encountered the
>>> following error after running it for ~8 hr with 2 nodes and 24 CPUs. Do you
>>> have any idea what's going on here? Some contigs did get finished, so maybe
>>> this is not a big problem. My mpich2 version is 1.4.1p1, and the job
>>> scheduling system is SGE. Thanks!
>>>
>>> running blast search.
>>> #--------- command -------------#
>>> Widget::blastx:
>>> /home/yunfeiguo/Downloads/maker/bin/../exe/blast/bin/blastx -db
>>> /tmp/6480.1.all.q/maker_PQOTIq/concatPro%2Etxt.mpi.10.4 -query
>>> /tmp/6480.1.all.q/maker_PQOTIq/rank3/scaffold2602.0 -num_alignments 10000
>>> -num_descriptions 10000 -evalue 1e-06 -dbsize 300 -searchsp 500000000
>>> -num_threads 1 -seg yes -soft_masking true -lcase_masking -show_gis -out
>>> /home/yunfeiguo/projects/fish/Nigro/run/dir_Nigro-53k00/Nigro-53k_part.maker.output/Nigro-53k_part_datastore/7A/37/scaffold2602//theVoid.scaffold2602/scaffold2602.0.concatPro%2Etxt.blastx.temp_dir/concatPro%2Etxt.mpi.10.4.blastx
>>> #-------------------------------#
>>> deleted:-1 hits
>>> SIGCHLD handler "DEFAULT" not defined.
>>> SIGCHLD handler "DEFAULT" not defined.
>>> running exonerate search.
>>> #--------- command -------------#
>>> Widget::exonerate::protein2genome:
>>> /home/username/usr/bin/exonerate -q
>>> /home/yunfeiguo/projects/fish/Nigro/run/dir_Nigro-53k00/Nigro-53k_part.maker.output/Nigro-53k_part_datastore/F9/9B/scaffold2590//theVoid.scaffold2590/sp%7CQ8N8A2%7CANR44_HUMAN.for.1-3712.8.fasta
>>> -t
>>> /home/yunfeiguo/projects/fish/Nigro/run/dir_Nigro-53k00/Nigro-53k_part.maker.output/Nigro-53k_part_datastore/F9/9B/scaffold2590//theVoid.scaffold2590/scaffold2590.1-3712.8.fasta
>>> -Q protein -T dna -m protein2genome --softmasktarget --percent 20
>>> --showcigar >
>>> /home/yunfeiguo/projects/fish/Nigro/run/dir_Nigro-53k00/Nigro-53k_part.maker.output/Nigro-53k_part_datastore/F9/9B/scaffold2590//theVoid.scaffold2590/scaffold2590.1-3712.sp%7CQ8N8A2%7CANR44_HUMAN.p_exonerate.8
>>> #-------------------------------#
>>> Fatal error in MPI_Recv: Other MPI error, error stack:
>>> MPI_Recv(186).............: MPI_Recv(buf=0x7fffa3a2e760, count=2,
>>> MPI_INT, src=MPI_ANY_SOURCE, tag=1111, MPI_COMM_WORLD,
>>> status=0x7fffa3a2e740) failed
>>> dequeue_and_set_error(596): Communication error with rank 18
>>> running blast search.
>>> #--------- command -------------#
>>> Widget::blastx:
>>> /home/yunfeiguo/Downloads/maker/bin/../exe/blast/bin/blastx -db
>>> /tmp/6480.1.all.q/maker_PQOTIq/concatPro%2Etxt.mpi.10.8 -query
>>> /tmp/6480.1.all.q/maker_PQOTIq/rank11/scaffold2575.0 -num_alignments 10000
>>> -num_descriptions 10000 -evalue 1e-06 -dbsize 300 -searchsp 500000000
>>> -num_threads 1 -seg yes -soft_masking true -lcase_masking -show_gis -out
>>> /home/yunfeiguo/projects/fish/Nigro/run/dir_Nigro-53k00/Nigro-53k_part.maker.output/Nigro-53k_part_datastore/F0/AE/scaffold2575//theVoid.scaffold2575/scaffold2575.0.concatPro%2Etxt.blastx.temp_dir/concatPro%2Etxt.mpi.10.8.blastx
>>> #-------------------------------#
>>> running blast search.
>>> #--------- command -------------#
>>> Widget::tblastx:
>>> /home/yunfeiguo/Downloads/maker/bin/../exe/blast/bin/tblastx -db
>>> /tmp/6480.1.all.q/maker_PQOTIq/AllSebESTs_plus_Rubri%2Efasta.mpi.10.1
>>> -query /tmp/6480.1.all.q/maker_PQOTIq/rank7/scaffold2620.0 -num_alignments
>>> 10000 -num_descriptions 10000 -evalue 1e-10 -dbsize 1000 -searchsp
>>> 500000000 -num_threads 1 -lcase_masking -seg yes -soft_masking true
>>> -show_gis -out
>>> /home/yunfeiguo/projects/fish/Nigro/run/dir_Nigro-53k00/Nigro-53k_part.maker.output/Nigro-53k_part_datastore/6B/FB/scaffold2620//theVoid.scaffold2620/scaffold2620.0.AllSebESTs_plus_Rubri%2Efasta.tblastx.temp_dir/AllSebESTs_plus_Rubri%2Efasta.mpi.10.1.tblastx
>>> #-------------------------------#
>>> running exonerate search.
>>> #--------- command -------------#
>>> Widget::exonerate::protein2genome:
>>> /home/username/usr/bin/exonerate -q
>>> /home/yunfeiguo/projects/fish/Nigro/run/dir_Nigro-53k00/Nigro-53k_part.maker.output/Nigro-53k_part_datastore/F9/9B/scaffold2590//theVoid.scaffold2590/sp%7CQ8NB46%7CANR52_HUMAN.for.1-3712.8.fasta
>>> -t
>>> /home/yunfeiguo/projects/fish/Nigro/run/dir_Nigro-53k00/Nigro-53k_part.maker.output/Nigro-53k_part_datastore/F9/9B/scaffold2590//theVoid.scaffold2590/scaffold2590.1-3712.8.fasta
>>> -Q protein -T dna -m protein2genome --softmasktarget --percent 20
>>> --showcigar >
>>> /home/yunfeiguo/projects/fish/Nigro/run/dir_Nigro-53k00/Nigro-53k_part.maker.output/Nigro-53k_part_datastore/F9/9B/scaffold2590//theVoid.scaffold2590/scaffold2590.1-3712.sp%7CQ8NB46%7CANR52_HUMAN.p_exonerate.8
>>> #-------------------------------#
>>> cleaning blastx...
>>> in cluster::shadow_cluster...
>>> ...finished clustering.
>>> cleaning clusters....
>>> total clusters:1 now processing 0
>>> ...processing 0 of 2
>>> deleted:0 hits
>>> ...processing 1 of 2
>>> running blast search.
>>> #--------- command -------------#
>>> Widget::tblastx:
>>> /home/yunfeiguo/Downloads/maker/bin/../exe/blast/bin/tblastx -db
>>> /tmp/6480.1.all.q/maker_PQOTIq/AllSebESTs_plus_Rubri%2Efasta.mpi.10.6
>>> -query /tmp/6480.1.all.q/maker_PQOTIq/rank9/scaffold2615.0 -num_alignments
>>> 10000 -num_descriptions 10000 -evalue 1e-10 -dbsize 1000 -searchsp
>>> 500000000 -num_threads 1 -lcase_masking -seg yes -soft_masking true
>>> -show_gis -out
>>> /home/yunfeiguo/projects/fish/Nigro/run/dir_Nigro-53k00/Nigro-53k_part.maker.output/Nigro-53k_part_datastore/E2/6E/scaffold2615//theVoid.scaffold2615/scaffold2615.0.AllSebESTs_plus_Rubri%2Efasta.tblastx.temp_dir/AllSebESTs_plus_Rubri%2Efasta.mpi.10.6.tblastx
>>> #-------------------------------#
>>> deleted:0 hits
>>> running exonerate search.
>>> #--------- command -------------#
>>> Widget::exonerate::protein2genome:
>>> /home/username/usr/bin/exonerate -q
>>> /home/yunfeiguo/projects/fish/Nigro/run/dir_Nigro-53k00/Nigro-53k_part.maker.output/Nigro-53k_part_datastore/F9/9B/scaffold2590//theVoid.scaffold2590/tr%7CE7F7S0%7CE7F7S0_DANRE.for.1-3712.9.fasta
>>> -t
>>> /home/yunfeiguo/projects/fish/Nigro/run/dir_Nigro-53k00/Nigro-53k_part.maker.output/Nigro-53k_part_datastore/F9/9B/scaffold2590//theVoid.scaffold2590/scaffold2590.1-3712.9.fasta
>>> -Q protein -T dna -m protein2genome --softmasktarget --percent 20
>>> --showcigar >
>>> /home/yunfeiguo/projects/fish/Nigro/run/dir_Nigro-53k00/Nigro-53k_part.maker.output/Nigro-53k_part_datastore/F9/9B/scaffold2590//theVoid.scaffold2590/
>>> scaffold2590.1-3712.tr%7CE7F7S0%7CE7F7S0_DANRE.p_exonerate.9
>>> #-------------------------------#
>>> deleted:0 hits
>>> cleaning blastx...
>>> cleaning clusters....
>>> total clusters:1 now processing 0
>>> cleaning clusters....
>>> total clusters:1 now processing 0
>>> deleted:-1 hits
>>> deleted:-1 hits
>>> deleted:-6 hits
>>> deleted:-3 hits
>>> deleted:-2 hits
>>> Perl exited with active threads:
>>> 1 running and unjoined
>>> 0 finished and unjoined
>>> 0 running and detached
>>> Perl exited with active threads:
>>> 1 running and unjoined
>>> 0 finished and unjoined
>>> 0 running and detached
>>> Perl exited with active threads:
>>> 1 running and unjoined
>>> 0 finished and unjoined
>>> 0 running and detached
>>> Perl exited with active threads:
>>> 1 running and unjoined
>>> 0 finished and unjoined
>>> 0 running and detached
>>> Perl exited with active threads:
>>> 1 running and unjoined
>>> 0 finished and unjoined
>>> 0 running and detached
>>> Perl exited with active threads:
>>> 1 running and unjoined
>>> 0 finished and unjoined
>>> 0 running and detached
>>> Perl exited with active threads:
>>> 1 running and unjoined
>>> 0 finished and unjoined
>>> 0 running and detached
>>> Perl exited with active threads:
>>> 1 running and unjoined
>>> 0 finished and unjoined
>>> 0 running and detached
>>> Perl exited with active threads:
>>> 1 running and unjoined
>>> 0 finished and unjoined
>>> 0 running and detached
>>> Perl exited with active threads:
>>> 1 running and unjoined
>>> 0 finished and unjoined
>>> 0 running and detached
>>> Perl exited with active threads:
>>> 1 running and unjoined
>>> 0 finished and unjoined
>>> 0 running and detached
>>>
>>> Yunfei
>>>
>>> _______________________________________________ maker-devel mailing list
>>> maker-devel at box290.bluehost.com
>>> http://box290.bluehost.com/mailman/listinfo/maker-devel_yandell-lab.org
>>>
>>
>>
>