exonerate() raised: [Errno 24] Too many open files #110
Comments
Hi Alicia, I haven't seen this particular issue with HybPiper before. It suggests that the number of log files being opened by the Exonerate stage of the pipeline is greater than the number allowed by your operating system. However, unless the limit on your system is very small and you're running HybPiper with hundreds of CPUs/threads, this shouldn't be the case. Can you tell me:
1. What your operating system is (macOS and version, or Linux distribution and version)?
2. What the output of the command ulimit -a is?
Also, can you send me the HybPiper log file for this sample, found in the sample directory? Cheers, Chris
Hi Chris,
Thank you for your reply.
1. My system is Linux: CentOS, VERSION="7 (Core)".
2. The output of the command ulimit -a is:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 515138
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 131072
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) unlimited
cpu time (seconds, -t) unlimited
max user processes (-u) 4096
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Please find attached the log file.
Thank you,
Cheers
Alicia
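As an aside, the same open-file limit reported by ulimit -a can be read from inside Python with the standard library, which is handy for checking it in the exact environment a job runs in. This snippet is illustrative and not part of HybPiper; the resource module is Unix-only.

```python
import resource

# Read this process's open-file limit -- the "open files (-n)" line in
# `ulimit -a`. `soft` is the enforced limit; `hard` is the ceiling the
# soft limit can be raised to without extra privileges.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open files: soft={soft}, hard={hard}")
```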
Hi Alicia, I can't see your log file attached - could you re-upload it, please? Your open file limit (131072) looks fine to me, as far as HybPiper goes. To confirm, you're running this on a local machine (not a remote cluster), and the ulimit -a output is from that local machine? Is the problem repeatable? That is, if you run the same sample with the same target file and settings, do you get the same error? Also, do you mean that HybPiper ran fine for different samples when using the same target file? Are you able to upload the target file here? I'll load up a CentOS 7 VM and see if I can replicate the problem. Cheers, Chris
Hi Chris,
Thank you again!
I have attached the log file.
I am running this on a High-Performance Cluster (SI/HPC), and yes, I had
the same problem three times.
I ran HybPiper fine for different samples using the same target file.
Best wishes
Thank you
Hi Alicia, I just noticed that you're replying to this issue thread via email. Attachments you send via email replies won't appear here. Can you attach both the HybPiper log file for this sample and your target file directly via this issue thread on the git repository webpage? Ah, you're using an HPC. In that case, we need to check the output of ulimit -a on the compute node where the job actually runs, not just the login node. Cheers, Chris
Hi Chris! Yes, I was replying via email, sorry. The output of ulimit -a is:
core file size (blocks, -c) 0
Please find attached the files. Thank you!
AT255_hybpiper_assemble_2023-02-05-12_15_21.log
Hi Alicia, Thanks for those files. Can you confirm that the ulimit -a output you posted comes from a compute node, rather than the login node? Regardless, I see from your log file that you ran HybPiper with a single thread/cpu. When you've re-run this sample, is it always the same genes that cause this error? The gene list can be seen at the end of the log file.
Also, can you please send me the contigs.fasta file for one of the genes that raises the error? Lastly, I had a look at your target file, which contains 1,086 sequences for 1,013 unique genes. It looks like the sequences are all transcript sequences (i.e., with 5' and 3' UTRs), rather than CDS sequences that can be correctly translated from the first reading frame to produce the corresponding protein sequence. I recommend reading this part of the wiki to see how this can reduce your locus recovery. Cheers, Chris
Hi Chris, Thank you again for your reply. Sorry, I misunderstood it: the ulimit -a output that I posted is from my HPC login node. When I re-run this sample, or another one from the same batch, it is always the same genes that cause the error. Attached are the contigs.fasta and target.fasta files. Thank you for your time and help! Best wishes, Alicia
LOC100263869_contigs.fasta.txt
Thanks! I should've also asked for the interleaved reads file (*_interleaved.fasta) for one of the failing genes.
Sure thing! Thank you! I will check your recommendation about the target file!
LOC100264328_interleaved.fasta.txt
Hi Alicia, I've run just the Exonerate contigs step of the pipeline on the files you provided, and it completes without error on my system. Given that, I'm still not sure what could be causing the error in your case. Can you please:
Thanks, Chris
Hello,
I ran the "hybpiper assemble" command (HybPiper v2.0.1) with my last batch of samples and I get the following error for several genes:
For gene LOC100267692 exonerate() raised: [Errno 24] Too many open files:
'/T399/LOC100267692/LOC100267692_2023-02-07-01_08_33.log'
error.traceback is: Traceback (most recent call last):
File "/bioinformatics/hybpiper/2.0.1/lib/python3.9/site-packages/pebble/common.py", line 174, in process_execute
return function(*args, **kwargs)
File "/bioinformatics/hybpiper/2.0.1/lib/python3.9/site-packages/hybpiper/assemble.py", line 868, in exonerate
worker_configurer_func(gene_name) # set up process-specific logging to file
File "/bioinformatics/hybpiper/2.0.1/lib/python3.9/site-packages/hybpiper/utils.py", line 358, in worker_configurer
File "/bioinformatics/hybpiper/2.0.1/lib/python3.9/logging/__init__.py", line 1146, in __init__
File "/bioinformatics/hybpiper/2.0.1/lib/python3.9/logging/__init__.py", line 1175, in _open
OSError: [Errno 24] Too many open files: '/T399/LOC100267692/LOC100267692_2023-02-07-01_08_33.log'
I couldn't find any information related to this problem, and HybPiper has worked fine with other samples.
Could you please help me out with this?
Thank you
Best wishes
Alicia