Susheel Bhanu Busi
Hey @zachary-foster, thanks a lot for the pointers. Yeah, it did run for longer than that and eventually finished, so all in all, a good exercise. I did...
Hey @MrOlm, thanks very much for the quick response. Please see the answers to your questions below. 1. The version I'm using is `drep==3.2.2`: ``` ...::: dRep v3.2.2 :::... Matt Olm....
Thanks Matt. Is `v3.4.0` on conda or pip? Will give this a go and get back. May take a bit though before I reply again.
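For anyone else checking the same thing, here is a minimal sketch of how one might look for the release; it assumes dRep is published as `drep` on bioconda and PyPI, which I have not re-verified for `v3.4.0`:

```
# Assumption: dRep is distributed as the "drep" package on bioconda and PyPI.
# List the builds available on bioconda
conda search -c bioconda drep

# Or try pinning the version from PyPI; pip reports available versions if the pin fails
pip install "drep==3.4.0"
```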
Hi, I was wondering something along the same lines, and whether there was a way to specify an increased number of threads to speed up the process. Thank you!
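Assuming this refers to the dRep dereplicate workflow discussed above, a hedged sketch is below; the `-p`/`--processors` option comes from the dRep help text, so please confirm it with `dRep dereplicate -h` on your version:

```
# Placeholder paths; bump the worker count via -p/--processors (assumed flag name)
dRep dereplicate drep_out/ -g genomes/*.fasta -p 32
```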
Hi, sure. Here's the command below, and two of the files showing the same issue are attached. [GL_R1_GL5_UP_1_C1.3.1_sub.merged.gbk.txt](https://github.com/Merck/deepbgc/files/6776264/GL_R1_GL5_UP_1_C1.3.1_sub.merged.gbk.txt) [GL_R80_GL56_UP_1_maxbin_res.049.fasta_sub.merged.gbk.txt](https://github.com/Merck/deepbgc/files/6776265/GL_R80_GL56_UP_1_maxbin_res.049.fasta_sub.merged.gbk.txt) I added the `.txt` extension to get past the upload file...
@milot-mirdita I'm running into `segmentation fault` issues when running on 2 nodes with 128 CPUs each (total memory of 448 GB). Further review revealed that I'm running out of memory....
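For reference, the workaround I was pointed to was to cap MMseqs2's working memory so the prefilter splits its target table instead of exhausting RAM. A sketch is below, not the exact command from my run; the input/output names are placeholders, and `--split-memory-limit` is taken from the MMseqs2 user guide, so check it against your installed version:

```
# Placeholder database and tmp paths; cap memory so the prefilter splits its work
mmseqs cluster seqDB clusterDB tmp/ \
    --threads 128 \
    --split-memory-limit 400G
```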
Okay, will give this a go and report back. Thank you!!
@milot-mirdita Below is the log file from one of the runs. Looks like it's running out of memory before the job dies. [chunk00_clustering_stdout.log](https://github.com/soedinglab/MMseqs2/files/9611572/chunk00_clustering_stdout.log) And here is the job efficiency report...
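In case it helps, the efficiency report is the output of SLURM's `seff` utility (part of SLURM's contribs, so it may not be installed on every cluster); the job ID below is just a placeholder:

```
# Print CPU and memory efficiency for a completed job (placeholder job ID)
seff 1234567
```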
Update: Managed to successfully run the clustering using a full `3 TB` node with 112 threads. The SLURM efficiency output is below: ``` Job ID: 2976046 Cluster: iris User/Group: sbusi/clusterusers...
I can confirm that @timregan's suggested method works!