pb-metagenomics-tools

Error: Unable to allocate 30.9 TiB for an array with shape (2912965, 2912965)

Open · CaroleBelliardo opened this issue 5 months ago · 1 comment

Hi, I tried to run your workflow on a dataset that may be a bit large: 2×100 GB of reads and 29 GB of contigs. The job fails with the following error message:

2024-09-13 03:37:18 bigben SemiBin[2634920] INFO Binning for short_read
2024-09-13 03:37:24 bigben SemiBin[2634920] INFO Did not detect GPU, using CPU.
2024-09-13 03:38:23 bigben SemiBin[2634920] INFO Generating training data...
2024-09-13 03:38:38 bigben SemiBin[2634920] INFO Calculating coverage for every sample.
2024-09-13 03:38:38 bigben SemiBin[2635078] DEBUG Processing `/kwak/hub/25_cbelliardo/MetaNema_LRmg/HiFi-MAG-Pipeline_vSR/2-bam/salade_I_longreads.bam`
2024-09-13 06:05:53 bigben SemiBin[2634920] INFO Processed:/kwak/hub/25_cbelliardo/MetaNema_LRmg/HiFi-MAG-Pipeline_vSR/2-bam/salade_I_longreads.bam
2024-09-13 06:05:54 bigben SemiBin[2634920] DEBUG Start generating kmer features from fasta file.
2024-09-13 09:06:14 bigben SemiBin[2634920] INFO Start binning.
2024-09-13 10:45:24 bigben SemiBin[2634920] DEBUG Calculating depth matrix.
Traceback (most recent call last):
  File "/kwak/hub/25_cbelliardo/MetaNema_LRmg/HiFi-MAG-Pipeline_vSR/.snakemake/conda/43e6297b5052c3e5d1df94ac7f19edd3_/bin/SemiBin", line 12, in <module>
    sys.exit(main1())
             ^^^^^^^
  File "/kwak/hub/25_cbelliardo/MetaNema_LRmg/HiFi-MAG-Pipeline_vSR/.snakemake/conda/43e6297b5052c3e5d1df94ac7f19edd3_/lib/python3.12/site-packages/SemiBin/main.py", line 1482, in main1
    main2(args, is_semibin2=False)
  File "/kwak/hub/25_cbelliardo/MetaNema_LRmg/HiFi-MAG-Pipeline_vSR/.snakemake/conda/43e6297b5052c3e5d1df94ac7f19edd3_/lib/python3.12/site-packages/SemiBin/main.py", line 1455, in main2
    single_easy_binning(
  File "/kwak/hub/25_cbelliardo/MetaNema_LRmg/HiFi-MAG-Pipeline_vSR/.snakemake/conda/43e6297b5052c3e5d1df94ac7f19edd3_/lib/python3.12/site-packages/SemiBin/main.py", line 1181, in single_easy_binning
    binning(**binning_kwargs)
  File "/kwak/hub/25_cbelliardo/MetaNema_LRmg/HiFi-MAG-Pipeline_vSR/.snakemake/conda/43e6297b5052c3e5d1df94ac7f19edd3_/lib/python3.12/site-packages/SemiBin/main.py", line 1089, in binning
    cluster(
  File "/kwak/hub/25_cbelliardo/MetaNema_LRmg/HiFi-MAG-Pipeline_vSR/.snakemake/conda/43e6297b5052c3e5d1df94ac7f19edd3_/lib/python3.12/site-packages/SemiBin/cluster.py", line 271, in cluster
    embedding, contig_labels = run_embed_infomap(logger, model, data,
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/kwak/hub/25_cbelliardo/MetaNema_LRmg/HiFi-MAG-Pipeline_vSR/.snakemake/conda/43e6297b5052c3e5d1df94ac7f19edd3_/lib/python3.12/site-packages/SemiBin/cluster.py", line 147, in run_embed_infomap
    kl = cal_kl(depth[:,2*k], depth[:, 2*k + 1])
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/kwak/hub/25_cbelliardo/MetaNema_LRmg/HiFi-MAG-Pipeline_vSR/.snakemake/conda/43e6297b5052c3e5d1df94ac7f19edd3_/lib/python3.12/site-packages/SemiBin/cluster.py", line 54, in cal_kl
    res = ne.evaluate(
          ^^^^^^^^^^^^
  File "/kwak/hub/25_cbelliardo/MetaNema_LRmg/HiFi-MAG-Pipeline_vSR/.snakemake/conda/43e6297b5052c3e5d1df94ac7f19edd3_/lib/python3.12/site-packages/numexpr/necompiler.py", line 974, in evaluate
    return re_evaluate(local_dict=local_dict, global_dict=global_dict, _frame_depth=_frame_depth)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/kwak/hub/25_cbelliardo/MetaNema_LRmg/HiFi-MAG-Pipeline_vSR/.snakemake/conda/43e6297b5052c3e5d1df94ac7f19edd3_/lib/python3.12/site-packages/numexpr/necompiler.py", line 1006, in re_evaluate
    return compiled_ex(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
numpy.core._exceptions._ArrayMemoryError: Unable to allocate 30.9 TiB for an array with shape (2912965, 2912965) and data type float32
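
For reference, the requested allocation matches a dense square float32 matrix with one row and column per contig (presumably the depth/distance matrix SemiBin builds during clustering; that is my reading of the traceback, not something I have confirmed in the code). With the 2,912,965 dimension reported in the error, the size works out to exactly the ~30.9 TiB reported:

n_contigs = 2_912_965               # dimension reported in the error above
bytes_needed = n_contigs ** 2 * 4   # float32 takes 4 bytes per element
print(bytes_needed / 2 ** 40)       # ~30.9, i.e. 30.9 TiB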

The error comes from SemiBin's memory usage, but the pipeline does not expose a parameter to tune it.
I hope you have an idea of how I can handle this. I appreciate your help. Carole
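
In case it helps, the only workaround I can think of on my side is to filter very short contigs out of the assembly before binning, so the contig count (and hence the matrix dimension) shrinks. Below is a rough sketch; the filenames and the 2500 bp cutoff are placeholders of mine, and I have not checked whether dropping contigs interferes with the rest of the pipeline:

def filter_contigs(in_fasta, out_fasta, min_len):
    """Stream a FASTA file, keeping only records with length >= min_len."""
    with open(in_fasta) as fin, open(out_fasta, "w") as fout:
        header, seq = None, []
        def flush():
            # Write the buffered record if it passes the length cutoff
            if header is not None and sum(len(s) for s in seq) >= min_len:
                fout.write(header + "\n" + "".join(seq) + "\n")
        for line in fin:
            line = line.rstrip("\n")
            if line.startswith(">"):
                flush()            # emit the previous record
                header, seq = line, []
            else:
                seq.append(line)
        flush()                    # emit the final record

# Placeholder paths and cutoff, not pipeline settings
filter_contigs("contigs.fa", "contigs.filtered.fa", min_len=2500)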

CaroleBelliardo · Sep 16 '24 11:09