ExpansionHunterDenovo
Merging requires a lot of memory. Is there a way to split the JSON output?
Hi all, Thank you for ExpansionHunterDeNovo,
I'm currently testing ExpansionHunterDeNovo on a set of ~1500 WGS cases/controls. Everything works fine, but the merging step takes too much memory and my jobs are usually killed by the cluster manager.
Is there a way to split the JSON to reduce the required memory? Splitting by pattern? Splitting by chromosome?
Thank you for your help.
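To illustrate the kind of split asked about above: the sketch below groups repeat records into per-chromosome buckets that could then be merged independently. This is a hypothetical illustration only; the record layout (a list of dicts with a "region" field of the form "chrN:start-end") is an assumption, not the actual ExpansionHunterDeNovo profile format.

```python
import json
from collections import defaultdict

def split_by_chromosome(records):
    """Group records into per-chromosome buckets so each bucket can be
    processed on its own. Assumes each record carries a 'region' string
    of the form 'chrN:start-end' (hypothetical layout)."""
    buckets = defaultdict(list)
    for rec in records:
        chrom = rec["region"].split(":", 1)[0]
        buckets[chrom].append(rec)
    return dict(buckets)

# Toy records standing in for entries of a large JSON profile
records = [
    {"region": "chr1:100-200", "motif": "CAG"},
    {"region": "chr2:50-80", "motif": "AT"},
    {"region": "chr1:900-950", "motif": "GGC"},
]
buckets = split_by_chromosome(records)
for chrom, recs in sorted(buckets.items()):
    print(chrom, len(recs))  # e.g. chr1 has two records, chr2 has one
```

Each bucket could be written out with `json.dump` and merged separately, trading one large merge for several smaller ones.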
Thank you for using the program!
I suspect that dinucleotide repeats are causing the issue. So splitting the analysis by the repeat unit length might be the way to go. Could you please run this Linux binary with "--min-unit-len" set to 3?
I think discarding all dinucleotide repeats from the downstream analysis may be reasonable anyway because (a) if there are very many dinucleotide repeats, they will dominate the analysis and make it much harder to detect expansions with longer motifs and (b) the vast majority of known pathogenic repeats have motifs of size 3 and longer.
We will consider changing the default value of "--min-unit-len" to 3 in the next release.
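If the same cut needs to be applied downstream instead of rerunning with the new binary, dinucleotide records can be dropped from a JSON profile before merging. A minimal sketch, assuming a layout where top-level keys are the repeat motifs; this is not the documented ExpansionHunterDeNovo profile format, just an illustration of the filter.

```python
import json

def drop_short_motifs(profile, min_unit_len=3):
    """Return a copy of the profile keeping only records whose repeat
    motif is at least min_unit_len bases long. Assumes top-level keys
    are the repeat motifs (hypothetical layout)."""
    return {motif: rec for motif, rec in profile.items()
            if len(motif) >= min_unit_len}

# Self-contained demo with toy data (not real EHdn output)
profile = {
    "AT": {"count": 120},    # dinucleotide: dropped
    "CAG": {"count": 15},    # trinucleotide: kept
    "GGCCTG": {"count": 2},  # hexanucleotide: kept
}
filtered = drop_short_motifs(profile)
print(json.dumps(filtered, sort_keys=True))
```

This mirrors what "--min-unit-len 3" does at profiling time, but applied after the fact.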
@egor-dolzhenko thank you very much! I won't be able to use your new version before next week.
Sounds good @lindenb! Please let me know if there are any issues with the new version.
@egor-dolzhenko
Hi, thank you for the new binary. I tested it with ~400 WGS samples and "--min-unit-len 3". Computing the merge was much faster! I've forwarded the results to my colleague, a biostatistician, but at first glance I no longer see those low p-values that looked like false positives.
Glad to hear it @lindenb! Thank you for the update.