Dave Larson
Yes, an index for your BAM file is required. I believe that if you ran lumpyexpress with -g, then SVTyper is run for you and you shouldn't need to run it...
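For reference, a minimal sketch of building that index with pysam (the `sample.bam` path is just a placeholder, not a file from this thread):

```python
import pysam

# Creates sample.bam.bai next to the BAM, equivalent to `samtools index sample.bam`.
# The BAM must be coordinate-sorted for indexing to succeed.
pysam.index("sample.bam")
```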
The error seems to indicate that svtyper is having trouble finding enough paired-end reads to build an insert size distribution. Since you're obviously aligning paired reads, I'd check that the...
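As one illustrative check (not necessarily the one cut off above), here is a pysam sketch with a placeholder path and an arbitrary 100,000-read sample size: scan the head of the file and confirm that reads are flagged as paired and that proper pairs have sensible template lengths.

```python
import itertools
import pysam

# Placeholder path; substitute the BAM that svtyper is complaining about.
with pysam.AlignmentFile("sample.bam", "rb") as bam:
    paired = 0
    tlens = []
    # Inspect only the first 100,000 records for a quick sanity check.
    for read in itertools.islice(bam, 100_000):
        if read.is_paired:
            paired += 1
        if read.is_proper_pair and read.template_length > 0:
            tlens.append(read.template_length)

print(f"paired reads seen: {paired}")
if tlens:
    tlens.sort()
    print(f"median positive template length: {tlens[len(tlens) // 2]}")
else:
    print("no proper pairs with a positive template length were found")
```

If the paired count comes back as zero, or the template lengths look degenerate, that would explain svtyper failing to build an insert size distribution.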
My experience is generally on large cohorts as opposed to single samples. Perhaps @brentp would have some advice on what the Quinlan lab is doing for individuals or small cohorts....
No worries about the delay. If I limit the memory to ~12 GB and a single core, one of the subjobs running GenerateSVCandidates crashes. When running the same sample through...
Changing the input to a BAM instead of a CRAM dropped the memory usage precipitously (to the point that I almost don't believe it), down to 1.1 GB.
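For anyone reproducing this, the conversion itself is a straightforward stream-through; a minimal pysam sketch with placeholder paths (`in.cram`, `ref.fa`, `out.bam`) standing in for the real files:

```python
import pysam

# Placeholder paths: the CRAM, the reference it was compressed against,
# and the BAM to write out.
with pysam.AlignmentFile("in.cram", "rc", reference_filename="ref.fa") as cram, \
     pysam.AlignmentFile("out.bam", "wb", header=cram.header) as bam:
    for read in cram:
        bam.write(read)

# Index the new BAM so downstream tools can use it directly.
pysam.index("out.bam")
```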
That looks promising. I'd be happy to test it if you're able to point me to a branch.
I compiled and ran this branch. The samtools in libexec does report as v1.9. Memory usage was ~72 GB as reported by LSF, so this doesn't seem to have changed the memory usage.
Sorry about the delayed response, @ctsa. This CRAM was created with a pipeline conforming to the guidelines here: https://github.com/CCDG/Pipeline-Standardization/blob/master/PipelineStandard.md. See also the pre-print here: https://www.biorxiv.org/content/early/2018/04/10/269316. This is not my pipeline,...
I think I may have discovered the issue here. The CRAM(s) in question appear to be missing the `@HD` line of the header. At least in one case, adding in...
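A minimal sketch of what adding the `@HD` line can look like with pysam; the VN/SO values and the file paths below are assumptions for illustration, not values taken from the actual CRAMs:

```python
import pysam

# Placeholder paths; the VN and SO values are illustrative assumptions.
with pysam.AlignmentFile("in.cram", "rc", reference_filename="ref.fa") as inp:
    header = inp.header.to_dict()
    if "HD" not in header:
        # Insert the missing top-level header line.
        header["HD"] = {"VN": "1.6", "SO": "coordinate"}
    with pysam.AlignmentFile("fixed.cram", "wc", header=header,
                             reference_filename="ref.fa") as out:
        for read in inp:
            out.write(read)
```

In practice `samtools reheader` is the lighter-weight way to make the same edit; the pysam version just spells out where the missing line goes.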
@ctsa - I'm going to resolve this since I'm fairly certain that the missing header line is the source of my issues and since converting to BAM or adding the...