
Missing Fastq reads

Open · SidWeng opened this issue 1 year ago · 1 comment

adam-core version: 0.33.0 Spark version: 3.3.0 Scala version: 2.12

I read a FASTQ BGZ file with the following code:

spark.sparkContext.newAPIHadoopFile(url, classOf[SingleFastqInputFormat], classOf[Void], classOf[Text], conf)

It works fine when the file is about 70 GB. However, when the file size is about 170 GB, some reads are missing (and the missing reads are well-formed). The missing reads can be found if I read the file line by line instead:

spark.sparkContext.newAPIHadoopFile(url, classOf[TextInputFormat], classOf[Void], classOf[Text], conf)

Is there any limitation of SingleFastqInputFormat, or any advice that could help me debug this issue?

SidWeng avatar Mar 24 '23 12:03 SidWeng

Hello @SidWeng!

I have occasionally seen issues with gzipped/BGZF FASTQ input before, although typically with paired reads, where ADAM complains that the two files do not contain the same number of reads. If you know of any publicly available datasets that demonstrate this issue, I can dig into it deeper.

As a workaround, you may be able to convert to unaligned BAM format first and then read into ADAM.

Another workaround would be to convert the FASTQ into CSV or tab-delimited format and then use Spark SQL to read the text file and convert it into ADAM format, something like:

import org.bdgenomics.adam.ds.ADAMContext._

val sql = """
SELECT
  _c0 AS name,
  CAST(NULL AS STRING) AS description,
  'DNA' AS alphabet,
  upper(_c1) AS sequence,
  length(_c1) AS length,
  _c2 AS qualityScores,
  CAST(NULL AS STRING) AS sampleId,
  CAST(NULL AS MAP<STRING,STRING>) AS attributes
FROM
  reads
"""

val df = spark.read.option("delimiter", "\t").csv(inputPath)
df.createOrReplaceTempView("reads")
val readsDf = spark.sql(sql)
val reads = sc.loadReads(readsDf)
reads.saveAsParquet(outputPath)
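Note that the Spark SQL approach above assumes the FASTQ has already been flattened so that each four-line record (name, sequence, separator, quality) becomes a single tab-delimited line of name, sequence, and quality scores, matching the `_c0`, `_c1`, `_c2` columns in the SQL. A minimal sketch of that flattening step in plain Scala (`fastqToTsv` is a hypothetical helper, not part of ADAM):

```scala
// Flatten four-line FASTQ records into tab-delimited lines:
// name<TAB>sequence<TAB>qualityScores
// (these become columns _c0, _c1, _c2 when read back with the tab delimiter)
def fastqToTsv(lines: Seq[String]): Seq[String] =
  lines
    .grouped(4)  // one FASTQ record per group of four lines
    .collect { case Seq(name, seq, _, qual) =>
      s"${name.stripPrefix("@")}\t$seq\t$qual"
    }
    .toSeq
```

On a real dataset this transformation would of course run as a Spark job over the input lines rather than over an in-memory `Seq`.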

heuermh avatar Apr 03 '23 15:04 heuermh