
Truncated FNAs on non-dehydrated downloads of large datasets, please make it more clear that non-dehydrated downloads are dangerous

Open · joeweaver opened this issue 1 year ago · 1 comment

When using datasets to download a large collection of genomes, many of the associated .fna files are silently truncated. I was fortunate in that: 1) a few of the files were truncated so that the final line was a FASTA header, and 2) one of the tools in my pipeline complained about it. Otherwise, the pipeline would have finished with no obvious failures.

I'm using datasets 16.3.0 on Ubuntu 20.04. Archives are extracted with unzip 6.0. My network connection is fine and I have plenty of disk space.

Running `datasets download genome taxon 473814 --include genome,seq-report` reliably reproduces the issue, albeit with varying numbers of truncations. I get about 80-100 truncated .fna files out of the 1790 available.

Using the dehydrate/rehydrate workflow so far has avoided the issue.

Dehydrate is clearly signposted in the documentation, but it reads as an 'optional, if you have issues with large downloads' sort of approach. Further, the issues that come to mind are things like failed downloads or disk/network capacity (i.e. problems that would be immediately obvious), not 'seemingly correct downloads that may silently produce erroneous results'.

Right now, the 'obvious, simple' first approach that I, and probably many other users, try when testing out datasets is a footgun; moreover, it's a footgun where users may not even know they've been shot.

If I could suggest a soft fix: the documentation should make it very clear that downloading any large set of genomes without dehydration (where 'large' is probably smaller than you think) can yield truncated FNAs that may very well make it through your pipeline. I'd also suggest that running the command without `--dehydrated` for anything above some conservative number of genomes should, at the very least, trigger a strong warning to the user.

I know from other issues (#302) that adding MD5 checksums as part of the process is being considered. I strongly support that notion.

In the meantime, I've written a Python tool that checks each FNA against the expected total sequence length and number of sequences listed in the assembly data report (which appears to contain valid data in all my tests). This works for both directly downloaded and rehydrated datasets.
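For anyone wanting a similar sanity check, here is a minimal sketch of the idea (this is not the actual tool, and the expected counts must be pulled from the assembly data report yourself; the helper names are hypothetical):

```python
# Sketch: compare observed sequence count and total residue count in an
# .fna file against expected values taken from the assembly data report.
# Hypothetical helpers, not joeweaver's actual tool.

def fasta_stats(path):
    """Return (num_sequences, total_length) for a FASTA file."""
    num_seqs = 0
    total_len = 0
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                num_seqs += 1
            else:
                total_len += len(line)
    return num_seqs, total_len

def check_fna(path, expected_seqs, expected_len):
    """True if the file matches the expected counts (i.e. is not truncated)."""
    num_seqs, total_len = fasta_stats(path)
    return num_seqs == expected_seqs and total_len == expected_len
```

A truncated file will typically show up with too few sequences, too few total bases, or both.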

I see that datasets 16.8.1 is the most recent version and I am unsure if the bug persists (though it feels like it's at the server end). I would update and try to reproduce, but my machine is currently unavailable for that.

FWIW, I think datasets is otherwise a wonderful advance and a nice quality of life improvement.

joeweaver avatar Mar 14 '24 11:03 joeweaver

Hi joeweaver,

Thanks for the suggestions, and apologies for the problem with truncated files and any reliability concerns it has created. We have plans in the coming weeks to investigate and address this problem, including the addition of MD5 checksums to ensure data integrity. We will keep this issue open and provide updates until it is resolved.

Nuala

Nuala A. O'Leary, PhD Product Owner, NCBI Datasets National Center for Biotechnology Information, NLM, NIH, DHHS

olearyna avatar Mar 14 '24 15:03 olearyna

Hi @joeweaver,

Thanks again for opening this issue and for your feedback. We have made some updates to the datasets command-line tool that will help ensure data integrity.

For non-dehydrated downloads, if the download is incomplete, datasets will now report that the zip file is invalid, which indicates that you should retry the download.
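As a quick local check independent of the tool itself, Python's standard-library zipfile module can detect a truncated or corrupt archive before extraction (a generic sketch, not part of datasets):

```python
import zipfile

def zip_is_valid(path):
    """True if the archive opens and every member passes its CRC check."""
    try:
        with zipfile.ZipFile(path) as zf:
            # testzip() returns the name of the first bad member, or None.
            return zf.testzip() is None
    except zipfile.BadZipFile:
        return False
```

A truncated download usually loses the central directory at the end of the zip, so it fails to open at all.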

When using rehydration, you can verify the integrity of your download using MD5 checksums that are now included in the md5sum.txt file in the data package.
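For a scripted check, the checksums can be verified with `md5sum -c md5sum.txt` on Linux, or with a few lines of Python (a sketch assuming the conventional two-column md5sum format: hex digest, whitespace, relative path):

```python
import hashlib
from pathlib import Path

def verify_md5sums(md5_file):
    """Yield (relative_path, ok) for each entry in an md5sum.txt-style file.

    Assumes each line is '<hex digest> <path relative to the file's directory>'.
    """
    base = Path(md5_file).parent
    for line in Path(md5_file).read_text().splitlines():
        if not line.strip():
            continue
        digest, name = line.split(maxsplit=1)
        actual = hashlib.md5((base / name).read_bytes()).hexdigest()
        yield name, actual == digest
```

Any `(path, False)` result means the corresponding file should be re-downloaded or re-rehydrated.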

For more details, see our documentation on file validation.

Best, Eric

ericcox1 avatar Apr 03 '25 12:04 ericcox1