Missing Publication and Dataset Resources
Hi,
I executed `python corpus.py corpus.ttl` and then `python download_corpus_resources.py` to download the corpus, but I got the following output. Is this the expected output? It looks like some publications cannot be downloaded.
Number of records in the corpus: 586
Number of research publications: 480
Successfully downloaded 474 pdf files.
Missing publication resources: {'012df4a72af52b038483', 'dca54974ff51a5f7f8ab', '5f48a343cb75195cd646', 'c8f9b19b39e34d98a557', '988428e18884e28e037c', '42c2755ec0f983870e62'}
Number of datasets: 106
Successfully downloaded 101 resource files.
Missing dataset resources: {'875ffb2b04b1392cd1f2', 'fe338b5b2f3f6b0d11a4', '53ca68ba0ded95220662', '33b1ce039c67a6658644', '379ff5f518e664ba2353'}
I checked the publication with id "012df4a72af52b038483", and the link does not look broken. Here is the link I got from corpus.jsonld: https://aasldpubs.onlinelibrary.wiley.com/doi/pdf/10.1002/hep.23220
@ceteri @philipskokoh Do you know why this happens?
Thanks
Similar error on my side:
Number of records in the corpus: 586
Number of research publications: 480
Successfully downloaded 474 pdf files.
Missing publication resources: {'c8f9b19b39e34d98a557', '988428e18884e28e037c', 'dca54974ff51a5f7f8ab', '42c2755ec0f983870e62', '5f48a343cb75195cd646', '012df4a72af52b038483'}
Number of datasets: 106
Successfully downloaded 104 resource files.
Missing dataset resources: {'fe338b5b2f3f6b0d11a4', '33b1ce039c67a6658644'}
I think the reason is that Wiley inserts the src link dynamically via client-side JavaScript.
But requests.get(uri) just fetches the raw HTML without executing the JavaScript, which is why soup.find('embed') finds nothing and the ['src'] lookup fails.
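Roughly, the failing pattern looks like this (a sketch of what I describe above, not the script's exact code), using the Wiley link from the first post:

```python
import requests
from bs4 import BeautifulSoup

uri = "https://aasldpubs.onlinelibrary.wiley.com/doi/pdf/10.1002/hep.23220"
html = requests.get(uri).text               # raw HTML only; no JavaScript runs
soup = BeautifulSoup(html, "html.parser")
embed = soup.find("embed")                  # the <embed> tag is injected later by JS
print(embed)                                # None -- so embed['src'] would raise TypeError
```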
@HaritzPuerto and @tong-zeng: It seems that onlinelibrary.wiley.com changed the format of their HTML files. Let me look into the new version. For datasets, it's true that the script can't download some of the resources.
Yes, @tong-zeng is correct. They now inject the src link via JavaScript instead of placing it inside the <embed> tag.
@HaritzPuerto, @tong-zeng: I found that onlinelibrary.wiley.com uses doi/pdfdirect/... to serve the PDF file. Hopefully the link is static across different clients. Could you try my patched code in the forked repo:
https://github.com/philipskokoh/rclc
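In essence, the workaround amounts to rewriting Wiley's doi/pdf URL to doi/pdfdirect and fetching that directly. A sketch of the idea (the patched repo may differ in details):

```python
import requests

def wiley_pdfdirect(uri):
    # Rewrite .../doi/pdf/<DOI> to .../doi/pdfdirect/<DOI>
    return uri.replace("/doi/pdf/", "/doi/pdfdirect/")

uri = "https://aasldpubs.onlinelibrary.wiley.com/doi/pdf/10.1002/hep.23220"
resp = requests.get(wiley_pdfdirect(uri))
if resp.ok and resp.headers.get("Content-Type", "").startswith("application/pdf"):
    with open("012df4a72af52b038483.pdf", "wb") as f:
        f.write(resp.content)
```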
For 'dca54974ff51a5f7f8ab',
the open-access version comes from www.sciencedirect.com, and it seems my way of downloading it gets rejected by ScienceDirect. I am afraid we need to download this particular resource manually.
If you have a better idea for collecting this open-access resource, feel free to suggest it.
@philipskokoh Thank you. I agree: for resources that are difficult to download, we can just fetch them manually if there are not too many. Otherwise, would you consider removing them from the resources list?
The corpus is growing, new publications will be added, and I don't know what error responses may occur while downloading them. I'll update the script accordingly as publications are added. I prefer to try downloading all resources and report every failed download; that makes it easy for users to see exactly what is missing.
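For reference, the report-all-failures pattern amounts to something like this (all names here are hypothetical, not the script's actual functions):

```python
import requests

publications = [
    {"id": "012df4a72af52b038483",
     "uri": "https://aasldpubs.onlinelibrary.wiley.com/doi/pdf/10.1002/hep.23220"},
    # ... more records from corpus.jsonld
]

def download_pdf(uri, pub_id):
    # hypothetical helper: fetch and save, raising on any failure
    resp = requests.get(uri)
    resp.raise_for_status()
    with open(pub_id + ".pdf", "wb") as f:
        f.write(resp.content)

missing_pubs = set()
for pub in publications:
    try:
        download_pdf(pub["uri"], pub["id"])
    except Exception:
        missing_pubs.add(pub["id"])   # keep going; report every failure at the end

print(f"Successfully downloaded {len(publications) - len(missing_pubs)} pdf files.")
print(f"Missing publication resources: {missing_pubs}")
```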
I guess for now we can just manually download these particular publications. It is not a big issue. But as Philips said, new publications will be added, and I guess (and hope XD) most of them won't have this problem.
Thank you all for tracking this problem with publication PDFs!
Looking at those publication URLs, the problems seem to involve both Wiley and Elsevier, for example using JavaScript (for session tokens?) on their PDF downloads. That prevents the use of libraries such as requests, although we could eventually use selenium, or longer-term perhaps a service such as diffbot.
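For reference, the selenium route could look roughly like this (an untested sketch; the embed-tag lookup and cookie handoff are assumptions, not something we've implemented):

```python
import requests
from selenium import webdriver

def fetch_pdf_via_selenium(uri):
    driver = webdriver.Chrome()   # assumes chromedriver is installed
    try:
        driver.get(uri)           # lets the page's JavaScript run (session tokens, etc.)
        src = driver.find_element_by_tag_name("embed").get_attribute("src")
        cookies = {c["name"]: c["value"] for c in driver.get_cookies()}
        return requests.get(src, cookies=cookies).content   # reuse the browser session
    finally:
        driver.quit()
```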
For now, how about this -- as new publications get added to the corpus, we can:
- avoid using those sources (Wiley, Elsevier, SSRN) for open access PDFs
- run the download script prior to each corpus version release, and include the output in the release notes
NYU is still working to get a public S3 bucket for us to use with the competition. I may just create one for now, then transfer ownership to the NYU account once they have the permissions worked out. In any case, if we had the PDFs in a shareable storage bucket, this would be a non-issue.
The dataset resources will be more difficult to resolve. We're still trying to identify consistent URLs for each dataset.
How about, if a dataset is missing a public URL, that could be considered a warning instead of an error?
What counts as missing a public URL? Are we treating Wiley, Elsevier, and SSRN as non-public URLs? I can skip these domains (and print a warning message) in the download script; a sketch is below.
Selenium and diffbot are viable solutions if we have a large number of sources from Wiley, Elsevier, and SSRN.
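Concretely, the skip-and-warn check could be as simple as this (the domain list is just the three mentioned above, and the loop shown is illustrative):

```python
from urllib.parse import urlparse

SKIP_DOMAINS = ("onlinelibrary.wiley.com", "sciencedirect.com", "ssrn.com")

def should_skip(uri):
    host = urlparse(uri).netloc
    return any(host == d or host.endswith("." + d) for d in SKIP_DOMAINS)

# inside the download loop:
# if should_skip(uri):
#     print(f"WARNING: skipping {uri} -- download this resource manually")
#     continue
```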
We're getting closer. This still needs work to download from specific sites more effectively. See the error log in https://github.com/Coleridge-Initiative/rclc/blob/master/errors.txt
Some of those errors will be handled by manual override in RCHuman.
Will assign among our NYU-CI team:
- Troubleshoot the PDF download process, based on the observed errors