Specify output file name in bulk mode
I am using dezoomify-rs to create page images of multi-page IIIF documents. I can put the info.json reference for each page into a file and then dezoomify all the pages, one after another, and I can prefix each output file name with a useful label. Thus, this bulk.txt file:
https://images.lib.cam.ac.uk/iiif/MS-GG-00004-00027-00001-000-00279.jp2/info.json
https://images.lib.cam.ac.uk/iiif/MS-GG-00004-00027-00001-000-00280.jp2/info.json
https://images.lib.cam.ac.uk/iiif/MS-GG-00004-00027-00001-000-00281.jp2/info.json
When used with the command
dezoomify-rs --bulk bulk.txt Gg.jpg
this creates a sequence of files named Gg_0001.jpg, Gg_0002.jpg and Gg_0003.jpg. Beautiful! However, as I usually have around 500 pages in each document, it would be very nice if I could specify a file name for each output file. The three pages above correspond to pages 134r, 134v and 135r of this document, so it would be very neat if the bulk text file allowed me to specify the output file name for each input URL, thus:
https://images.lib.cam.ac.uk/iiif/MS-GG-00004-00027-00001-000-00279.jp2/info.json Gg-134r.jpg
https://images.lib.cam.ac.uk/iiif/MS-GG-00004-00027-00001-000-00280.jp2/info.json Gg-134v.jpg
https://images.lib.cam.ac.uk/iiif/MS-GG-00004-00027-00001-000-00281.jp2/info.json Gg-135r.jpg
This would be so helpful. I have some 29,000 pages in 88 documents, and having to rename every output file to match its page name would be rather painful.
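In the meantime I can approximate this with a small shell wrapper that reads the proposed two-column file and runs dezoomify-rs once per page. This is only a sketch: it assumes dezoomify-rs accepts a single info.json URL and an output path as positional arguments, and the two-column pages.txt layout is my own convention, not something dezoomify-rs parses itself.

#!/usr/bin/env bash
# Read "URL output-name" pairs and call dezoomify-rs once per line.
# pages.txt is a hypothetical file in the two-column format proposed above.
set -euo pipefail
while read -r url outfile; do
    [[ -z $url || $url == \#* ]] && continue   # skip blank and commented lines
    dezoomify-rs "$url" "$outfile"
done < pages.txt

This loses bulk mode's automatic numbering, but every page lands directly under its final name.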
I believe this has now been implemented, along with the ability to download all the images from an IIIF manifest, and much more. Wonderful!
Thank you very much for implementing this feature natively, so that us dummies don't have to figure out how to write batch files to accomplish the task. Dezoomify's siblings, yt-dlp and gallery-dl, can also bulk-download from text files with one URL per line; it just takes a bit of research to discover this, since they too are console applications. Gallery-dl also prefixes "# " to each URL after it has been successfully harvested, and can be configured to save all harvested URLs to a file to prevent accidental re-downloading. Possibly a (customizable) delay could be added between the processing of each URL; websites might get suspicious if you download multiple files with no delay between them.
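Until something like that lands, both behaviours can be approximated around a plain one-URL-per-line bulk.txt with a few lines of shell. A rough sketch, not dezoomify-rs functionality: the 10-second pause, the "# " marker and the output name derived from the URL are all arbitrary local choices, and the in-place edit assumes GNU sed.

#!/usr/bin/env bash
# Process bulk.txt one URL at a time, pause between requests, and
# comment out each finished line with "# " (gallery-dl style) so an
# interrupted run can be restarted without re-downloading anything.
set -euo pipefail
mapfile -t urls < bulk.txt                       # snapshot so we can edit the file mid-run
for url in "${urls[@]}"; do
    [[ -z $url || $url == \#* ]] && continue     # blank or already harvested
    name=$(basename "${url%/info.json}")         # e.g. MS-GG-...-00279.jp2
    dezoomify-rs "$url" "${name%.jp2}.jpg"
    sed -i "s|^$url\$|# $url|" bulk.txt          # mark the URL as done
    sleep 10                                     # polite delay between pages
done

Because the file is read into an array up front, rewriting bulk.txt mid-run is safe, and a re-run simply skips everything already marked.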