
Add parallelization and caching to image download

dhgelling opened this issue 3 years ago

In my usage, a big speed bottleneck is the sequential downloading of images from an article when finding the top image. While the current implementation attempts to download only partial images where possible, a new session is used for each image, and images are not downloaded in parallel, even though each download can take up to a second depending on the website being scraped. What's more, when scraping multiple articles from the same website, the same images are downloaded multiple times, and the chosen top image is requested twice: once when searching for the top image, and once when checking its requirements.
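
For illustration, here is a minimal sketch of the sequential pattern described above; the function name and structure are hypothetical and not newspaper's actual internals:

```python
# Hypothetical sketch of the current behaviour: each candidate image gets its
# own Session and is fetched one after another, so total time is roughly the
# sum of all round trips.
import requests

def fetch_images_sequentially(image_urls, timeout=7):
    results = {}
    for url in image_urls:
        # Fresh session per image means no connection reuse between downloads.
        with requests.Session() as session:
            try:
                resp = session.get(url, timeout=timeout)
                results[url] = resp.content
            except requests.RequestException:
                results[url] = None
    return results
```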

This PR addresses this by starting multiple image downloads in parallel and caching downloaded images for up to 5 hours. With these changes, the time taken to scrape one article can be reduced from over 30 seconds to 2-3 seconds, and scraping multiple articles from the same site downloads fewer images.
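
A minimal sketch of that approach, assuming only requests and the standard library; the names, worker count, and cache layout are illustrative rather than the PR's actual code:

```python
# Download candidate images concurrently and reuse recent downloads via a
# simple in-memory TTL cache (5 hours, matching the window described above).
import time
import concurrent.futures
import requests

_CACHE = {}                 # url -> (fetched_at, bytes)
_CACHE_TTL = 5 * 60 * 60    # 5 hours

def _fetch(session, url, timeout=7):
    now = time.time()
    cached = _CACHE.get(url)
    if cached and now - cached[0] < _CACHE_TTL:
        return cached[1]                     # reuse a recent download
    resp = session.get(url, timeout=timeout)
    resp.raise_for_status()
    _CACHE[url] = (now, resp.content)
    return resp.content

def fetch_images_in_parallel(image_urls, max_workers=10):
    """Download all candidate images concurrently, returning {url: bytes or None}."""
    results = {}
    with requests.Session() as session, \
            concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(_fetch, session, url): url for url in image_urls}
        for future in concurrent.futures.as_completed(futures):
            url = futures[future]
            try:
                results[url] = future.result()
            except requests.RequestException:
                results[url] = None
    return results
```

Sharing one Session across the thread pool also avoids the per-image session overhead mentioned above, since connections to the same host can be reused.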

The downside is that streamed (partial) downloading does not work with the parallel implementation. Streaming did not improve download time by much, however, so the main cost here is the extra data transferred.
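
For context, a sketch of the kind of partial "streaming" download being traded away; the function name and chunk size are illustrative, not the library's actual code:

```python
# With stream=True only the first chunk (often enough for the image header
# and dimensions) is read before the connection is closed, rather than the
# whole file.
import requests

def fetch_image_head(url, chunk_size=1024, timeout=7):
    with requests.get(url, stream=True, timeout=timeout) as resp:
        resp.raise_for_status()
        return next(resp.iter_content(chunk_size=chunk_size), b"")
```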

dhgelling · Apr 30 '21 12:04