Why is it ok to stop downloading a JPEG image after only a few KB?
According to the example, it's ok to download only a few KB in order to get the image dimensions. But according to this Stack Overflow answer, in order to get the JPEG width/height:
"You have to scan through the JPEG file, parsing each segment, until you find the segment with the information in it that you want. This is described in the Wikipedia article."
Am I missing something, or does your library have a workaround?
They're both right. The answer doesn't say you have to parse the entire file, only that you have to parse until you find the segment you want. In a typical JPEG that segment sits near the start of the file, well within the first few KB.
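For intuition, here's a minimal hand-rolled sketch of that scan (this is not the library's actual code, just an illustration of the technique): walk the buffer segment by segment until a SOFn marker turns up, then read the height and width from it. If the buffer ends before a SOF marker is found, it throws, which is exactly the "not enough bytes yet" signal a streaming caller needs.

```javascript
// Sketch of scanning JPEG segments for the dimensions.
// A JPEG is: SOI marker (0xFFD8), then a sequence of segments, each
// starting with a 2-byte marker followed by a 2-byte length (the
// length includes itself but not the marker). Dimensions live in the
// SOFn segment (markers 0xFFC0-0xFFCF, excluding 0xFFC4/C8/CC).
function jpegSize(buf) {
  if (buf.readUInt16BE(0) !== 0xffd8) throw new Error('Not a JPEG')
  let offset = 2 // skip the SOI marker
  while (offset + 4 <= buf.length) {
    const marker = buf.readUInt16BE(offset)
    const length = buf.readUInt16BE(offset + 2)
    const isSOF = marker >= 0xffc0 && marker <= 0xffcf &&
      ![0xffc4, 0xffc8, 0xffcc].includes(marker)
    if (isSOF) {
      // SOFn payload: length (2), precision (1), height (2), width (2), ...
      return {
        height: buf.readUInt16BE(offset + 5),
        width: buf.readUInt16BE(offset + 7),
      }
    }
    offset += 2 + length // marker bytes + segment length
  }
  throw new Error('SOF marker not found; need more bytes')
}
```

Because APPn segments (JFIF, EXIF, etc.) and quantization tables come before the scan data, this loop usually terminates within the first kilobyte or two of the file.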
Anyway, here's a full demo that works for me on Node 10+. You only need getStreamImageSize:
const url = require('url')
const https = require('https')
const sizeOf = require('image-size')

const imgUrl = 'https://i.imgur.com/O2HfwuZ.jpg'
const options = url.parse(imgUrl)

async function getStreamImageSize(stream) {
  const chunks = []
  for await (const chunk of stream) {
    chunks.push(chunk)
    try {
      // Try to parse the dimensions from what we have so far.
      // Returning here exits the for-await loop, which destroys the
      // stream and stops the download.
      return sizeOf(Buffer.concat(chunks))
    } catch (error) { /* Not enough data yet, keep reading */ }
  }
  // Stream ended: parse whatever we got (throws if it's not an image).
  return sizeOf(Buffer.concat(chunks))
}

https.get(options, async stream => {
  console.log(await getStreamImageSize(stream))
})
This is so fast that in my tests it works with the very first chunk.
I was wondering whether we need to buffer the image at all. Couldn't we get the dimensions from its metadata, e.g. EXIF? I'd guess the OS doesn't read entire images just to report their size; it probably reads the EXIF info.
EXIF is often much further into the image data than the block with the dimensions, so you would end up downloading more of the image buffer than you need to.