
Better handle large memory usage from PNG decompression bombs

Open aselamal opened this issue 5 years ago • 0 comments

PNG uses DEFLATE compression, which allows a large amount of data to be packed into a small image file.

This can be exploited especially in the ancillary chunks, where runs of repeated bytes compress very well. libpng has an article about this here.

One suggestion is to enforce memory limits when processing ancillary chunks such as zTXt.

Maybe there is a way to limit this in the metadata-extractor project?
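One possible shape for such a limit is a bounded variant of readAllBytes. The sketch below is hypothetical (readAllBytesCapped is not part of the metadata-extractor API); it fails fast once the decompressed data exceeds a caller-supplied cap:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public final class BoundedStreamUtil {
        // Reads at most maxBytes from the stream; throws if more data
        // follows, so a decompression bomb fails fast instead of
        // exhausting the heap.
        public static byte[] readAllBytesCapped(InputStream stream, int maxBytes) throws IOException {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buffer = new byte[4096];
            int read;
            while ((read = stream.read(buffer)) != -1) {
                if (out.size() + read > maxBytes)
                    throw new IOException("Decompressed data exceeds " + maxBytes + " bytes");
                out.write(buffer, 0, read);
            }
            return out.toByteArray();
        }
    }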

For example, PngMetadataReader currently inflates and reads all bytes into one giant buffer (via StreamUtil.readAllBytes) without first considering how big that buffer might be. This allows specially crafted images to potentially cause OOM errors in any project that uses metadata-extractor:

    else if (chunkType.equals(PngChunkType.zTXt)) {
        ....
        textBytes = StreamUtil.readAllBytes(
                new InflaterInputStream(
                        new ByteArrayInputStream(bytes, bytes.length - bytesLeft, bytesLeft)));
    } catch (java.util.zip.ZipException zex) {
        textBytes = null;
        PngDirectory directory = new PngDirectory(PngChunkType.zTXt);
        directory.addError(String.format(
                "Exception decompressing PNG zTXt chunk with keyword \"%s\": %s",
                keyword, zex.getMessage()));
        metadata.addDirectory(directory);
    }
    ....
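With such a helper in place, the zTXt branch could bound the decompressed size instead of reading unconditionally. For example, as a sketch reusing the hypothetical readAllBytesCapped above:

    textBytes = BoundedStreamUtil.readAllBytesCapped(
            new InflaterInputStream(
                    new ByteArrayInputStream(bytes, bytes.length - bytesLeft, bytesLeft)),
            1024 * 1024); // arbitrary 1 MiB cap; could be made configurable

The existing catch block would then also need to handle the IOException thrown when the cap is exceeded, reporting it as a directory error rather than letting it propagate.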

Here is an example image, which is only 700 KB but whose zTXt chunks decompress to around 250 MB. [attached image: largeztxtchunk]
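To see why such a ratio is plausible, the standalone sketch below (not from the issue) deflates 250 MB of identical bytes in 1 MB slices; DEFLATE's maximum compression ratio is roughly 1000:1, so the compressed output comes to only a few hundred kilobytes:

    import java.util.zip.Deflater;

    public class BombRatioDemo {
        public static void main(String[] args) {
            byte[] chunk = new byte[1024 * 1024]; // 1 MB of zero bytes
            byte[] out = new byte[64 * 1024];
            Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
            long compressed = 0;
            // Feed the same highly repetitive 1 MB slice 250 times.
            for (int i = 0; i < 250; i++) {
                deflater.setInput(chunk);
                while (!deflater.needsInput())
                    compressed += deflater.deflate(out);
            }
            deflater.finish();
            while (!deflater.finished())
                compressed += deflater.deflate(out);
            deflater.end();
            // Prints a size on the order of a few hundred KB for 250 MB of input.
            System.out.println("Compressed size: " + compressed + " bytes");
        }
    }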

aselamal · Nov 19 '18 06:11