
zstd: Pass a maximum decompressed size for coredumps

Open • g2p opened this pull request 1 year ago • 9 comments

This fixes handling of systemd coredumps, which don't necessarily include a decompressed-size header.

The maximum size was set at 512M based on looking at existing core dumps.

Fixes https://bugs.launchpad.net/apport/+bug/2081708
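A minimal sketch of the idea, for readers following along: pass `max_output_size` to python-zstandard's one-shot `decompress()`, so frames without a content-size header still decode. The helper name is illustrative, and the exact constant value is an assumption based on the "512M" mentioned above (the name `COREDUMP_MAX_SIZE` appears later in this thread):

```python
import zstandard

# Assumed cap of 512 MiB, matching the "512M" picked in this PR.
COREDUMP_MAX_SIZE = 512 * 1024 * 1024


def decompress_coredump(compressed: bytes) -> bytes:
    """Decompress a zstd frame that may lack a decompressed-size header.

    systemd coredumps do not necessarily record the content size in the
    frame header, so zstandard needs an explicit output bound.
    """
    dctx = zstandard.ZstdDecompressor()
    return dctx.decompress(compressed, max_output_size=COREDUMP_MAX_SIZE)
```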

g2p avatar Sep 23 '24 10:09 g2p

Everyone contributing to this PR has now signed the CLA. Thanks!

github-actions[bot] avatar Sep 23 '24 10:09 github-actions[bot]

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 83.02%. Comparing base (9c0b904) to head (00f2d13).

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #397   +/-   ##
=======================================
  Coverage   83.01%   83.02%           
=======================================
  Files          99       99           
  Lines       20329    20337    +8     
  Branches     3206     3208    +2     
=======================================
+ Hits        16876    16884    +8     
  Misses       2933     2933           
  Partials      520      520           


codecov[bot] avatar Sep 23 '24 10:09 codecov[bot]

Thanks. What happens when the coredump size is bigger than 512 MB? Shouldn't we determine the size dynamically from the coredump file?
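For reference, python-zstandard can report the size recorded in a frame header, though per the PR description systemd coredumps may not record one. A minimal sketch (helper name illustrative):

```python
import zstandard


def recorded_size(compressed: bytes) -> int | None:
    """Return the decompressed size from the zstd frame header, if recorded.

    frame_content_size() returns -1 when the header carries no size, which
    is the case for at least some systemd coredumps.
    """
    size = zstandard.frame_content_size(compressed)
    return None if size == -1 else size
```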

Are you willing to work on the patch (linter fix, adding a test case, etc.), or is it a drive-by, fire-and-forget patch that we should polish?

bdrung avatar Sep 23 '24 11:09 bdrung

Decompression is done in memory, so maybe it is better to keep a length limit.

I've reformatted with black and added a test case. Beyond that it is kind of a drive-by, if it needs more significant reworking.

g2p avatar Sep 23 '24 14:09 g2p

Thanks. The merge request looks good. There is one question to ask before merging: What will happen when the core-dump size is bigger than COREDUMP_MAX_SIZE?

bdrung avatar Sep 24 '24 16:09 bdrung

The failing codecov/project can be ignored. That is caused by fluctuating coverage (the previous run had some failure code path covered).

bdrung avatar Sep 24 '24 16:09 bdrung

It will blow up, as it did in some cases before the patch (systemd coredump on oracular).

The problem is that callers aren't expecting errors, for example when going through __len__ and get_complete_size.

So a more complete fix could involve validation, silently truncating the core dump (easy enough with another parameter to zstandard's decompress()), or skipping it when it's too large.
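A sketch of the skip-it variant, assuming callers can cope with a None return; python-zstandard raises ZstdError when a frame of unknown size would exceed max_output_size:

```python
import zstandard

COREDUMP_MAX_SIZE = 512 * 1024 * 1024


def decompress_or_skip(compressed: bytes) -> bytes | None:
    """Decompress, or return None if the dump exceeds the size bound."""
    dctx = zstandard.ZstdDecompressor()
    try:
        return dctx.decompress(compressed, max_output_size=COREDUMP_MAX_SIZE)
    except zstandard.ZstdError:
        # Raised when the frame decompresses to more than max_output_size
        # (or on corrupt input); either way the core dump is unusable here.
        return None
```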

g2p avatar Sep 24 '24 16:09 g2p

I'm not asking you to implement this, but I think the best path forward would be to either move to on-disk decompression (e.g. in /var/tmp or $XDG_CACHE ?) or just skip the crash if it's too big. While they're still valuable in some use cases, truncated dumps are useless for us.
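A sketch of the on-disk route, using streaming decompression so memory use stays bounded regardless of dump size; the path and prefix are illustrative:

```python
import tempfile

import zstandard


def decompress_to_disk(compressed_path: str) -> str:
    """Stream-decompress a coredump to a temporary file under /var/tmp."""
    dctx = zstandard.ZstdDecompressor()
    with open(compressed_path, "rb") as src, tempfile.NamedTemporaryFile(
        dir="/var/tmp", prefix="apport-core-", delete=False
    ) as dst:
        # copy_stream() decompresses chunk by chunk; only one chunk is
        # ever held in memory at a time.
        dctx.copy_stream(src, dst)
        return dst.name
```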

schopin-pro avatar Nov 08 '24 11:11 schopin-pro

This is a different approach that fixes only __len__: https://github.com/canonical/apport/pull/423
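That PR's actual change isn't reproduced here, but one way to compute the length without buffering the whole dump in memory is to stream-decompress and count the bytes. A sketch under that assumption:

```python
import io

import zstandard


def decompressed_length(compressed: bytes) -> int:
    """Count the decompressed size of a zstd frame without holding it all."""
    dctx = zstandard.ZstdDecompressor()
    total = 0
    with dctx.stream_reader(io.BytesIO(compressed)) as reader:
        # Read in 1 MiB chunks; only the current chunk stays in memory.
        while chunk := reader.read(1024 * 1024):
            total += len(chunk)
    return total
```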

bdrung avatar Dec 20 '24 12:12 bdrung