Campaign workflow discussion
Meta-issue to track some open issues / discussion points from the campaign automation PR.
- [ ] `eu-addr2line` sometimes returns a non-zero exit code. For now I just muted it, but maybe not all of the output is there (see the sketch below).
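A minimal sketch of tolerating the non-zero exit while still sanity-checking the output; the wrapper name and the one-line-per-address expectation are assumptions, not the actual pipeline code:

```python
import subprocess
import sys

def run_addr2line(binary, addresses):
    """Resolve addresses with eu-addr2line, tolerating a non-zero exit code.

    Addresses are fed on stdin; stdout is kept regardless of the exit code,
    but we warn if the number of output lines does not match the number of
    input addresses, i.e. the output may be incomplete.
    """
    proc = subprocess.run(
        ["eu-addr2line", "-e", binary],
        input="\n".join(addresses) + "\n",
        capture_output=True,
        text=True,
        check=False,  # do not raise on non-zero exit, just inspect the output
    )
    lines = proc.stdout.splitlines()
    if len(lines) != len(addresses):
        print(f"warning: eu-addr2line exit={proc.returncode}, "
              f"expected {len(addresses)} lines, got {len(lines)}",
              file=sys.stderr)
    return lines
```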
- [ ] since `smatcher` is doing the smatch line matching, and can also consume input from the ghidra path, can we remove the smatch match feature from `fast_matcher` and rename the tool to something more obvious, like `edges2lines`?
- [ ] global resources created by `make prepare` should have a separate location, like `$ASSET_ROOT=$BKC_ROOT/assets/`
- [ ] the smatch audit should be done for the currently selected config and kernel tree. It would be good to at least have a check for this, or even better, write the output directly to `$ASSET_ROOT/<version/config-hash>/`. OTOH, the smatch audit always takes a couple of minutes and is something we may often want to skip, e.g. with `pipeline.py --skip-smatch`? (rough sketch below)
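A rough sketch of how the skip flag and the versioned asset path could look in pipeline.py; the flag name, helper names, and the hashing scheme are assumptions, not existing code:

```python
import argparse
import hashlib
from pathlib import Path

def parse_args():
    parser = argparse.ArgumentParser(description="campaign pipeline")
    parser.add_argument("--skip-smatch", action="store_true",
                        help="reuse an existing smatch audit instead of regenerating it")
    return parser.parse_args()

def smatch_asset_dir(asset_root: Path, kernel_version: str, config_path: Path) -> Path:
    """Key the smatch audit output by kernel version and a hash of the .config,
    so a stale audit for a different config/tree is never silently reused."""
    config_hash = hashlib.sha256(config_path.read_bytes()).hexdigest()[:12]
    return asset_root / kernel_version / config_hash

# usage sketch (run_smatch_audit is a hypothetical helper):
# args = parse_args()
# audit_dir = smatch_asset_dir(asset_root, kernel_version, kernel_tree / ".config")
# if not args.skip_smatch or not audit_dir.exists():
#     run_smatch_audit(kernel_tree, audit_dir)
```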
- [ ] all the `fuzz.sh` jobs can be turned into a single parametrized task abstraction?
- [ ] we are not using much of Parsl in the end, and the cloud/cluster abstractions are probably also half broken. Perhaps we can switch this pipeline to `multiprocessing.Pool()` now? (sketch below, covering this and the task abstraction above)
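A sketch of how the two previous items could fit together, assuming each campaign job boils down to one `fuzz.sh` invocation parametrized by harness and workdir; the task fields, `fuzz.sh` argument list, and env hook are illustrative:

```python
import multiprocessing
import os
import subprocess
from dataclasses import dataclass
from pathlib import Path

@dataclass
class FuzzTask:
    """One fuzz.sh job; campaign jobs differ only in these parameters."""
    harness: str
    workdir: Path
    extra_args: tuple = ()

def run_task(task: FuzzTask) -> int:
    """Run a single fuzz.sh invocation and return its exit code."""
    cmd = ["./fuzz.sh", "run", str(task.workdir), *task.extra_args]
    env = {**os.environ, "KAFL_HARNESS": task.harness}  # illustrative env hook
    return subprocess.run(cmd, env=env, check=False).returncode

if __name__ == "__main__":
    harnesses = ("VIRTIO_CONSOLE_INIT", "BPH_HANDLE_CONTROL_MESSAGE")
    tasks = [FuzzTask(h, Path("campaign") / h) for h in harnesses]
    # multiprocessing.Pool replaces the Parsl executor for simple local runs
    with multiprocessing.Pool(processes=4) as pool:
        results = pool.map(run_task, tasks)
    print(dict(zip(harnesses, results)))
```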
- [ ] look at campaign stats and tune the harness configuration, document expected results
  - [x] unclear `kafl_agent_init()` crash
  - [x] `VIRTIO_CONSOLE_INIT` not working?
  - [x] `BPH_HANDLE_CONTROL_MESSAGE` not working?
  - [x] `P9_VIRTFS` not working?
  - [x] `US_RESUME_SUSPEND` no regular paths?
- [ ] smatcher / smatch match reports
  - [x] final `smatcher` report should be added to pipeline.py
  - [ ] need to separate the output (report) from stdout/stderr
  - [ ] don't abort on a broken workdir, just report it and keep going
  - [ ] directly scan for workdirs/addr2line inputs so the user can supply a single campaign dir (see the sketch below)
  - [ ] render a simple html page?
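For the workdir scanning point, a small sketch of what the discovery could look like; the `traces/` probe is an assumption about the campaign layout:

```python
from pathlib import Path

def find_workdirs(campaign_dir: Path):
    """Recursively find kAFL workdirs under a campaign directory.

    A directory counts as a workdir if it matches 'workdir_*' and contains a
    'traces' subdirectory; adjust the probe to whatever the layout requires.
    """
    for candidate in sorted(campaign_dir.glob("**/workdir_*")):
        if candidate.is_dir() and (candidate / "traces").is_dir():
            yield candidate

# usage: smatcher could accept either a single workdir or a whole campaign dir
# for wd in find_workdirs(Path("campaign-2022-10-19-test-pipeline")):
#     process_workdir(wd)   # hypothetical per-workdir handler
```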
- [ ] kAFL workdirs should be in /dev/shm for better perf and then moved to a storage folder. Here is another reason:

  ```
  Launching kAFL with workdir /home/steffens/data/campaign-2022-10-19-test-pipeline/BPH_HANDLE_CONTROL_MESSAGE/workdir_8z9no8xn..
  [...]
    File "/home/steffens/ccc/kafl/fuzzer/kafl_fuzzer/manager/communicator.py", line 30, in __init__
      self.listener = Listener(self.address, 'AF_UNIX', backlog=1000)
    File "/usr/lib/python3.8/multiprocessing/connection.py", line 448, in __init__
      self._listener = SocketListener(address, family, backlog)
    File "/usr/lib/python3.8/multiprocessing/connection.py", line 591, in __init__
      self._socket.bind(address)
  OSError: AF_UNIX path too long
  ```

  (Longer term, the sockets + named shm files used by kafl/qemu should be separated from the workdir..)
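The underlying limit is that `sun_path` for AF_UNIX sockets is roughly 108 bytes on Linux, so a deep campaign path plus the socket name overflows it. A sketch of the proposed flow, with an assumed helper name, keeping the live workdir under /dev/shm and moving it to the storage folder once the job finishes:

```python
import shutil
import tempfile
from pathlib import Path

def with_shm_workdir(storage_dir: Path, run_job) -> Path:
    """Run a job with its workdir in /dev/shm (short path, fast tmpfs),
    then move the results to the long-term storage location."""
    shm_workdir = Path(tempfile.mkdtemp(prefix="kafl_", dir="/dev/shm"))
    try:
        run_job(shm_workdir)  # e.g. launch kAFL pointing at shm_workdir
    finally:
        storage_dir.mkdir(parents=True, exist_ok=True)
        dest = storage_dir / shm_workdir.name
        shutil.move(str(shm_workdir), str(dest))
    return dest
```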
- [x] In pipeline.py, the task for `summarize.sh` generates the `summary.html` but does not produce the associated decoded stack traces. The necessary jobs are written to `stack_decode.lst`; we just need to run them.
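A minimal sketch of running those pending jobs, assuming `stack_decode.lst` contains one shell command per line (that format is an assumption):

```python
import subprocess
from pathlib import Path

def run_stack_decode_jobs(lst_file: Path) -> None:
    """Execute each pending decode job listed in stack_decode.lst."""
    jobs = [line.strip() for line in lst_file.read_text().splitlines() if line.strip()]
    for job in jobs:
        # each line is assumed to be a complete shell command written by summarize.sh
        result = subprocess.run(job, shell=True, check=False)
        if result.returncode != 0:
            print(f"decode job failed ({result.returncode}): {job}")

# run_stack_decode_jobs(campaign_dir / "stack_decode.lst")
```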
- [ ] `pipeline.py --use-fast-matcher 1 -p 16` causes `16*$(nproc)` processes due to this:
  https://github.com/intel/ccc-linux-guest-hardening/blob/d50c17787f5509d8933fbca7fcaaa8a656e13157/bkc/kafl/fuzz.sh#L348
  Should we use `taskset` to limit the child's visible `nproc`, or better, run smatch directly?
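One way to cap what the child sees: since `nproc` honors the CPU affinity mask, launching each `fuzz.sh` job under `taskset` (or setting the affinity before exec) bounds `$(nproc)` inside the script. A sketch; the `fuzz.sh` argument list and CPU slicing scheme are illustrative:

```python
import os
import subprocess

def run_fuzz_with_cpu_budget(cmd, cpus_per_job: int, job_index: int):
    """Pin one fuzz.sh job to its own CPU slice so $(nproc) inside the
    script reflects cpus_per_job rather than the whole machine."""
    first = job_index * cpus_per_job
    cpu_set = set(range(first, first + cpus_per_job))

    def limit_affinity():
        # nproc reads the affinity mask, so this caps what the child sees
        os.sched_setaffinity(0, cpu_set)

    return subprocess.Popen(cmd, preexec_fn=limit_affinity)

# roughly equivalent to: taskset -c <first>-<first+cpus_per_job-1> ./fuzz.sh ...
# proc = run_fuzz_with_cpu_budget(["./fuzz.sh", "run", str(workdir)],
#                                 cpus_per_job=4, job_index=0)
```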