NYX Executor (GSoC '22)
AFL++ is in C, so it needs wrappers. Since we can call directly into Rust, I think it'll be cleaner/more powerful without wrappers, right?
On Mon, Jul 11, 2022, 08:03, syheliel commented on this pull request, in libafl_nyx/src/nyx_bridge.rs (https://github.com/AFLplusplus/LibAFL/pull/693#discussion_r917568349):
```diff
 }
-pub fn nyx_get_input_buffer(mut nyx_process: NyxProcess) -> *mut u8 {
-    return nyx_process.input_buffer_mut().as_mut_ptr();
+pub fn nyx_get_input_buffer(nyx_process: &NyxProcess) -> *mut u8 {
```
To keep the same set of APIs as what's in AFL++'s Nyx mode; it just makes my life easier :)
The next step, either in this PR within the GSoC scope or in a follow-up one, is to parse the redqueen file in the shared dir when redqueen is enabled in NyxProcess. The file must be parsed into a struct that implements this trait: https://github.com/AFLplusplus/LibAFL/blob/main/libafl/src/observers/cmp.rs#L89
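For illustration, a rough sketch of what that parsing step could look like. The file name, the per-line format, and the CmpLog trait below are stand-ins of my own, not the actual libafl trait or the actual Nyx redqueen output format:

```rust
use std::fs;
use std::path::Path;

/// Illustrative stand-in for the comparison-log trait linked above; the real
/// struct would implement the trait from libafl/src/observers/cmp.rs instead.
pub trait CmpLog {
    fn len(&self) -> usize;
    fn operands_of(&self, idx: usize) -> (u64, u64);
}

/// One recorded comparison: the two operands redqueen observed at a cmp site.
pub struct RedqueenCmp {
    pub lhs: u64,
    pub rhs: u64,
}

pub struct NyxCmpLog {
    cmps: Vec<RedqueenCmp>,
}

impl NyxCmpLog {
    /// Parse the redqueen dump from the shared dir. The file name and the
    /// "lhs,rhs" per-line format are assumed purely for illustration; the
    /// actual Nyx redqueen output format has to be matched exactly.
    pub fn from_shared_dir(shared_dir: &Path) -> std::io::Result<Self> {
        let raw = fs::read_to_string(shared_dir.join("redqueen_results.txt"))?;
        let mut cmps = Vec::new();
        for line in raw.lines() {
            if let Some((l, r)) = line.split_once(',') {
                if let (Ok(lhs), Ok(rhs)) = (l.trim().parse(), r.trim().parse()) {
                    cmps.push(RedqueenCmp { lhs, rhs });
                }
            }
        }
        Ok(Self { cmps })
    }
}

impl CmpLog for NyxCmpLog {
    fn len(&self) -> usize {
        self.cmps.len()
    }

    fn operands_of(&self, idx: usize) -> (u64, u64) {
        (self.cmps[idx].lhs, self.cmps[idx].rhs)
    }
}
```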
if it is ready for you, we can merge this? @domenukk @andreafioraldi
Hi there! First of all, great work! I'm glad to see that Nyx mode is finally available in LibAFL :-)
I've just tested the patches, and it seems like something is off with the instrumentation mode. For some reason, libAFL is not getting any coverage feedback. Intel PT mode seems to work just fine, though.
I'm still investigating and will get back to you as soon as I know more.
Some other nits: Could we remove the -C curl argument in setup_libxml2.sh? That option is not available on my ancient curl build that is shipped with Ubuntu 21.04.
And reloading the KVM LKM with the vmware-backdoor option enabled is only required if you want to use Nyx without KVM-Nyx (and thus without Intel PT support). So maybe we can remove that from the setup script as well?
Okay, I found the cause: So, currently, the Nyx fuzzing harness (in the packer repo) doesn't support SanitizerCoverage. But I'm pretty sure that not much work is needed to add support for this coverage instrumentation. It will look something like this code (but it has to be part of the Nyx runtime that is executed inside the guest):
https://github.com/AFLplusplus/LibAFL/blob/main/libafl_targets/src/sancov_pcguard.rs
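To make the gap concrete, here is a minimal sketch of the two SanitizerCoverage pc-guard hooks, modeled loosely on the linked sancov_pcguard.rs. MAP_SIZE and the local EDGES_MAP are placeholders; the actual guest-side Nyx runtime would have to write into the coverage buffer it shares with the host instead.

```rust
const MAP_SIZE: usize = 65536;
static mut EDGES_MAP: [u8; MAP_SIZE] = [0; MAP_SIZE];
static mut GUARD_ID: u32 = 0;

#[no_mangle]
pub unsafe extern "C" fn __sanitizer_cov_trace_pc_guard_init(mut start: *mut u32, stop: *mut u32) {
    // Called once per instrumented module: assign every edge guard a map index.
    if start == stop || *start != 0 {
        return;
    }
    while start < stop {
        GUARD_ID = GUARD_ID.wrapping_add(1);
        *start = GUARD_ID % MAP_SIZE as u32;
        start = start.add(1);
    }
}

#[no_mangle]
pub unsafe extern "C" fn __sanitizer_cov_trace_pc_guard(guard: *mut u32) {
    // Called on every instrumented edge: bump its hit counter in the map.
    let idx = *guard as usize;
    EDGES_MAP[idx] = EDGES_MAP[idx].wrapping_add(1);
}
```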
I can provide a patch for that by next week (if that's okay for you). But in the meantime, you can also use afl-clang-fast or any other AFL++ compiler to get a target running with libAFL in Nyx mode.
@schumilo thanks for your feedback! I have fixed the -C parameter
Let's merge this? BTW @syheliel, could you push directly to a new branch in this repo next time, not to your own repo? That way it's easier for us to test & edit your code.
perhaps set -e at the top of the libxml2 bootstrap shell script (setup_libxml2.sh), since each step is required for the next? for example, if libtool isn't installed, the script will fail on make but keep attempting the rest of the steps, and the original error gets lost amid the error waterfall
ideally I think it's better to do this with build.rs not with a shell script
Regarding set -e: some commands are not guaranteed to run on everyone's OS (like sudo modprobe), but I can add || exit to ensure the essential commands succeed.
CI does not support sudo modprobe kvm-intel right now, so CI will fail if we put it in build.rs.
Why? You could explicitly check the return codes of Command, and if one fails you can println!("cargo:warning=...") etc.
I'm not personally against a shell script here, since it's Linux-only anyway, but a build.rs would definitely be a clean way to do this.
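As a sketch of that suggestion (not the PR's actual build script), checking the status of each Command and downgrading failures to warnings could look roughly like this; the modprobe call is just the example from this thread:

```rust
// build.rs sketch: run optional setup steps, but only warn instead of failing
// the build when they don't work, e.g. in CI where `sudo modprobe` is unavailable.
use std::process::Command;

fn try_run(program: &str, args: &[&str]) {
    match Command::new(program).args(args).status() {
        Ok(status) if status.success() => {}
        Ok(status) => {
            println!("cargo:warning={program} exited with {status}; continuing anyway");
        }
        Err(err) => {
            println!("cargo:warning=could not run {program}: {err}");
        }
    }
}

fn main() {
    // Optional on developer machines, unavailable in CI: just warn if it fails.
    try_run("sudo", &["modprobe", "kvm-intel"]);
}
```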
I run cargo build --release in my script to get libafl_cc/libafl_cxx, so putting the script in build.rs would cause an infinite loop...
@syheliel does this work with the sancov_pcguard instrumentation? If not, can you change it to use afl-clang-fast instead so that it works for now? Once Sergej adds the patch, we can switch it back.
@tokatoka sure, just add export CC=afl-clang-fast and export CXX=afl-clang-fast++. Sergej says he will patch it this week, maybe ping him? @schumilo
yes but I think we want to download and compile afl++ in your build.rs to make it work even when there's no system-wide afl++ installation
like this: https://github.com/AFLplusplus/LibAFL/blob/main/fuzzers/forkserver_simple/build.rs#L7
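A sketch of that approach, assuming the usual clone-and-make route; the OUT_DIR layout and the clone step are illustrative, not necessarily what the linked forkserver_simple/build.rs does:

```rust
// build.rs sketch: fetch and build AFL++ into OUT_DIR so afl-clang-fast is
// available even without a system-wide installation.
use std::env;
use std::path::Path;
use std::process::Command;

fn main() {
    let out_dir = env::var("OUT_DIR").unwrap();
    let afl_dir = Path::new(&out_dir).join("AFLplusplus");

    if !afl_dir.exists() {
        let status = Command::new("git")
            .args(["clone", "--depth", "1", "https://github.com/AFLplusplus/AFLplusplus"])
            .arg(&afl_dir)
            .status()
            .expect("failed to spawn git");
        assert!(status.success(), "git clone of AFL++ failed");
    }

    let status = Command::new("make")
        .arg("all")
        .current_dir(&afl_dir)
        .status()
        .expect("failed to spawn make");
    assert!(status.success(), "building AFL++ failed");

    // Re-run only when the build script itself changes.
    println!("cargo:rerun-if-changed=build.rs");
}
```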
Calling a shell script from build.rs or writing the commands into the build.rs directly makes no difference during compilation
libafl_cc is one of the binaries built by cargo, which means I can't get libafl_cc when running the script in build.rs.
I looked at the previous coverage-guided examples and found that Makefile.toml, not build.rs, is what we want.
ah, I got what you mean.
you need to run things in setup_libxml2.sh after cargo build --release because you need libafl_cc for that.
ok then probably this is better handled with Makefile.toml
so can you make this change? https://github.com/AFLplusplus/LibAFL/pull/693#issuecomment-1219561704
Fixed. Currently I'm using apt install afl++-clang, since it's supposed to be a temporary change.
We should use the latest AFLpp from git, if anything. The one from apt is ancient...
AFL++'s version in apt is 4.01c right now, not 2.x 😂
Oh wow that's amazing, they updated 🎉