
Add arguments for sendmmsg and recvmmsg

Open Molter73 opened this issue 1 year ago • 22 comments

What type of PR is this?

/kind feature

Any specific area of the project related to this PR?

/area API-version

/area build

/area CI

/area driver-kmod

/area driver-bpf

/area driver-modern-bpf

/area libscap

/area libpman

/area libsinsp

/area tests

Does this PR require a change in the driver versions?

/version driver-API-version-major

/version driver-API-version-minor

/version driver-API-version-patch

/version driver-SCHEMA-version-major

/version driver-SCHEMA-version-minor

/version driver-SCHEMA-version-patch

What this PR does / why we need it:

Add argument processing for the sendmmsg and recvmmsg syscalls.

These are quite tricky: they behave the same way sendmsg and recvmsg do, but allow multiple messages to be sent or received in a single syscall. This breaks some invariants in how Falco processes events; for instance, a sendmmsg call could send messages to multiple destinations over connectionless UDP, which would require multiple socket tuples to represent in userspace. To work around this I proposed issuing multiple events from the kernel. This has led me to two implementations:

  • The kernel module and legacy eBPF probe only process the first message in the syscall. This is due both to verifier issues on the eBPF probe and to the impossibility of issuing multiple event headers in a single syscall.
  • The modern BPF probe processes all events by using bpf_loop if available, or does a best effort with a regular loop otherwise.

The implementation is far from perfect and I'm not super happy with it, but it's the best I've managed to come up with so far. Any suggestions for improvement are welcome.

Which issue(s) this PR fixes:

Fixes #1636

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

Add arguments for sendmmsg and recvmmsg syscalls.

Molter73 avatar Aug 27 '24 16:08 Molter73

/cc @Andreagit97 @FedeDP

Molter73 avatar Aug 27 '24 16:08 Molter73

Please double check driver/SCHEMA_VERSION file. See versioning.

/hold

github-actions[bot] avatar Aug 27 '24 16:08 github-actions[bot]

Thanks Mauro! Putting this in the TBD milestone because we won't have enough time to review/improve it in the next ~10 days, and then we will have to tag libs and drivers for the next Falco release. But we will make sure to come back to it as soon as possible. /milestone TBD

FedeDP avatar Aug 28 '24 06:08 FedeDP

Hey, thank you! Before looking at this PR I was thinking of a tail call approach: you start with a first bpf program that sends the first message and then you tail call until you have no more messages to send (the limit is 32 if I recall correctly). This approach should work well in the modern ebpf (have you already considered it?), but I see the issue that it could cause with the old drivers... we have no easy way to create multiple headers and send multiple events inside the same filler; the whole architecture is built on the assumption of sending just one event per filler. I understand that creating all these helpers just for two syscalls could be overkill. Sending just the first message/tuple could be a hypothesis for the old drivers, even if I don't like it so much. At least in the kernel module, I would like to send all the messages/tuples in some way. I will try to think if we can do something different, at least for the kernel module, but I'm not sure about the outcome...

Andreagit97 avatar Aug 28 '24 08:08 Andreagit97

Thanks Mauro! Putting this in the TBD milestone because we won't have enough time to review/improve it in the next ~10 days, and then we will have to tag libs and drivers for the next Falco release. But we will make sure to come back to it as soon as possible. /milestone TBD

No worries, I have a bunch of CI failures to go through before this can go in anyway, and I'm also not super happy about the implementation, so we can discuss as long as it takes.

I was thinking of a tail call approach: you start with a first bpf program that sends the first message and then you tail call until you have no more messages to send (the limit is 32 if I recall correctly). This approach should work well in the modern ebpf (have you already considered it?)

That was my first thought too; I discarded it because:

  • The cap is 33 tail calls, so it would be around 32 after the initial tail call into the filler. This would not allow us to capture all events anyway in the modern bpf.
  • I couldn't find an easy way to keep track of which event number we are actually on; that's probably a skill issue on my side, and using a map or some part of the context would probably be enough.

but I see the issue that it could cause with the old drivers... we have no easy way to create multiple headers and send multiple events inside the same filler; the whole architecture is built on the assumption of sending just one event per filler. I understand that creating all these helpers just for two syscalls could be overkill. Sending just the first message/tuple could be a hypothesis for the old drivers, even if I don't like it so much. At least in the kernel module, I would like to send all the messages/tuples in some way. I will try to think if we can do something different, at least for the kernel module, but I'm not sure about the outcome...

Yeah, I'm not super happy about this implementation either, but as you mention, it would take a pretty big effort to be able to send more than one message. I'm willing to put in the effort; I just need some pointers on where to start.

Also, I tried adding kernel tests that grab more than one event, and it looked like the call to evt_test->assert_event_presence(); was not moving the event forward and was instead validating the previous one. Or rather, it could tell there was another event in the buffer, but the internal values were not updated. Any ideas why this could happen? Or is the framework meant to work with just one event? Anyway, I'll keep looking into it, because I think it's a valuable test to have for these syscalls.

Molter73 avatar Aug 28 '24 08:08 Molter73

Looks like the scap tests are failing because a recvmmsg event doesn't have arguments in the file and can't be parsed correctly. I'm not sure how to fix that; do I need to recreate the scap file altogether?

Molter73 avatar Aug 28 '24 14:08 Molter73

That was my first thought too; I discarded it because:

  • The cap is 33 tail calls, so it would be around 32 after the initial tail call into the filler. This would not allow us to capture all events anyway in the modern bpf.

Yep, probably there is no reason to choose the tail call approach in the end; the loop seems to do its job :thinking:

  • I couldn't find an easy way to keep track of which event number we are actually on; that's probably a skill issue on my side, and using a map or some part of the context would probably be enough.

Probably it would be enough to check the offset inside the auxmap struct, more specifically the payload_pos, to see how much data we have already pushed. If we need other information, we could use some scratch space in the same map or add an additional map. But again, it's probably not necessary to go this way.

Yeah, I'm not super happy about this implementation either, but as you mention, it would take a pretty big effort to be able to send more than one message. I'm willing to put in the effort; I just need some pointers on where to start.

I'm not sure it is worth reworking the whole architecture for just these 2 new events... if we find a cool way to handle it with add-only code, ok, but otherwise I have some doubts...

Also, I tried adding kernel tests that grab more than one event, and it looked like the call to evt_test->assert_event_presence(); was not moving the event forward and was instead validating the previous one. Or rather, it could tell there was another event in the buffer, but the internal values were not updated. Any ideas why this could happen? Or is the framework meant to work with just one event? Anyway, I'll keep looking into it, because I think it's a valuable test to have for these syscalls.

Uhm, the framework should be able to retrieve more than one event; see an example here: https://github.com/falcosecurity/libs/blob/5ed00b2a9a627d9a6263369edfdcbaf5cc6b1485/test/drivers/test_suites/actions_suite/ring_buffer.cpp#L33. If the events are in order, we should be able to retrieve them...

To be honest, the best thing to do in the framework would be to scrape all the events in the buffers, save them in a C++ vector, and then search for the events we need in the vector. This would give us great debugging capabilities, since we could print all the events seen from a specific thread, for example, and easily understand why our event is not there (maybe it's just a wrong event type).

Andreagit97 avatar Aug 30 '24 09:08 Andreagit97

Usually there are no issues when we only add parameters to an event in the event table. This is a particular case because the previous number of parameters was 0... we should try to understand in which method we are hitting this exception:

[ RUN      ] scap_file_kexec_arm64.tail_lineage
Trying to open the right engine!
unknown file: Failure
C++ exception with description "vector::_M_range_check: __n (which is 0) >= this->size() (which is 0)" thrown in the test body.
Trying to open the right engine!
[  FAILED  ] scap_file_kexec_arm64.tail_lineage (28 ms)

Andreagit97 avatar Aug 30 '24 09:08 Andreagit97

I'm not sure it is worth reworking the whole architecture for just these 2 new events... if we find a cool way to handle it with add-only code, ok, but otherwise I have some doubts...

Alright, I'll refrain from doing anything else in this regard until we have some time to think about it.

Uhm, the framework should be able to retrieve more than one event; see an example here

That's weird, I'll give it a few more tries when I get a minute.

Usually there are no issues when we only add parameters to an event in the event table. This is a particular case because the previous number of parameters was 0... we should try to understand in which method we are hitting this exception.

I'll try to get a stack trace, I can't remember the exact point it failed at off the top of my head.

Molter73 avatar Aug 30 '24 10:08 Molter73

Maybe I can add an idea for the kernel module. In TRACEPOINT_PROBE(syscall_exit_probe, struct pt_regs *regs, long ret) we could add the following code:

//----------------------------------------------------------------------------- New code
if(event_pair->exit_event_type == PPME_SOCKET_SENDMMSG_X ||
   event_pair->exit_event_type == PPME_SOCKET_RECVMMSG_X)
{
	for(...)
	{
		// we need to add custom logic inside `record_event_all_consumers` for these
		// syscalls to understand which tuple and message we need to send.
		record_event_all_consumers(event_pair->exit_event_type, event_pair->flags, &event_data, KMOD_PROG_SYS_EXIT);
	}
	return;
}
//-----------------------------------------------------------------------------
if (event_pair->flags & UF_USED)
	record_event_all_consumers(event_pair->exit_event_type, event_pair->flags, &event_data, KMOD_PROG_SYS_EXIT);
else
	record_event_all_consumers(PPME_GENERIC_X, UF_ALWAYS_DROP, &event_data, KMOD_PROG_SYS_EXIT);

We could call record_event_all_consumers in a for loop, and each time we could send a different SENDMMSG event. We probably need to enrich the event_data struct with some information on how many messages we have already sent and which message to send next.

For what concerns the legacy probe, I had no great ideas; it's probably ok to send just the first message and not the others.

Andreagit97 avatar Sep 02 '24 08:09 Andreagit97

Neat idea Andrea! Re the old bpf probe, I agree: don't spend too much time on it for now; perhaps we can fix it in the future if needed. Also, having a working solution that, let's say, kills (through the verifier) like 30% of our support matrix is not gonna work IMHO; if we find a solution, we should find one that does not break too many verifiers. I think that is the hardest part.

FedeDP avatar Sep 02 '24 09:09 FedeDP

Alright then, I'll try to get the kmod implementation done when I get a minute. I also still need to go through the last errors in CI.

Molter73 avatar Sep 02 '24 11:09 Molter73

So here is the stacktrace for the scap file unit test that is failing:

#0  0x00007ffff5ea8664 in __pthread_kill_implementation () from /lib64/libc.so.6
#1  0x00007ffff5e4fc4e in raise () from /lib64/libc.so.6
#2  0x00007ffff5e37902 in abort () from /lib64/libc.so.6
#3  0x00007ffff60a5da9 in __gnu_cxx::__verbose_terminate_handler() [clone .cold] () from /lib64/libstdc++.so.6
#4  0x00007ffff60b7c4c in __cxxabiv1::__terminate(void (*)()) () from /lib64/libstdc++.so.6
#5  0x00007ffff60a5951 in std::terminate() () from /lib64/libstdc++.so.6
#6  0x00007ffff60b7ed8 in __cxa_throw () from /lib64/libstdc++.so.6
#7  0x00007ffff60a88df in std::__throw_out_of_range_fmt(char const*, ...) () from /lib64/libstdc++.so.6
#8  0x0000000000e753c9 in std::vector<sinsp_evt_param, std::allocator<sinsp_evt_param> >::_M_range_check (
    this=this@entry=0x7ffff4317a00, __n=__n@entry=0) at /usr/include/c++/14/bits/stl_vector.h:1160
#9  0x0000000000e6bde2 in std::vector<sinsp_evt_param, std::allocator<sinsp_evt_param> >::at (this=0x7ffff4317a00, __n=0)
    at /usr/include/c++/14/bits/stl_vector.h:1180
#10 sinsp_evt::get_param (this=this@entry=0x7ffff43179b8, id=id@entry=0)
    at /home/mmoltras/go/src/github.com/falcosecurity/libs/userspace/libsinsp/event.cpp:198
#11 0x0000000001174707 in sinsp_parser::reset (this=this@entry=0x515000009180, evt=evt@entry=0x7ffff43179b8)
    at /home/mmoltras/go/src/github.com/falcosecurity/libs/userspace/libsinsp/parsers.cpp:718
#12 0x000000000117ee53 in sinsp_parser::process_event (this=0x515000009180, evt=0x7ffff43179b8)
    at /home/mmoltras/go/src/github.com/falcosecurity/libs/userspace/libsinsp/parsers.cpp:90
#13 0x0000000001072167 in sinsp::next (this=0x7ffff4317840, puevt=0x7ffff3c77720)
    at /home/mmoltras/go/src/github.com/falcosecurity/libs/userspace/libsinsp/sinsp.cpp:1322
#14 0x0000000000c56032 in scap_file_test_helpers::capture_search_evt_by_type_and_tid (
    inspector=inspector@entry=0x7ffff4317840, type=type@entry=293, tid=tid@entry=107370)
    at /home/mmoltras/go/src/github.com/falcosecurity/libs/userspace/libsinsp/test/helpers/scap_file_helpers.cpp:45
#15 0x0000000000c5357d in scap_file_kexec_x86_tail_lineage_Test::TestBody (this=<optimized out>)
    at /home/mmoltras/go/src/github.com/falcosecurity/libs/userspace/libsinsp/test/scap_files/kexec_x86/kexec_x86.cpp:39
#16 0x00000000016c0064 in testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void> (
    object=object@entry=0x5020000082f0, method=&virtual testing::Test::TestBody(),
    location=location@entry=0x16cdb91 "the test body")
    at /home/mmoltras/go/src/github.com/falcosecurity/libs/build/googletest-src/googletest/src/gtest.cc:2657
#17 0x00000000016b409a in testing::Test::Run (this=this@entry=0x5020000082f0)
    at /home/mmoltras/go/src/github.com/falcosecurity/libs/build/googletest-src/googletest/src/gtest.cc:2674
#18 0x00000000016b4260 in testing::TestInfo::Run (this=0x5120000304c0)
    at /home/mmoltras/go/src/github.com/falcosecurity/libs/build/googletest-src/googletest/src/gtest.cc:2853
#19 0x00000000016b43d2 in testing::TestSuite::Run (this=0x512000030640)
    at /home/mmoltras/go/src/github.com/falcosecurity/libs/build/googletest-src/googletest/src/gtest.cc:3012
#20 0x00000000016b7f5e in testing::internal::UnitTestImpl::RunAllTests (this=0x516000000680)
    at /home/mmoltras/go/src/github.com/falcosecurity/libs/build/googletest-src/googletest/src/gtest.cc:5870
#21 0x00000000016c042f in testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool> (
    object=0x516000000680,
    method=(bool (testing::internal::UnitTestImpl::*)(testing::internal::UnitTestImpl * const)) 0x16b7c0e <testing::internal::UnitTestImpl::RunAllTests()>, location=location@entry=0x17c52d0 "auxiliary test code (environments or event listeners)")
    at /home/mmoltras/go/src/github.com/falcosecurity/libs/build/googletest-src/googletest/src/gtest.cc:2657
#22 0x00000000016b40e4 in testing::UnitTest::Run (this=0x2162540 <testing::UnitTest::GetInstance()::instance>)
    at /home/mmoltras/go/src/github.com/falcosecurity/libs/build/googletest-src/googletest/src/gtest.cc:5444
#23 0x000000000118d5ec in RUN_ALL_TESTS ()
    at /home/mmoltras/go/src/github.com/falcosecurity/libs/build/googletest-src/googletest/include/gtest/gtest.h:2293
#24 0x000000000118d5d5 in main (argc=<optimized out>, argv=0x7fffffffe2e8)
    at /home/mmoltras/go/src/github.com/falcosecurity/libs/build/googletest-src/googletest/src/gtest_main.cc:51

The interesting part is in these 2 lines though:

#10 sinsp_evt::get_param (this=this@entry=0x7ffff43179b8, id=id@entry=0)
    at /home/mmoltras/go/src/github.com/falcosecurity/libs/userspace/libsinsp/event.cpp:198
#11 0x0000000001174707 in sinsp_parser::reset (this=this@entry=0x515000009180, evt=evt@entry=0x7ffff43179b8)
    at /home/mmoltras/go/src/github.com/falcosecurity/libs/userspace/libsinsp/parsers.cpp:718

Looks like we are trying to get the file descriptor here and, because it is not set in the scap file, an out-of-range error is thrown.

The best I can think of is catching the exception for recvmmsg and sendmmsg in parsers.cpp and re-throwing for other syscalls, or checking before accessing for those syscalls, which kind of defeats the purpose of at. WDYT?

Molter73 avatar Sep 03 '24 11:09 Molter73

Codecov Report

Attention: Patch coverage is 78.94737% with 4 lines in your changes missing coverage. Please review.

Project coverage is 75.09%. Comparing base (be080b5) to head (84226b0). Report is 20 commits behind head on master.

Files with missing lines Patch % Lines
userspace/libsinsp/parsers.cpp 78.94% 4 Missing :warning:
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #2027      +/-   ##
==========================================
- Coverage   75.10%   75.09%   -0.02%     
==========================================
  Files         274      274              
  Lines       34302    34303       +1     
  Branches     5934     5930       -4     
==========================================
- Hits        25762    25759       -3     
- Misses       8540     8544       +4     
Flag Coverage Δ
libsinsp 75.09% <78.94%> (-0.02%) :arrow_down:


codecov[bot] avatar Sep 03 '24 11:09 codecov[bot]

Hey Mauro, can you rebase on top of master? (I know... there is the clang formatter to set up :( ). Anyway, I'll try to give it a look ASAP (next week, I promise!) and also check the scap file unit test that is failing.

FedeDP avatar Sep 30 '24 15:09 FedeDP

Hey Mauro, can you rebase on top of master? (I know... there is the clang formatter to set up :( ). Anyway, I'll try to give it a look ASAP (next week, I promise!) and also check the scap file unit test that is failing.

Hi Fede! I had to take a bit of a detour and haven't been actively working on the PR. I'll try to get it rebased and the kmod implementation done in the next couple of days, but no promises; I'm a bit swamped at the moment.

I also have to make some changes due to some errors I found when working on this in our fork, again, I'll try to get everything up to date ASAP.

Molter73 avatar Sep 30 '24 16:09 Molter73

Perf diff from master - unit tests

     3.77%     +1.02%  [.] gzfile_read
    10.69%     -0.70%  [.] sinsp::next
     8.62%     -0.58%  [.] sinsp_evt::get_type
     2.86%     +0.54%  [.] sinsp_thread_manager::get_thread_ref
     2.95%     -0.40%  [.] sinsp_parser::process_event
     1.68%     -0.35%  [.] sinsp::fetch_next_event
     5.62%     -0.33%  [.] next_event_from_file
     2.66%     +0.32%  [.] sinsp_thread_manager::find_thread
     0.65%     +0.29%  [.] libsinsp::events::is_unknown_event
     0.93%     +0.29%  [.] sinsp_evt::get_direction

Heap diff from master - unit tests

peak heap memory consumption: 0B
peak RSS (including heaptrack overhead): 0B
total memory leaked: 0B

Heap diff from master - scap file

peak heap memory consumption: 0B
peak RSS (including heaptrack overhead): 0B
total memory leaked: 0B

Benchmarks diff from master

Comparing gbench_data.json to /root/actions-runner/_work/libs/libs/build/gbench_data.json
Benchmark                                                         Time             CPU      Time Old      Time New       CPU Old       CPU New
----------------------------------------------------------------------------------------------------------------------------------------------
BM_sinsp_split_mean                                            +0.0004         +0.0004           149           149           149           149
BM_sinsp_split_median                                          +0.0039         +0.0039           149           150           149           150
BM_sinsp_split_stddev                                          +3.3417         +3.3457             0             2             0             2
BM_sinsp_split_cv                                              +3.3401         +3.3440             0             0             0             0
BM_sinsp_concatenate_paths_relative_path_mean                  -0.0026         -0.0026            56            56            56            56
BM_sinsp_concatenate_paths_relative_path_median                -0.0012         -0.0011            56            56            56            56
BM_sinsp_concatenate_paths_relative_path_stddev                -0.6211         -0.6214             0             0             0             0
BM_sinsp_concatenate_paths_relative_path_cv                    -0.6201         -0.6204             0             0             0             0
BM_sinsp_concatenate_paths_empty_path_mean                     +0.0333         +0.0333            24            25            24            25
BM_sinsp_concatenate_paths_empty_path_median                   +0.0300         +0.0300            24            25            24            25
BM_sinsp_concatenate_paths_empty_path_stddev                   +1.5733         +1.5662             0             0             0             0
BM_sinsp_concatenate_paths_empty_path_cv                       +1.4905         +1.4836             0             0             0             0
BM_sinsp_concatenate_paths_absolute_path_mean                  +0.0360         +0.0360            55            57            55            57
BM_sinsp_concatenate_paths_absolute_path_median                +0.0384         +0.0384            55            58            55            58
BM_sinsp_concatenate_paths_absolute_path_stddev                +0.3196         +0.3191             0             0             0             0
BM_sinsp_concatenate_paths_absolute_path_cv                    +0.2738         +0.2733             0             0             0             0
BM_sinsp_split_container_image_mean                            +0.0114         +0.0114           384           388           384           388
BM_sinsp_split_container_image_median                          +0.0168         +0.0168           382           389           382           389
BM_sinsp_split_container_image_stddev                          -0.4058         -0.4055             4             2             4             2
BM_sinsp_split_container_image_cv                              -0.4125         -0.4122             0             0             0             0

github-actions[bot] avatar Oct 01 '24 10:10 github-actions[bot]

Feel free to ping me when you feel this is ready and i'll give it a proper review! Thank you very much!

FedeDP avatar Oct 01 '24 12:10 FedeDP

Hey @FedeDP!

I haven't had time to do the proposed implementation to support multiple events in the kernel module; I might come back to it at a later point. This PR has become way bigger than I intended it to be, and I would appreciate a review (even if it's a quick one) before I keep adding to it. Hope that's ok.

I also expect CI to not be super happy, so I'll look into any issues I stumble on. I would also appreciate help understanding whether these changes require a major or minor version bump to the schema and API driver versions.

Molter73 avatar Oct 03 '24 11:10 Molter73

Oh and indeed the CI is super happy :)

BTW i just started kernel tests against this PR: https://github.com/falcosecurity/libs/actions/runs/11380128079 Will post the results once it finishes.

FedeDP avatar Oct 17 '24 07:10 FedeDP

ARM64:

| KERNEL | CMAKE-CONFIGURE | KMOD BUILD | KMOD SCAP-OPEN | BPF-PROBE BUILD | BPF-PROBE SCAP-OPEN | MODERN-BPF SCAP-OPEN |
| --- | --- | --- | --- | --- | --- | --- |
| amazonlinux2-5.4 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟡 |
| amazonlinux2022-5.15 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 |
| fedora-6.2 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 |
| oraclelinux-4.14 | 🟢 | 🟢 | 🟢 | 🟡 | 🟡 | 🟡 |
| oraclelinux-5.15 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 |
| ubuntu-6.5 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 |

AMD64:

| KERNEL | CMAKE-CONFIGURE | KMOD BUILD | KMOD SCAP-OPEN | BPF-PROBE BUILD | BPF-PROBE SCAP-OPEN | MODERN-BPF SCAP-OPEN |
| --- | --- | --- | --- | --- | --- | --- |
| amazonlinux2-4.19 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟡 |
| amazonlinux2-5.10 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | |
| amazonlinux2-5.15 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 |
| amazonlinux2-5.4 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟡 |
| amazonlinux2022-5.15 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 |
| amazonlinux2023-6.1 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 |
| archlinux-6.0 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 |
| archlinux-6.7 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 |
| centos-3.10 | 🟢 | 🟢 | 🟢 | 🟡 | 🟡 | 🟡 |
| centos-4.18 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 |
| centos-5.14 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 |
| fedora-5.17 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 |
| fedora-5.8 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | |
| fedora-6.2 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 |
| oraclelinux-3.10 | 🟢 | 🟢 | 🟢 | 🟡 | 🟡 | 🟡 |
| oraclelinux-4.14 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟡 |
| oraclelinux-5.15 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 |
| oraclelinux-5.4 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟡 |
| ubuntu-4.15 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟡 |
| ubuntu-5.8 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟡 |
| ubuntu-6.5 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 |

New failure spotted for modern_ebpf on amazonlinux 5.10 and fedora 5.8 :/ Not much info though. Msg:

non-zero return code

Err:

 (22)

FedeDP avatar Oct 17 '24 07:10 FedeDP

For what concerns the verifier failures, using a vanilla kernel 5.8 I can see the following error on this branch :thinking:

libbpf: prog 'recvmmsg_x': BPF program load failed: Invalid argument
libbpf: prog 'recvmmsg_x': -- BEGIN PROG LOAD LOG --
unrecognized bpf_ld_imm64 insn
processed 0 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
-- END PROG LOAD LOG --
libbpf: prog 'recvmmsg_x': failed to load: -22
libbpf: failed to load object 'bpf_probe'
libbpf: failed to load BPF skeleton 'bpf_probe': -22
 (22)
libpman: failed to load BPF object (errno: 22 | message: Invalid argument)

I used a default Ubuntu debug kernel, not the Fedora one in our CI, but the error is probably the same. If you use virtme-ng, vng -r v5.8 is enough to obtain a repro.

Andreagit97 avatar Nov 07 '24 15:11 Andreagit97

For what concerns the verifier failures, using a vanilla kernel 5.8 I can see the following error on this branch 🤔

libbpf: prog 'recvmmsg_x': BPF program load failed: Invalid argument
libbpf: prog 'recvmmsg_x': -- BEGIN PROG LOAD LOG --
unrecognized bpf_ld_imm64 insn
processed 0 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
-- END PROG LOAD LOG --
libbpf: prog 'recvmmsg_x': failed to load: -22
libbpf: failed to load object 'bpf_probe'
libbpf: failed to load BPF skeleton 'bpf_probe': -22
 (22)
libpman: failed to load BPF object (errno: 22 | message: Invalid argument)

I used a default Ubuntu debug kernel, not the Fedora one in our CI, but the error is probably the same. If you use virtme-ng, vng -r v5.8 is enough to obtain a repro.

I've seen this error on some older kernels before, but I couldn't figure out how to fix it. I think it's related to the fact that the callbacks used in bpf_loop are not inlined: the regular for loop attempts to make the call, except that wasn't possible on 5.8 because that instruction was not implemented. I'm not really sure how we could work around this.

Molter73 avatar Nov 07 '24 16:11 Molter73

Uhm, if it is related to the legacy loop, maybe we could replace it with a chain of tail calls plus a per-cpu map to save the state. The maximum number of tail calls is 32, but in the end this is the same limit we have with the legacy loop, so we should be fine.

Andreagit97 avatar Nov 07 '24 19:11 Andreagit97

Uhm, if it is related to the legacy loop, maybe we could replace it with a chain of tail calls plus a per-cpu map to save the state. The maximum number of tail calls is 32, but in the end this is the same limit we have with the legacy loop, so we should be fine.

It's not the legacy loop per se; it's the fact that bpf_loop requires a callback, so the handle_exit function cannot be inlined. If we turn it into a tail call chain but still want to re-use the handle_exit method, the verifier will still complain. The alternative would be to completely duplicate the code, but that's just painful.

I could maybe abuse macros: put all the code from handle_exit into one and then use the macro in the legacy loop and in handle_exit... but that would physically hurt me.

Molter73 avatar Nov 08 '24 09:11 Molter73

Uhm, got it. What about something like this?

static __always_inline long handle_exit__inline(uint32_t index, void *ctx) {
	/* shared body */
	return 0;
}

static long handle_exit(uint32_t index, void *ctx) {
	return handle_exit__inline(index, ctx);
}

From the legacy loop we can call the inlined one directly.

Andreagit97 avatar Nov 08 '24 11:11 Andreagit97

Uhm, got it. What about something like this?

static __always_inline long handle_exit__inline(uint32_t index, void *ctx) {
	/* shared body */
	return 0;
}

static long handle_exit(uint32_t index, void *ctx) {
	return handle_exit__inline(index, ctx);
}

From the legacy loop we can call the inlined one directly.

I'll try it

Molter73 avatar Nov 08 '24 16:11 Molter73

Uhm, got it. What about something like this?

static __always_inline long handle_exit__inline(uint32_t index, void *ctx) {
	/* shared body */
	return 0;
}

static long handle_exit(uint32_t index, void *ctx) {
	return handle_exit__inline(index, ctx);
}

From the legacy loop we can call the inlined one directly.

I'll try it

Tried this and now I'm getting this error on a 6.11 kernel:

* Configure modern BPF probe tests!
* Using buffer dim: 8388608
libbpf: prog 'recvmmsg_x': BPF program load failed: Permission denied
libbpf: prog 'recvmmsg_x': -- BEGIN PROG LOAD LOG --
combined stack size of 2 calls is 656. Too large
processed 5830 insns (limit 1000000) max_states_per_insn 4 total_states 250 peak_states 250 mark_read 57
-- END PROG LOAD LOG --
libbpf: prog 'recvmmsg_x': failed to load: -13
libbpf: failed to load object 'bpf_probe'
libbpf: failed to load BPF skeleton 'bpf_probe': -13
libpman: failed to load BPF object (errno: 13 | message: Permission denied)Unable to open the engine:

Molter73 avatar Nov 13 '24 16:11 Molter73

Uhm, so we moved the issue from 5.8 to 6.11... I need to check, but this "combined stack size of 2 calls is 656. Too large" sounds like the verifier thinks it's possible to have tail calls plus a call to a static function that is not inlined. Have you tried with an if/else?

if(bpf_core_enum_value_exists(enum bpf_func_id, BPF_FUNC_loop)) {
	uint32_t nr_loops = ret < 1024 ? ret : 1024;
	bpf_loop(nr_loops, handle_exit, &data, 0);
	return 0;
} else {
	// Tail calls
}

This way, if BPF_FUNC_loop is defined, the verifier should prune the else branch, considering it dead code.

Andreagit97 avatar Nov 15 '24 14:11 Andreagit97

it's possible to have tail calls plus a call to a static function that is not inlined

This is possible; the only catch is that the allowed stack size is reduced to 256 bytes, I think. But it will still not work on 5.8, because the ld_imm64 instruction is not implemented, preventing the call from happening. I can try to test it, but implementing this same logic with tail calls sounds like a nightmare to me.

Molter73 avatar Nov 18 '24 11:11 Molter73