Switch from pcre to re2
Would an updated https://github.com/falconindy/pkgfile/pull/65 to compare to be helpful?
This looks good, but I'm concerned that there's a fairly substantial performance hit from the switch. I'm surprised, because rsc tends to gush about re2 performance, specifically compared to pcre. Are you seeing the same? re2 is pretty simple -- I don't know that there's anything we can tweak to narrow the gap.
I am also seeing the same noticeable slowdown compared to either pcre1 or pcre2.
I am just a random passerby, but out of curiosity, how can I reproduce the slowdown you are seeing?
In my initial testing, at least a 10 ms penalty comes simply from relocations when dynamically linking against re2 (it isn't visible when linking statically).
$ cat test.c
int main() { return 0; }
$ gcc test.c -O2 -o re2 -lre2
$ gcc test.c -O2 -o pcre -lpcre
$ hyperfine --warmup 10 -N ./re2 ./pcre
Benchmark 1: ./re2
Time (mean ± σ): 11.5 ms ± 0.8 ms [User: 7.3 ms, System: 3.8 ms]
Range (min … max): 10.4 ms … 14.5 ms 207 runs
Benchmark 2: ./pcre
Time (mean ± σ): 866.7 µs ± 129.8 µs [User: 328.9 µs, System: 368.8 µs]
Range (min … max): 729.4 µs … 2728.0 µs 3517 runs
$ LD_DEBUG=statistics ./re2
256691:
256691: runtime linker statistics:
256691: total startup time in dynamic loader: 5963881 cycles
256691: time needed for relocation: 3146072 cycles (52.7%)
256691: number of relocations: 4806
256691: number of relocations from cache: 1814
256691: number of relative relocations: 3
256691: time needed to load objects: 2506057 cycles (42.0%)
256691:
256691: runtime linker statistics:
256691: final number of relocations: 4806
256691: final number of relocations from cache: 1814
$ LD_DEBUG=statistics ./pcre
256648:
256648: runtime linker statistics:
256648: total startup time in dynamic loader: 256859 cycles
256648: time needed for relocation: 13765 cycles (5.3%)
256648: number of relocations: 133
256648: number of relocations from cache: 9
256648: number of relative relocations: 28
256648: time needed to load objects: 105338 cycles (41.0%)
256648:
256648: runtime linker statistics:
256648: final number of relocations: 133
256648: final number of relocations from cache: 9
Well, it's been a while, but I don't believe I was profiling anything specific -- just running a basic regex-flavored search and measuring overall walltime. And honestly, that's the metric that matters to the end user of the current implementation of pkgfile, regardless of where the walltime comes from.
I did some more profiling and I think I have a lead on where the slowdown comes from.
For a simple `pkgfile.re2 -vr zless` search, I get the following perf report output:
Children Self Command Shared Object Symbol
+ 97.85% 0.00% pkgfile.re2.mt [unknown] [.] 0x666e6f632e6e616d
+ 97.85% 0.00% pkgfile.re2.mt [unknown] [.] 0x0000000000000010
+ 97.16% 0.41% pkgfile.re2.mt pkgfile.re2.mt [.] pkgfile::Pkgfile::SearchMetafile(std::__cxx11::basic_string<char, std::char_traits<char>, std::
+ 97.04% 0.35% pkgfile.re2.mt libre2.so.11.0.0 [.] re2::RE2::PartialMatchN(std::basic_string_view<char, std::char_traits<char> >, re2::RE2 const&,
+ 96.63% 0.52% pkgfile.re2.mt libre2.so.11.0.0 [.] re2::RE2::DoMatch(std::basic_string_view<char, std::char_traits<char> >, re2::RE2::Anchor, unsi
+ 95.71% 1.56% pkgfile.re2.mt libre2.so.11.0.0 [.] re2::RE2::Match(std::basic_string_view<char, std::char_traits<char> >, unsigned long, unsigned
+ 43.85% 43.81% pkgfile.re2.mt libabsl_synchronization.so.2508.0.0 [.] absl::lts_20250814::Mutex::lock_shared()
+ 42.51% 42.51% pkgfile.re2.mt libabsl_synchronization.so.2508.0.0 [.] absl::lts_20250814::Mutex::unlock_shared()
...
If I am reading this correctly, around 80% of the time is spent in absl::lts_20250814::Mutex (un)locking.
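To double-check that this is RE2's internal locking rather than something pkgfile-specific, here is a minimal standalone sketch of what one could try (not pkgfile code; the pattern, haystack, thread count, and iteration count are arbitrary placeholders): match from several threads against one shared RE2 object and against per-thread RE2 objects, and compare.

```cpp
#include <re2/re2.h>

#include <chrono>
#include <cstdio>
#include <string>
#include <thread>
#include <vector>

// Matches `iters` times per thread. When `shared` is true, every thread
// uses the same RE2 instance; otherwise each thread compiles its own.
static double RunSeconds(bool shared, int threads, int iters) {
  const std::string pattern = "zless";                           // arbitrary
  const std::string haystack = "usr/share/man/man1/zless.1.gz";  // arbitrary
  RE2 shared_re(pattern);

  const auto start = std::chrono::steady_clock::now();
  std::vector<std::thread> workers;
  for (int t = 0; t < threads; ++t) {
    workers.emplace_back([&] {
      RE2 local_re(pattern);
      const RE2& re = shared ? shared_re : local_re;
      for (int i = 0; i < iters; ++i) {
        RE2::PartialMatch(haystack, re);
      }
    });
  }
  for (auto& w : workers) {
    w.join();
  }
  return std::chrono::duration<double>(std::chrono::steady_clock::now() - start)
      .count();
}

int main() {
  const int threads = static_cast<int>(std::thread::hardware_concurrency());
  const int iters = 1000000;
  std::printf("shared RE2 object:      %.3f s\n", RunSeconds(true, threads, iters));
  std::printf("per-thread RE2 objects: %.3f s\n", RunSeconds(false, threads, iters));
  return 0;
}
```

Something like `g++ -std=c++17 -O2 -pthread bench.cc $(pkg-config --cflags --libs re2) -o bench` should build it (exact flags depend on how re2 is packaged). If the shared variant is markedly slower, the contention really is inside RE2.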
Now, I didn't initially plan for this (I only wanted cleaner output), but when I compiled pkgfile with multithreading practically disabled, I surprisingly got a performance boost when using re2 with only one thread.
--- a/src/pkgfile.cc
+++ b/src/pkgfile.cc
@@ -240,7 +240,7 @@ int Pkgfile::SearchRepoChunks(Database::RepoChunks repo_chunks,
}
const auto num_workers =
- std::min<int>(std::thread::hardware_concurrency(), queue.size());
+ std::min<int>(1, queue.size());
std::vector<std::thread> workers;
workers.reserve(num_workers);
That might be because mutex (un)locking now takes "only" ~23% of our time.
Children Self Command Shared Object Symbol
+ 82.94% 0.00% pkgfile.re2.st [unknown] [.] 0x666e6f632e6e616d
+ 82.94% 0.00% pkgfile.re2.st [unknown] [.] 0x0000000000000010
+ 76.55% 1.62% pkgfile.re2.st pkgfile.re2.st [.] pkgfile::Pkgfile::SearchMetafile(std::__cxx11::basic_string<char, std::char_traits<char>, std::
+ 76.41% 2.66% pkgfile.re2.st libre2.so.11.0.0 [.] re2::RE2::PartialMatchN(std::basic_string_view<char, std::char_traits<char> >, re2::RE2 const&,
+ 74.58% 3.33% pkgfile.re2.st libre2.so.11.0.0 [.] re2::RE2::DoMatch(std::basic_string_view<char, std::char_traits<char> >, re2::RE2::Anchor, unsi
+ 68.80% 13.59% pkgfile.re2.st libre2.so.11.0.0 [.] re2::RE2::Match(std::basic_string_view<char, std::char_traits<char> >, unsigned long, unsigned
+ 16.14% 16.14% pkgfile.re2.st libabsl_synchronization.so.2508.0.0 [.] absl::lts_20250814::Mutex::lock_shared()
+ 11.71% 0.00% pkgfile.re2.st libre2.so.11.0.0 [.] 0x00007fb56b48ed4a
+ 7.73% 7.73% pkgfile.re2.st libabsl_synchronization.so.2508.0.0 [.] absl::lts_20250814::Mutex::unlock_shared()
...
Here are my hyperfine results (st is single-threaded and mt is multithreaded):
| Command | Mean [s] | Min [s] | Max [s] | Relative |
|---|---|---|---|---|
| `./pkgfile.pcre.st -D /tmp/pkgfileCachedir -vr zless` | 1.529 ± 0.029 | 1.498 | 1.572 | 4.01 ± 0.10 |
| `./pkgfile.re2.st -D /tmp/pkgfileCachedir -vr zless` | 1.330 ± 0.061 | 1.256 | 1.411 | 3.49 ± 0.17 |
| `./pkgfile.pcre.mt -D /tmp/pkgfileCachedir -vr zless` | 0.381 ± 0.006 | 0.370 | 0.393 | 1.00 |
| `./pkgfile.re2.mt -D /tmp/pkgfileCachedir -vr zless` | 2.146 ± 0.251 | 1.843 | 2.399 | 5.63 ± 0.67 |
Full hyperfine output
❯ hyperfine --export-markdown results.md --warmup=3 --parameter-list engine pcre,re2 --parameter-list threading st,mt './pkgfile.{engine}.{threading} -D /tmp/pkgfileCachedir -vr zless'
Benchmark 1: ./pkgfile.pcre.st -D /tmp/pkgfileCachedir -vr zless
Time (mean ± σ): 1.529 s ± 0.029 s [User: 1.444 s, System: 0.080 s]
Range (min … max): 1.498 s … 1.572 s 10 runs
Benchmark 2: ./pkgfile.re2.st -D /tmp/pkgfileCachedir -vr zless
Time (mean ± σ): 1.330 s ± 0.061 s [User: 1.241 s, System: 0.084 s]
Range (min … max): 1.256 s … 1.411 s 10 runs
Benchmark 3: ./pkgfile.pcre.mt -D /tmp/pkgfileCachedir -vr zless
Time (mean ± σ): 381.2 ms ± 6.2 ms [User: 2577.6 ms, System: 125.6 ms]
Range (min … max): 370.0 ms … 393.3 ms 10 runs
Benchmark 4: ./pkgfile.re2.mt -D /tmp/pkgfileCachedir -vr zless
Time (mean ± σ): 2.146 s ± 0.251 s [User: 16.077 s, System: 0.146 s]
Range (min … max): 1.843 s … 2.399 s 10 runs
Summary
./pkgfile.pcre.mt -D /tmp/pkgfileCachedir -vr zless ran
3.49 ± 0.17 times faster than ./pkgfile.re2.st -D /tmp/pkgfileCachedir -vr zless
4.01 ± 0.10 times faster than ./pkgfile.pcre.st -D /tmp/pkgfileCachedir -vr zless
5.63 ± 0.67 times faster than ./pkgfile.re2.mt -D /tmp/pkgfileCachedir -vr zless
I don't have the C++ skills to do this, but it might be worth testing whether giving each thread its own copy of the RE2 object helps.
Yeah, this analysis is spot on. I can confirm that if I build the RE2 object within each worker, I get performance at least equivalent to pcre (see the sketch after the list below).
This is annoying, because:
- I'd really only like to compile the RE once.
- There's no way to copy an RE2 object.
- Adding error handling within the worker pool just for this case feels wrong.
- Using the `RE2::PartialMatch(string_view, string_view)` overload is a non-starter (too damn slow, naturally)
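For what it's worth, here is one possible shape for that, as a minimal sketch rather than a patch against pkgfile (the class and member names below are made up and don't correspond to pkgfile's filter code): compile the pattern once up front so syntax errors are reported before the worker pool ever starts, then let each worker thread lazily build its own RE2 from the already-validated pattern via a `thread_local`. The trade-off is that the pattern gets compiled once per thread instead of exactly once, but no error handling leaks into the workers and no RE2 object is shared across threads.

```cpp
#include <re2/re2.h>

#include <cassert>
#include <memory>
#include <string>
#include <string_view>

// Hypothetical names; not pkgfile's actual filter API.
class RegexMatcher {
 public:
  // Compiles the pattern once to surface syntax errors to the caller,
  // before any worker thread exists.
  static std::unique_ptr<RegexMatcher> Create(const std::string& pattern,
                                              std::string* error) {
    RE2::Options options;
    options.set_log_errors(false);
    RE2 probe(pattern, options);
    if (!probe.ok()) {
      *error = probe.error();
      return nullptr;
    }
    return std::unique_ptr<RegexMatcher>(new RegexMatcher(pattern, options));
  }

  // Called from each worker thread. A thread_local RE2 avoids sharing one
  // object (and its internal cache mutex) across threads, while compiling
  // at most once per thread.
  bool Matches(std::string_view line) const {
    thread_local std::unique_ptr<RE2> re;
    thread_local const RegexMatcher* owner = nullptr;
    if (owner != this) {
      re = std::make_unique<RE2>(pattern_, options_);
      owner = this;
    }
    // The pattern was validated in Create(), so this cannot fail here.
    assert(re->ok());
    return RE2::PartialMatch(line, *re);
  }

 private:
  RegexMatcher(std::string pattern, RE2::Options options)
      : pattern_(std::move(pattern)), options_(options) {}

  const std::string pattern_;
  const RE2::Options options_;
};
```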