
optimized memcpy

Open dweiller opened this issue 1 year ago • 22 comments

This PR does a few things to improve memcpy performance:

  • branch for small copy lengths
  • when source and dest have the same address modulo suggested vector size, long copies are done via vectors suggested by std.simd.suggestVectorLength(u8)
  • when source and dest have different addresses modulo suggested vector size, the load is done from an aligned vector and the store to an align(1) vector

It is also possible to use aligned vector loads and stores in the misaligned case, but I have not found a nice way to make the compiler do it (for x86_64), and would rather not do it with inline asm. The simplest way to do it (using @shuffle) does improve performance further; however, it causes massive bloat in the memcpy function (producing >12x more code[^1] than without it on my avx2 machine) due to the need for a comptime mask.
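
As a rough illustration (a sketch only; the names, the alignment fix-up, and the tail handling here are invented, not taken from the PR), the aligned-load / align(1)-store path from the list above boils down to a loop like this:

const std = @import("std");

const vec_len = std.simd.suggestVectorLength(u8) orelse @sizeOf(usize);
const Vec = @Vector(vec_len, u8);

// Sketch: src has already been advanced to a Vec-aligned address; dest in
// general has not, so each block is loaded through an aligned pointer and
// stored through an align(1) pointer.
fn copyMisalignedBlocks(noalias dest: [*]u8, noalias src: [*]const u8, blocks: usize) void {
    const aligned_src: [*]const Vec = @alignCast(@ptrCast(src));
    for (0..blocks) |i| {
        const out: *align(1) Vec = @ptrCast(dest[i * vec_len ..][0..vec_len]);
        out.* = aligned_src[i];
    }
}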

Other stuff that can be done (either before merging or as follow up issues):

  • [ ] check performance on other architectures (I don't have access to such a machine, so someone else would need to do this)
  • [ ] check performance with more restrictive sets of vector instructions available
  • [ ] see if there is a reasonable way to do aligned vector moves for misaligned case (doubtful without inline asm)
  • [ ] benchmark copies with random sizes

but these (in particular the first and third) may not need to be blockers.

Here is a graph of the performance of master and 9a821382fd in a microbenchmark. The benchmark times @memcpy for a specific length and alignment (mod 32) of source and destination 100,000 times in a loop; this is done for all 256 combinations of source and destination alignment, and the average time per iteration across all alignment combinations is the reported result.

[graph: avg-0-512-100_000-2 - average @memcpy time per iteration, master vs. 9a821382fd]

We probably want to focus on optimizing small memcpy lengths, as small copies seem to be the most frequent (some reported distributions can be found here and in this Google paper, whose distributions LLVM seems to use for benchmarking). For small lengths, the above graph indicates a modest to significant (up to around 3x) improvement, depending on the length.

For code size of memcpy:

optimize mode   master 103b885fc6 (B)   8dcabdd08f (B)
Debug           143                     164
ReleaseSafe     251                     259
ReleaseFast     251                     259
ReleaseSmall    32                      27

Note that I have only checked the performance of this change on my (x86_64, avx2) machine - it would be good to check on other architectures as well if someone with such a machine would like to test it. The micro benchmark used for the graph above can be found here (it's the 'average' one). In addition, this benchmark will have the branch predictors trained for the size of the copy - it would be good to benchmark random sizes as well.

[^1]: In ReleaseFast, using @shuffle to do aligned vector loads and stores in the misaligned case made memcpy 10204 bytes.

dweiller avatar Feb 13 '24 09:02 dweiller

Nice work! Could you share the benchmark script? I am really curious to see how those aligned loads/stores are emitted.

Rexicon226 avatar Feb 13 '24 09:02 Rexicon226

Nice work! Could you share the benchmark script? I am really curious to see how those aligned loads/stores are emitted.

The benchmark script is just a pretty simple loop copying from one buffer to the other a bunch of times for each source alignment offset. I haven't done anything clever in the micro benchmark. The reason aligned vector moves are emitted is the use of @ptrCast to get pointers of type [*]CopyType/[*]const CopyType in lib/compiler_rt/memcpy.zig.
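
For reference, the cast in question looks roughly like this (a sketch only, not the exact code in lib/compiler_rt/memcpy.zig; CopyType here stands in for the suggested u8 vector):

const std = @import("std");

const CopyType = @Vector(std.simd.suggestVectorLength(u8) orelse @sizeOf(usize), u8);

// Sketch: once both pointers sit on a CopyType-aligned boundary (the
// same-alignment case), viewing them as [*]CopyType / [*]const CopyType is
// what lets LLVM emit aligned vector loads and stores for the block loop.
fn copyAlignedBlocks(noalias dest: [*]u8, noalias src: [*]const u8, blocks: usize) void {
    const d: [*]CopyType = @alignCast(@ptrCast(dest));
    const s: [*]const CopyType = @alignCast(@ptrCast(src));
    for (0..blocks) |i| d[i] = s[i];
}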

Here's the micro benchmark (doesn't look like I can colour it inside the fold):
pub fn main() !void {
    const allocator = std.heap.page_allocator;
    const args = try std.process.argsAlloc(allocator);
    defer allocator.free(args);
    if (args.len > 3 or args.len < 2) return error.BadArgs;

    const iterations = try std.fmt.parseInt(usize, args[1], 10);

    const copy_len = if (args.len >= 3)
        try std.fmt.parseInt(usize, args[2], 10)
    else
        std.mem.page_size;

    std.debug.print("copying blocks of size {d}\n", .{std.fmt.fmtIntSizeBin(copy_len)});

    const alignment = if (std.simd.suggestVectorLength(u8)) |vec_size|
        @alignOf(@Type(.{ .Vector = .{ .child = u8, .len = vec_size } }))
    else
        @alignOf(usize);

    var times: [alignment]u64 = undefined;

    // preheat
    try iteration(allocator, alignment, copy_len, alignment - 2, &times[0], iterations);

    for (0..alignment) |s_offset| {
        try iteration(allocator, alignment, copy_len, s_offset, &times[s_offset], iterations);
        std.debug.print(
            "s_offset: {d}, time: {}\n",
            .{
                s_offset,
                std.fmt.fmtDuration(times[s_offset]),
            },
        );
    }

    var avg: u128 = 0;
    for (times) |t| {
        avg += t;
    }
    avg /= times.len;

    const final: u64 = @intCast(avg);

    const throughput: u64 = @intCast((@as(u128, copy_len) * iterations * std.time.ns_per_s) / final);

    std.debug.print("average time: {}, throughput: {d:.2}\n", .{
        std.fmt.fmtDuration(final),
        std.fmt.fmtIntSizeBin(throughput),
    });
}

inline fn iteration(
    allocator: Allocator,
    comptime alignment: comptime_int,
    copy_len: usize,
    s_offset: usize,
    p_time: *u64,
    iterations: usize,
) !void {
    const src = try allocator.alignedAlloc(u8, alignment, copy_len + s_offset);
    defer allocator.free(src);

    const dest = try allocator.alignedAlloc(u8, alignment, copy_len);
    defer allocator.free(dest);

    var timer = try std.time.Timer.start();

    var i: usize = 0;
    while (i < iterations) : (i += 1) {
        @memcpy(dest[0..copy_len], src[s_offset..][0..copy_len]);
    }

    const time = timer.read();

    p_time.* = time;
}

const std = @import("std");
const Allocator = std.mem.Allocator;

dweiller avatar Feb 13 '24 09:02 dweiller

Interesting. I have attempted to reproduce this and I see 2, maybe 3 issues.

  1. This generates only like 1 pair of aligned loads, when you could be fitting in way more. See: https://zig.godbolt.org/z/YxEPvssMa

  2. I simply cannot replicate your results using the benchmark script you provided. On my AVX2 machine, I see a consistent 30% worse perf on your function. I did these measurements by simply compiling a version of LLVM Zig with both compiler_rt versions. Furthermore, I wrote my own benchmark script, and also cannot replicate it. However, there I was measuring how many cycles it takes to copy X bytes. Very comparable results to the benchmark script you provided. Note: all data collected was normalized and the mean was found with a rolling average of 50 data points over a 10kb total range.

  3. If you remember our discussion about this before, my personal memcpy function was faster, and that still seems to be the case. Please do not interpret the preceding sentence as criticism or ill intent; I am simply stating the facts I observe. I believe it is because my version contains more aligned vector loads squeezed in and does a better job at managing pre-fetching.

I recommend you use something like:

dest[0..v].* = @as(*const @Vector(v, u8), @alignCast(@ptrCast(aligned_src[offset..][0..v]))).*;

to generate your aligned vector loads, 😉

Rexicon226 avatar Feb 13 '24 10:02 Rexicon226

1. This generates only like 1 pair of aligned loads, when you could be fitting in way more. See: https://zig.godbolt.org/z/YxEPvssMa

3. ... I believe it is because my version contains more aligned vector loads squeezed in and does a better job at managing pre-fetching.

I am not convinced this is the case. It is trivial to produce a longer sequence of aligned move operations by unrolling the loop - though this does necessitate extra cleanup code - and when I did this in the past it did not have much impact on performance. Unless I get benchmark results clearly showing a win that seems generalizable, I wouldn't want to do it; forcing this sort of unrolling seems more likely to trip up the optimizer and may only improve performance on the machine you run the benchmark on while degrading it on other machines.

2. I simply cannot replicate your results using the benchmark script you provided.

If you let me know precisely how you compiled/ran it I will try on my machine as well. Particular things of interest would be how many iterations you did, the copy length, target cpu features, and optimize mode.

2. I see a consistent 30% worse perf on your function. I did these measurements by simply compiling a version of LLVM Zig with both compiler_rt versions.

Can you be more specific about what you did when you saw worse performance relative to master so I can investigate? I have also used compiling the zig compiler itself as a test and saw a minor improvement in compile time (though I'm not sure it was outside measurement uncertainty).

2. Furthermore, I wrote my own benchmark script, and also cannot replicate it. However, there I was measuring how many cycles it takes to copy X bytes. Very comparable results to the benchmark script you provided.

Can you post your benchmark? I'm not sure what you mean by "...and also cannot replicate it" along with "Very comparable results to the benchmark script you provided." - those read as contradictory statements to me, so I must be misinterpreting something.

dweiller avatar Feb 13 '24 12:02 dweiller

Looks like this somehow broke a bunch of non-x86_64 targets in the linux CI. I wonder if on those targets LLVM is making memcpy produce a call to itself...

dweiller avatar Feb 13 '24 12:02 dweiller

I recommend you use something like:

dest[0..v].* = @as(*const @Vector(v, u8), @alignCast(@ptrCast(aligned_src[offset..][0..v]))).*;

to generate your aligned vector loads, 😉

I assume you mean @as(*@Vector(v, u8), @alignCast(@ptrCast(dest[0..v]))).* = ...? Otherwise that is going to produce an unaligned store. Using this particular snippet to perform the cast (casting the dest as well) doesn't make a difference.
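
Spelled out side by side (with the elided right-hand sides filled in here purely for illustration), the two forms under discussion are:

const std = @import("std");
const v = std.simd.suggestVectorLength(u8) orelse 16;

// Illustration only; the right-hand sides are filled in by hand here.
fn bothForms(dest: []u8, aligned_src: []const u8, offset: usize) void {
    // aligned load, align(1) store (what the quoted snippet generates):
    dest[0..v].* = @as(*const @Vector(v, u8), @alignCast(@ptrCast(aligned_src[offset..][0..v]))).*;
    // align(1) load, aligned store (the corrected form above):
    @as(*@Vector(v, u8), @alignCast(@ptrCast(dest[0..v]))).* = aligned_src[offset..][0..v].*;
}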

dweiller avatar Feb 13 '24 13:02 dweiller

I wonder if on those targets LLVM is making memcpy produce a call to itself...

It looks like this is the issue, at least for wasm32-wasi. I checked a test.wasm file produced by zig build test with wasm2wat and memcpy was indeed calling itself. It's going to be pretty tedious if we have to rewrite memcpy for each target according to whether LLVM does this, or else find a way to write it that avoids the problem on all targets at once.

If anyone knows of a way to trace wasm code produced back to source lines, similar to objdump --disassemble -S that would be helpful for working around this.

dweiller avatar Feb 13 '24 13:02 dweiller

I suspect zig’s memcpy could get a whole lot faster than 30%-ish. I think it’s hard to rule out measurement noise without a complete benchmark snippet for others to run, and without testing on larger amounts of data. There must already be comprehensive memcpy benchmarks online somewhere you could copy from.

Jarred-Sumner avatar Feb 13 '24 15:02 Jarred-Sumner

I suspect zig’s memcpy could get a whole lot faster than 30%-ish.

For small sizes this is certainly true, but for large copies I'm a bit skeptical, at least without using inline asm with something like rep movsb, and that opens up a bit of a can of worms.

dweiller avatar Feb 14 '24 03:02 dweiller

I've made a bunch of improvements yielding both better performance and smaller code size. The table in the top post has been updated.

Edit: Not sure what the problem with riscv64-linux is in the CI - I've disassembled it and memcpy is not making a recursive call.

dweiller avatar Feb 14 '24 06:02 dweiller

Edit: Not sure what the problem with riscv64-linux is in the CI - I've disassembled it and memcpy is not making a recursive call.

This is wrong: recursive jalr instructions are generated for the 16 <= copy_len < 32 case, to try to do a copy of length 16.
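
For context, the shape that gets lowered back into a memcpy call is just a fixed-size block copy (an illustrative function, not the PR's actual helper):

// On riscv64 (and wasm32) LLVM turns this fixed 16-byte block copy back
// into a call to memcpy, which is recursive when it appears inside memcpy.
fn copy16(noalias dst: [*]u8, noalias src: [*]const u8) void {
    dst[0..16].* = src[0..16].*;
}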

dweiller avatar Feb 15 '24 04:02 dweiller

The windows CI failure is CMake Error: CMake was unable to find a build program corresponding to "Ninja". CMAKE_MAKE_PROGRAM is not set. You probably need to select a different build tool. Not sure why that is happening.

dweiller avatar Feb 19 '24 05:02 dweiller

@dweiller I just ran into that in a different PR as well, seems to be a fluke.

RossComputerGuy avatar Feb 19 '24 05:02 RossComputerGuy

The windows CI failure seems to be the compiler running out of memory - not sure why this would happen, it shouldn't be caused by this PR.

dweiller avatar Mar 15 '24 23:03 dweiller

A decent amount of the code could be simplified using comptime into something like:

// shared (dst, src)

copy(blk, offsets):
  for i in offsets: 
    dst[i..][0..blk].* = src[i..][0..blk].*

memcpy(len):
  if len == 0: return
  if len < 4: return copy(1, {0, len/2, len-1})

  v = max(4, suggestVectorSize(u8) or 0)
  inline for n in ctz(4)..ctz(v):
    if len <= 1 << (n+1): // @expect(false)
      return copy(1<<n, {0, len - (1<<n)})
      
  for i in 0..(len / v): copy(v, .{ i * v }) // 4-way auto-vec
  copy(v, .{ len - v })
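
If it helps, a rough Zig rendering of that sketch could look like the following (illustrative only; the names and exact loop bounds are mine, not from the PR or the pseudocode author):

const std = @import("std");

// Copies a comptime-known block size at each of a comptime-known number of
// (possibly overlapping) runtime offsets.
inline fn copy(comptime blk: usize, noalias dst: [*]u8, noalias src: [*]const u8, offsets: anytype) void {
    inline for (offsets) |offset| {
        dst[offset..][0..blk].* = src[offset..][0..blk].*;
    }
}

fn memcpySketch(noalias dst: [*]u8, noalias src: [*]const u8, len: usize) void {
    if (len == 0) return;
    if (len < 4) return copy(1, dst, src, .{ 0, len / 2, len - 1 });

    const v = comptime @max(4, std.simd.suggestVectorLength(u8) orelse 0);
    comptime var n = 2; // log2(4)
    inline while ((1 << n) < v) : (n += 1) {
        if (len <= (1 << (n + 1))) {
            return copy(1 << n, dst, src, .{ 0, len - (1 << n) });
        }
    }

    // one v-sized block per iteration; the tail is handled by an overlapping copy
    var i: usize = 0;
    while (i < len / v) : (i += 1) copy(v, dst, src, .{i * v});
    copy(v, dst, src, .{len - v});
}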

But I think (at a higher level) we should go all the way: Facebook's folly memcpy is heavily optimized but also serves as their memmove since it's so similar. For sizes which can't be handled by a single copy above (anything after the inline for), they detect if src/dst overlap and switch to something similar but copied backwards.

Interestingly, they never use aligned loads since it doesn't seem worth it. Copy-forward does both the load and store unaligned, but copy-backward uses aligned stores with @prefetch(dst_to_write, .{ .rw = .write, .locality = 3 }). They also switch to rep movsb (it's like a built-in CPU memcpy) if the conditions align for it to be fast (len >= 1024, pointers don't overlap, ERMSB cpuid enabled).

The general strategy of having memcpy & memmove be the same thing would be nice (perf- & maintenance-wise). Fast memset is also the same as fast memcpy, but with loads of src replaced by a pre-@splat block/vector/register of the same byte.
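
As a quick illustration of that last point (assumed names, not code from this PR), the vector part of such a memset is the same store loop with the load replaced by a splatted byte:

const std = @import("std");

const vec_len = std.simd.suggestVectorLength(u8) orelse @sizeOf(usize);
const Vec = @Vector(vec_len, u8);

// Sketch: splat the byte once, then reuse the memcpy-style block store.
fn memsetVec(dest: []u8, value: u8) void {
    const block: Vec = @splat(value);
    var i: usize = 0;
    while (i + vec_len <= dest.len) : (i += vec_len) {
        const out: *align(1) Vec = @ptrCast(dest[i..][0..vec_len]);
        out.* = block;
    }
    while (i < dest.len) : (i += 1) dest[i] = value;
}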

kprotty avatar May 10 '24 00:05 kprotty

A decent amount of the code could be simplified using comptime into something like:

This looks pretty nice, I'll try it out.

But I think (at a higher level) we should go all the way: Facebook's folly memcpy is heavily optimized but also serves as their memmove since it's so similar. For sizes which cant be handled by a single copy above (anything after the inline for), they detect if src/dst overlap and switch to something similar but copied backwards.

Our memcpy assumes that src/dst don't overlap, so a change like that would be a (non-breaking, but significant) change in semantics that would affect performance.

Interestingly. they never use aligned loads since it doesn't seem worth it. Copy-forward does both load/store unaligned, but copy-backward uses aligned stores with @prefetch(dst_to_write, .{ .rw = .write, .locality = 3 }).

On my machine, IIRC, the aligned ops did affect performance, but this would also be something machine-dependent. I have seen that the current wisdom seems to be that modern x86_64 doesn't really care (though I haven't seen when this started being the case), but what about other platforms? At least for a general algorithm, I would think we should use aligned ops, and if/when we want to individually optimise different platforms we could use unaligned ops. I did also try unrolling with prefetch, but didn't want to over-optimise for my machine - I can't recall how much difference it made for me.

They also switch to rep movsb (it's like builtin CPU memcpy) if the conditions align for it to be fast (len >= 1024, pointers dont overlap, ERMSB cpuid enabled).

I did some research into rep movsb and read that the benefit and breakpoint for when it's beneficial is highly machine dependent, so I thought I'd leave using it for a future enhancement.

Fast memset is also the same as fast memcpy but replace loads of src with a pre-@splat block/vector/register of the same byte.

I do actually have a memset branch locally as well - I could include it in this PR, but I haven't put as much work into it yet.

dweiller avatar May 10 '24 01:05 dweiller

Our memcpy assumes that src/dst don't overlap, so a change like that would be a (non-breaking, but significant) change in semantics that would affect performance.

I rather mean to keep memcpy, the function with noalias and such, but have it just call memmove internally. This should keep the noalias optimizations applied by the compiler around memcpy, but keep the vector-based / branch-optimized version at runtime.

On my machine, IIRC, the aligned ops did affect performance

Was this in relation to the results above? I think the benchmark could report throughput in cycles/byte rather than ns. This is something used in benchmarks like smhasher; hash functions do similar reading/branching optimizations and take a small amount of time for short sizes. Combined with not clflushing dst/src and not pinning to a CPU core, I'd assume ns-based timing fluctuates more even with high iterations.

I only mention this as aligned loads didn't seem to have an effect on other benchmarks, so I'm trying to discover/rationalize the difference in the results. To be fair, I also haven't tested on anything outside avx2, avx512f, and apple_m1.

rep movsb and read that the benefit and breakpoint for when it's beneficial is highly machine dependent

Indeed, it looks like it depends on the specific micro-architecture (being finicky on zen2+). It seemed to be the same speed as the normal vectorized loop, at least on a 5600x and a 6900HS.

I do actually have a memset branch locally as well - I could include it in this PR

Yea a separate PR would be best. Just wanted to mention their similarity.

kprotty avatar May 10 '24 02:05 kprotty

I rather mean to keep memcpy, the function with noalias and such, but have it just call memmove internally. This should keep the noalias optimizations applied by compiler replacing memcpy, but keep the vector-based / branch-optimized version at runtime.

Ah, okay - I understand now.

On my machine, IIRC, the aligned ops did affect performance

Was this in relation to the results above? I think the benchmark could report throughput in cycles/byte rather than ns. This is something used in benchmarks like smhasher; hash functions do similar reading/branching opts and take a small amount of time for short sizes. Combine this with not clflushing dst/src and not cpu-core pinning, I'd assume ns-based time fluctuates more even with high iterations.

I think the benchmark has changed since I tested (and the memcpy function has quite a bit too) - I'll re-check. Unless I'm missing something, I would be inclined to keep the aligned strategy until/unless we diverge the implementation based on whether a target has slow unaligned access. I expect the impact on systems that don't care to be minimal (one extra branch and one extra vector copy, only on large copies, which are rare, plus the effects of the increased code size), while systems that do have slower unaligned accesses would pay a cost proportional to the length of a long copy.

I can have the benchmark report cycles/byte, and do things like core-pinning and disabling frequency scaling next time I work on improving the benchmarking, but I think adding other benchmarks will probably take priority.

I only mention this as aligned loads didn't seem to have an effect on other benchmarks so trying to somehow discover/rationalize the difference in the results. TBF, also haven't tested on anything outside avx2, avx512f, and apple_m1.

One reason could be the architecture - looking at Agner Fog's tables, my machine (Zen 1) has worse latencies/throughputs on unaligned instructions (at least for integer ones; I can't remember at the moment whether integer or float instructions were generated).

dweiller avatar May 10 '24 03:05 dweiller

Before merging it would be good to:

It looks like this is not ready for merging then? Can you close it and open a new one when it's ready for review & merging? Otherwise we can use the issue tracker to discuss ideas, plans, strategies, etc.

andrewrk avatar May 10 '24 20:05 andrewrk

Before merging it would be good to:

It looks like this is not ready for merging then? Can you close it and open a new one when it's ready for review & merging? Otherwise we can use the issue tracker to discuss ideas, plans, strategies, etc.

I've just done some cleanup of commits and assuming it passes CI, I'm happy for it to be merged. The unchecked boxes are all things that are potential future work and could be broken out into new issues.

I would say the real question is what level of effort you want to see put into benchmarking before merging. I can certainly do more on my own - write more synthetic benchmarks, do some proper statistics, check some real-world impacts more carefully (I did see a small benefit in the zig compiler using the c backend, but that was a while ago and I don't remember the details) - but I think any serious benchmarking effort would require help from others (in particular to check on more machines/architectures) and may be better done as part of gotta-go-fast.

dweiller avatar May 11 '24 05:05 dweiller

Looking at the table above again - would it be better to not include the optimisations done in this PR on ReleaseSmall? I'll double check the numbers are still accurate since the llvm-18 upgrade, but assuming they are, is that too many extra bytes for ReleaseSmall?

dweiller avatar May 11 '24 08:05 dweiller

Looking at the table above again - would it be better to not include the optimisations done in this PR on ReleaseSmall? I'll double check the numbers are still accurate since the llvm-18 upgrade, but assuming they are, is that too many extra bytes for ReleaseSmall?

The code size has changed minimally except for Debug, which now (presumably due to llvm-18) produces much more code (1745 bytes, as opposed to 143 bytes). In light of this, I think it makes sense to revert to a simpler implementation like the original for Debug and ReleaseSmall.

dweiller avatar May 30 '24 17:05 dweiller