
Parallelize compute_chunks_operands

Open · sragss opened this issue 10 months ago · 8 comments

For a 64-core machine at a cycle count of ~16M, Jolt spends ~1.8% of its time in a segment called compute_chunks_operands here.

This segment allocates and computes C chunks for each instruction. For example, for the EQ instruction we split the input operands X and Y into C = 4 chunks of WORD_SIZE / C = 8 bits each, and EQ can then be computed over each chunk individually.

Idea for acceleration: split chunks_x and chunks_y into mutable slices, iterate over them in parallel, and write the computed values directly into the slice indices; see the sketch below.
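
Roughly the pattern I have in mind, using rayon. The function names, buffer layout, and chunk ordering here are illustrative rather than Jolt's actual types: the point is that each instruction owns a disjoint C-element window of the flattened output buffers, so the parallel writes never alias and need no synchronization.

use rayon::prelude::*;

const C: usize = 4;
const WORD_SIZE: usize = 32;
const CHUNK_BITS: usize = WORD_SIZE / C; // 8 bits per chunk

// Decompose one operand into C chunks of CHUNK_BITS each
// (most-significant chunk first; the ordering is just for illustration).
fn chunk_operand(x: u64) -> [u64; C] {
    let mask = (1u64 << CHUNK_BITS) - 1;
    core::array::from_fn(|i| (x >> (CHUNK_BITS * (C - 1 - i))) & mask)
}

// Fill preallocated, flattened chunk buffers in parallel. Each instruction owns a
// disjoint C-element window of chunks_x / chunks_y, so the writes never overlap.
fn compute_chunks_operands(ops: &[(u64, u64)], chunks_x: &mut [u64], chunks_y: &mut [u64]) {
    chunks_x
        .par_chunks_mut(C)
        .zip(chunks_y.par_chunks_mut(C))
        .zip(ops.par_iter())
        .for_each(|((cx, cy), &(x, y))| {
            cx.copy_from_slice(&chunk_operand(x));
            cy.copy_from_slice(&chunk_operand(y));
        });
}

fn main() {
    let ops = vec![(0xDEADBEEFu64, 0xDEADBEEF), (0x0000FFFF, 0x12345678)];
    let mut chunks_x = vec![0u64; ops.len() * C];
    let mut chunks_y = vec![0u64; ops.len() * C];
    compute_chunks_operands(&ops, &mut chunks_x, &mut chunks_y);
    // EQ can then be evaluated chunk-by-chunk: chunks_x[i] == chunks_y[i].
    assert_eq!(&chunks_x[..C], &[0xDE, 0xAD, 0xBE, 0xEF]);
}

The idea is to allocate once up front and hand out non-overlapping mutable windows to the parallel iterator, rather than collecting per-instruction Vecs.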

It may be helpful to review the tracing strategy for performance testing.

sragss · Apr 15 '24 18:04

Looks like the else branch of the loop can be removed as well: https://github.com/a16z/jolt/blob/main/jolt-core/src/jolt/vm/mod.rs#L533

sragss · Apr 16 '24 16:04

Hey @sragss, I'm open to taking on this issue.

githubsands · Apr 16 '24 17:04

Please! Feel free to ask questions here.

sragss · Apr 16 '24 17:04

@githubsands any update on this?

moodlezoup · May 08 '24 18:05

Is this issue still open?

lognorman20 · Jun 25 '24 08:06

Hi @sragss, I'm having difficulty reproducing the 1.8% (I'm only getting 0.09%). Right now I run

cargo run -p jolt-core --release -- trace --name sha2-chain --format chrome --pcs hyrax

and then get stats in Perfetto using this SQL:

select
  name, dur, total_dur, (dur * 100. / total_dur) as dur_pct
from slices
cross join (
  select dur as total_dur
  from slices
  where name = 'Example_E2E'
)
where name = 'compute_chunks_operands';

name | dur | total_dur | dur_pct
-- | -- | -- | --
compute_chunks_operands | 470260292 | 496995499417 | 0.09462063389942933

So this only comes out to 0.09%. I'm running on an 8-core M1 with a cycle count of 242, so I had a couple of questions:

  • What specific command did you use to generate your benchmarks?
  • Is there a better way to use perfetto for checking how much time is spent in each segment?

codercody · Jul 25 '24 16:07

A cycle count of 242 is too small to get an idea of relative asymptotic performance. Can you try a bigger example, maybe in the 128k-512k range?

Also, I thought sha2-chain was around 3M cycles; did you modify it to fit in a smaller amount of RAM?

sragss · Jul 26 '24 00:07

Sorry, silly mistake. When I looked up how to find the cycle count, what it actually pointed me to was the battery cycle count 🤦 I didn't realize you were referring to the trace length, which is 3,632,556 in my case. I ran it on the original sha2-chain.

But re: the questions above, what are the steps for reproducing your benchmarks? Or do you have any idea why I might be getting a much lower percentage?

codercody · Jul 26 '24 14:07

Hi, can I take on this issue?

mahmudsudo · Oct 24 '24 00:10