
memo: improvement list

ashWhiteHat opened this issue · 14 comments

I noted down an improvement list so as not to forget.

  • [x] Refactoring
    • [x] Migrate ff to assembly
  • [ ] Hardcode
    • [ ] FFT twiddle factors
    • [ ] FFT omega and omega inverse
  • [x] Reuse
    • [x] FFT twiddle factors
    • [x] Bit-reverse order

Thank you, Onur, for the ideas!

ashWhiteHat · Dec 03 '21

Can we move this to the halo2 repo?

barryWhiteHat · Dec 03 '21

Yes!

ashWhiteHat · Dec 03 '21

@NoCtrlZ I posted some CPU profiling graphs here: https://github.com/zcash/halo2/issues/418#issuecomment-989568648. It seems Domain::rotate_extended costs a lot of time. Maybe another big issue we need to solve later.

lispc · Dec 09 '21

@lispc Thank you for the nice feedback! I added it to the description.

ashWhiteHat · Dec 10 '21

Some of my initial ideas after looking at the code a bit, and things I plan to look into further and implement. I will have to implement some of these to know how big a difference they make and whether they are worth it.

Most of the time is spent evaluating the expressions, in the FFTs, and in the multiexps. Shameless plug for my post about my libsnark optimizations, but much of what is said there should also be applicable here.

FFT

  • Make a recursive version like https://github.com/Loopring/libfqfft/commit/1d8db42efacf764dc7acacdcc53030115259fc26
  • Often an FFT is done and then an extra factor/operation is applied. It's better to
    • apply the additional computation within the FFT for cache reasons (otherwise everything has to be loaded into cache again); see the sketch after this list
    • precompute it into the twiddles etc. where possible
  • Often the FFT is done where a lot (like 95%) of the last values are 0. This could probably be exploited for some moderate gains.
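
Here is a rough, toy sketch of the "fold the extra factor into a pass that already touches the data" idea (the p = 257 field and the function names are made up for the example, not halo2 code). By linearity of the FFT, multiplying coefficient i by a factor (e.g. a coset power zeta^i) before the butterflies gives the same result as a separate distribute-powers pass, so it can be applied during the bit-reversal permutation instead of in an extra walk over the whole array:

```rust
// Toy field: u64 arithmetic mod 257, purely so the example is self-contained.
const P: u64 = 257;

fn mul(a: u64, b: u64) -> u64 { a * b % P }
fn add(a: u64, b: u64) -> u64 { (a + b) % P }
fn sub(a: u64, b: u64) -> u64 { (a + P - b) % P }

fn pow(mut b: u64, mut e: u64) -> u64 {
    let mut acc = 1;
    while e > 0 {
        if e & 1 == 1 { acc = mul(acc, b); }
        b = mul(b, b);
        e >>= 1;
    }
    acc
}

/// In-place radix-2 FFT of `a` (length a power of two) at the n-th root of
/// unity `omega`. `factors[i]` multiplies coefficient i and is applied while
/// the bit-reversal permutation already has the element in hand, instead of
/// in an extra pass over the whole array beforehand.
fn fft_with_factors(a: &mut [u64], omega: u64, factors: &[u64]) {
    let n = a.len();
    let log_n = n.trailing_zeros();

    // Bit-reversal permutation fused with the per-element factor.
    for i in 0..n {
        let j = (i as u32).reverse_bits() as usize >> (32 - log_n);
        if i < j {
            let ai = mul(a[i], factors[i]);
            let aj = mul(a[j], factors[j]);
            a[i] = aj;
            a[j] = ai;
        } else if i == j {
            a[i] = mul(a[i], factors[i]);
        }
    }

    // Standard iterative Cooley-Tukey butterflies.
    let mut half = 1;
    while half < n {
        // Stage twiddle; real code would read a precomputed, reusable table.
        let w_step = pow(omega, (n / (2 * half)) as u64);
        for start in (0..n).step_by(2 * half) {
            let mut w = 1;
            for j in 0..half {
                let t = mul(w, a[start + j + half]);
                let u = a[start + j];
                a[start + j] = add(u, t);
                a[start + j + half] = sub(u, t);
                w = mul(w, w_step);
            }
        }
        half *= 2;
    }
}
```

With all factors equal to 1 this is just a plain radix-2 FFT; for example, for n = 8 over this toy field, omega = 64 is a primitive 8th root of unity mod 257.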

Multi exp

  • Make it work more like https://github.com/Loopring/libff/commit/5e87e58d6636125052030ad8a2fb93677e327bcd (a sketch of the underlying bucket method follows this list)
  • Experiment with the c value; I previously found 16 to work great for some reason, but of course this may be circuit dependent
  • Add prefetching, which should make a nice difference with little work
  • Add density maps when not doing the multiexp over the cosets, which should help there
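
For context, a minimal sketch of the standard Pippenger bucket method that these multiexp optimizations build on. It uses u128 wrapping addition as a stand-in group so the sketch is self-contained; real code would use curve points (plus signed digits/wNAF, prefetching, density maps, etc.), and the function name is made up:

```rust
/// Computes sum_i scalars[i] * points[i] in the stand-in group (u128, +),
/// using c-bit windows and one bucket per non-zero digit value.
fn pippenger(scalars: &[u64], points: &[u128], c: usize) -> u128 {
    assert!(c >= 1 && c < 64, "c is the window width in bits, e.g. 4..=20");
    let num_windows = (64 + c - 1) / c;
    let mut window_sums = vec![0u128; num_windows];

    for (w, window_sum) in window_sums.iter_mut().enumerate() {
        // One bucket per possible non-zero c-bit digit.
        let mut buckets = vec![0u128; (1usize << c) - 1];
        for (s, p) in scalars.iter().zip(points) {
            let digit = ((s >> (w * c)) & ((1u64 << c) - 1)) as usize;
            if digit != 0 {
                buckets[digit - 1] = buckets[digit - 1].wrapping_add(*p);
            }
        }
        // Running-sum trick: computes sum_d d * bucket_d with ~2 * 2^c adds.
        let mut running = 0u128;
        for b in buckets.iter().rev() {
            running = running.wrapping_add(*b);
            *window_sum = window_sum.wrapping_add(running);
        }
    }

    // Horner combination: result = sum_w 2^(w*c) * window_sums[w].
    let mut acc = 0u128;
    for window_sum in window_sums.iter().rev() {
        for _ in 0..c {
            acc = acc.wrapping_add(acc); // "doubling" in the stand-in group
        }
        acc = acc.wrapping_add(*window_sum);
    }
    acc
}
```

For example, `pippenger(&[2, 3], &[10, 20], 4)` returns 80 = 2*10 + 3*20. The c value controls the bucket-count vs. window-count trade-off, which is the parameter mentioned above.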

Evaluations

  • Probably the part that will need the most experimentation, because these big polynomials are pretty unique to the zkEVM I think.
  • The current approach is to parallelize on the math operation inside the expression, but I think this may not be the best approach on the CPU.
    • I don't think it's cache efficient, and it needs larger temporary buffers.
    • It does not allow reusing intermediate results efficiently.
    • It doesn't directly allow efficient early-out scenarios (which I think may be a big downside in the zkEVM with how the circuits work).
  • With how the zkEVM expressions work, lots of parts are not used when a specific opcode is run, so I think this may open up skipping over parts of the expression in most cases. A naive but general way to exploit this could be to change a bit how multiplication works: first evaluate the cheapest term, and only when that term isn't zero evaluate the more expensive term (see the sketch after this list). Of course, if this ends up working, much better schemes can be found (e.g. the prover checks which opcode is being executed and evaluates only the necessary parts efficiently). Also, only one row in every STEP_HEIGHT is actually used to constrain values (I assume a lot of these expressions will be 0, but I'm not sure about that), so we may be able to skip a majority of the rows if that's the case. This works for the normal evaluations but not for the ones in the extended domain over cosets (which is the majority of the prover time).
  • Most time is spent on operations over the same extended domain; it may or may not always be necessary to extend the domain that much (there is already an issue open for this, so I will not focus on it).
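
A minimal sketch of that "cheapest term first" product evaluation, with a made-up Expr type and a toy u64 field rather than halo2's actual Expression and field types:

```rust
enum Expr {
    Const(u64),
    Cell(usize), // index into the row's assigned values
    Sum(Box<Expr>, Box<Expr>),
    Product(Box<Expr>, Box<Expr>),
}

impl Expr {
    /// Rough node count, used to decide which factor of a product to try first.
    fn cost(&self) -> usize {
        match self {
            Expr::Const(_) | Expr::Cell(_) => 1,
            Expr::Sum(a, b) | Expr::Product(a, b) => 1 + a.cost() + b.cost(),
        }
    }

    /// Evaluate at one row over GF(p), short-circuiting products on zero.
    fn eval(&self, row: &[u64], p: u64) -> u64 {
        match self {
            Expr::Const(c) => c % p,
            Expr::Cell(i) => row[*i] % p,
            Expr::Sum(a, b) => (a.eval(row, p) + b.eval(row, p)) % p,
            Expr::Product(a, b) => {
                // Try the cheaper side first; if it is zero (e.g. a selector
                // that is off for this opcode/row), the expensive side is
                // never touched.
                let (cheap, expensive) =
                    if a.cost() <= b.cost() { (a, b) } else { (b, a) };
                let lhs = cheap.eval(row, p);
                if lhs == 0 {
                    0
                } else {
                    ((lhs as u128 * expensive.eval(row, p) as u128) % p as u128) as u64
                }
            }
        }
    }
}
```

In practice the cost would be computed once per expression rather than at every row; the short-circuit on a zero factor is the interesting part.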

Will have to test against actual transactions to see whether some of these assumptions hold; currently most values are 0 because the bench circuit isn't filled with actual transactions, so I don't want to base too much on the current bench.

Current status: the biggest gains are from reusing intermediate results across expressions. This also impacts memory use a lot because of the temporary buffers, so I'm optimizing both at the same time. It is very easy to reduce memory by 50%-60% with practically no changes (not counting the memory savings from not using rotate_extended for the evaluations, which are also potentially very high). Optimizations across expressions are up to 5x faster than just hardcoding the expressions (which was already 3x faster). Currently the h(x) poly is generated largely on the fly, which is fast and memory efficient! (There is even more room when trading in some performance for even further memory use reduction; how big that impact is I don't know yet.)

Some stuff I quickly tested a bit to get more of a feeling for how this behaves:

Misc

  • We don't need any zero knowledge. For Groth16 that meant some significant prover work wasn't necessary; I'm not sure if the same is true in PLONK.
  • A lot of memory allocations are being done in the prover; it's better to reuse buffers where possible (see the sketch after this list).
  • Because there are many FFTs/multiexps, it may be interesting to see what the best parallelization strategy is (inside an FFT/multiexp or just at the FFT/multiexp level), but I think that may only really matter for smaller circuits, because otherwise there should be enough work per thread anyway.
  • Faster field math is always better of course (but that is already being done). :)
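
A tiny sketch of the buffer-reuse point (the Scratch type and its methods are made up for the example): keep one allocation alive across prover steps and hand out zeroed slices from it, instead of allocating a fresh vector inside every FFT/evaluation helper.

```rust
/// A reusable scratch allocation; `get` only reallocates when the requested
/// length exceeds the capacity already reserved by earlier calls.
struct Scratch {
    buf: Vec<u64>,
}

impl Scratch {
    fn new() -> Self {
        Scratch { buf: Vec::new() }
    }

    /// Return a zeroed buffer of length `n`, reusing the existing allocation
    /// whenever its capacity is already big enough.
    fn get(&mut self, n: usize) -> &mut [u64] {
        self.buf.clear();
        self.buf.resize(n, 0);
        &mut self.buf
    }
}
```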

Brechtpd · Dec 17 '21

@Brechtpd Thank you! Let me check it out; I am going to work on that 😎

ashWhiteHat · Dec 17 '21

Two important areas to optimize on the circuits side are

  1. The max degree, which greatly impacts prover performance because a lot of calculations need to be done in the extended domain, and the size of the extended domain depends on the max degree (currently the extended domain is 16x bigger).
  2. The number of lookups, which are heavier on the prover than the custom gates.

We currently have a good idea of how to greatly reduce 2) by making use of the intermediate rows (the rows between the step height): have each row do a limited number of lookups, and in the custom gates use those cells to verify the data we would normally look up.

Because a lookup adds 3 degrees to its input expression, this indirection, where a lookup always has a simple table cell as its input, also means we have a lower degree requirement (because the input degree is always 1).

For the custom gates it should always be possible to get the degree as low as needed by making use of intermediate cells (which I guess could even be done automatically), so the theoretical minimum degree we can achieve is strictly the one imposed by the lookups.
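
As a small made-up illustration (the columns a..g, the intermediate cell t, and the selector q are hypothetical, and each advice/selector column is counted as degree 1), splitting a wide product across an intermediate cell trades one high-degree gate for two lower-degree ones:

```latex
% One wide gate:
%   q \cdot (a\,b\,c\,d\,e\,f - g) = 0          % degree 7
%
% Split with an intermediate advice cell t:
%   q \cdot (t - a\,b\,c) = 0                   % degree 4
%   q \cdot (t\,d\,e\,f - g) = 0                % degree 5
%
% Repeating the split keeps lowering the maximum gate degree, until the floor
% set by the lookup argument is reached.
```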

The current minimum degree for lookups (between advice cells) is 4. This would put the lower limit on the extended domain at 4x the normal size. However, I think it may be possible to lower this to 3 by removing the extra terms required for zero knowledge (which we don't need for the zkEVM anyway); a rough degree count is sketched after the links below:

  • https://zcash.github.io/halo2/design/proving-system/lookup.html#technique-description
  • https://github.com/appliedzkp/halo2/blob/a970e5752ed67929b27aaaab894c00e3365cfd72/src/plonk/lookup.rs#L25-L71
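
For reference, here is my rough degree count from the product rule on the linked lookup page, counting each committed column as degree 1 (please double-check this against the book):

```latex
% Lookup product rule (as on the linked page), with compressed input A(X),
% table S(X), their permuted versions A'(X), S'(X), and grand product Z(X):
%
%   \big(1 - (l_{\text{last}}(X) + l_{\text{blind}}(X))\big) \cdot
%   \Big( Z(\omega X)\,(A'(X) + \beta)\,(S'(X) + \gamma)
%       - Z(X)\,(A(X) + \beta)\,(S(X) + \gamma) \Big) = 0
%
% Degree:  1        (l_last + l_blind factor)
%        + 1        (Z)
%        + deg(A)   (compressed input expression)
%        + 1        (S)
%        = deg(A) + 3,  i.e. 4 when deg(A) = 1.
% Dropping the (l_last + l_blind) factor gives deg(A) + 2 = 3.
```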

Without zero knowledge I believe we can drop the (l_last + l_blind) term, which lowers the required degree by 1. Can anyone help confirm whether this is correct? I would think this wouldn't have any impact on anything else, but I don't know much about the math so I'm not sure.

If this is true, the theoretical lowest degree needed is just 3, and so the extended domain would only be 2x the size of the normal domain; otherwise it's 4x, which is still pretty good I guess. :) Adding a selector to the lookups also increases the degree, so if we need the degree to be 3 we would need dedicated cells that do the lookups on each row without exception (which shouldn't be a problem).

Brechtpd · Jan 14 '22

> Without zero knowledge I believe we can drop the (l_last + l_blind) term, which lowers the required degree by 1. Can anyone help confirm whether this is correct?

I think it won't break anything, but I would like to hear @therealyingtong's thoughts.

> Adding a selector to the lookups also increases the degree, so if we need the degree to be 3 we would need dedicated cells that do the lookups on each row without exception (which shouldn't be a problem).

Ideally we don't need to always shuffle the input and table in the same grand product, IIUC; we can shuffle the input and table separately with their own grand products to take the degree down to 2. If we really need a selector on the input to make the layout easier, then I think this is a possible solution.

han0110 · Jan 15 '22

Upstream halo2 introduced an AST for polynomials and performance improved (details: https://github.com/zcash/halo2/pull/447#issuecomment-1016947456). We may need to merge the upstream later.

lispc · Jan 20 '22

Introducing assembly was done in https://github.com/appliedzkp/pairing/pull/5. It improved proving and setup time slightly, and CPU usage dramatically, as shown here: https://github.com/appliedzkp/pairing/pull/5#issue-1109232627

ashWhiteHat · Jan 25 '22

Assembly Optimization In The Future

The goff implementation will be a useful reference.

ashWhiteHat · Jan 26 '22

> Upstream halo2 introduced an AST for polynomials and performance improved (details: https://github.com/zcash/halo2/pull/447#issuecomment-1016947456). We may need to merge the upstream later.

Rebasing and introducing assembly reduced prover time by 72.02%. https://github.com/appliedzkp/zkevm-circuits/pull/302#issue-830046561

ashWhiteHat · Feb 08 '22

Optimizing poly evaluation and the FFT reduced prover time by 75.6%. https://github.com/appliedzkp/zkevm-circuits/pull/396#issuecomment-1096423824

ashWhiteHat · Apr 12 '22

Category

  • [ ] FFT
    • [ ] radix-4
  • [ ] poly arithmetic
    • [ ] use the join utility
  • [ ] evaluation
  • [ ] multi exponentiation
    • [ ] Pippenger and wNAF
  • [ ] omit Montgomery reduction
  • [ ] pairing
    • [ ] affine mul

Memo

  • [ ] utilize the chunk number derived from the task size and thread number (see the sketch below)
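
A small sketch of that memo item (not halo2's actual multicore helper; the names are made up): derive the chunk size from the task size and thread count so every thread gets roughly equal work, instead of using a fixed chunk size.

```rust
/// Ceiling division of the task size by the thread count, with a floor of 1
/// so tiny tasks still make progress. Assumes num_threads >= 1.
fn chunk_size(task_size: usize, num_threads: usize) -> usize {
    std::cmp::max(1, (task_size + num_threads - 1) / num_threads)
}

/// Apply `f` to every element, splitting the slice into one chunk per thread.
fn parallel_for_each<T: Send, F: Fn(&mut T) + Sync>(data: &mut [T], num_threads: usize, f: F) {
    let chunk = chunk_size(data.len(), num_threads);
    let f = &f;
    std::thread::scope(|s| {
        for part in data.chunks_mut(chunk) {
            s.spawn(move || part.iter_mut().for_each(f));
        }
    });
}
```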

ashWhiteHat · Apr 25 '22

Most of these improvements now apply to primitives that are implemented in halo2curves, so this issue is no longer relevant.

davidnevadoc · Jun 11 '24