
Consider performance improvements

ndmitchell opened this issue 5 years ago • 10 comments

We at Digital Asset took your work and made a bunch of performance improvements to the Decode module. The code is at https://github.com/digital-asset/daml/blob/master/nix/third-party/proto3-wire/src/Proto3/Wire/Decode.hs and the approximate list of changes I've been able to find is:

  • Switch to the Applicative instance for proto3-wire
  • Use List instead of Map for RawMessage decoding
  • More INLINE pragmas
  • Remove the cereal dependency and do things more directly
  • Leave RawMessage a HashMap instead of converting (sketched below)
  • Get rid of Seq
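
For context, the Map/HashMap points above concern the decoder's intermediate representation. A rough sketch of the shapes involved, using made-up type names rather than the library's actual definitions:

```haskell
import qualified Data.ByteString as B
import qualified Data.HashMap.Strict as HM
import qualified Data.IntMap.Strict as IM
import           Data.Word (Word32, Word64)

-- Hypothetical stand-ins for the decoder's intermediate values; the real
-- proto3-wire types differ in detail.
data RawField
  = VarintField Word64
  | LengthDelimitedField B.ByteString
  | Fixed32Field Word32
  | Fixed64Field Word64

-- One option: group raw fields by field number in an IntMap.
type RawMessageIntMap = IM.IntMap [RawField]

-- The fork instead kept a HashMap keyed by field number, which can be
-- cheaper to build when messages are large and field numbers are sparse.
type RawMessageHashMap = HM.HashMap Int [RawField]
```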

Are you interested in taking changes that optimise the performance?

ndmitchell commented Apr 15 '19 14:04

@ndmitchell yes!

ixmatus commented Apr 15 '19 14:04

Absolutely!

Note that I think we have some divergence because we've made performance improvements to Decode as well in the meantime, some of which are the same, some of which are different. (E.g. Seq is gone too, but also we have a manually unrolled varint encoding, etc.)
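
For reference, this is the general shape of varint (LEB128) encoding that such unrolling targets; a simple recursive sketch, not the actual code from the PR:

```haskell
import           Data.Bits ((.&.), (.|.), shiftR)
import qualified Data.ByteString.Builder as BB
import           Data.Word (Word64)

-- Plain loop version of unsigned varint encoding: emit seven bits at a
-- time, least significant group first, setting the high bit of every byte
-- except the last. A manually unrolled variant special-cases each possible
-- byte count instead of recursing.
varint :: Word64 -> BB.Builder
varint n
  | n < 0x80  = BB.word8 (fromIntegral n)
  | otherwise = BB.word8 (fromIntegral (n .&. 0x7F) .|. 0x80)
                <> varint (n `shiftR` 7)
```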

See https://github.com/awakesecurity/proto3-wire/pull/46 for the changes.

So it would require thinking a bit to figure out which changes in your branch can port over and make a difference in the code as it currently stands.

gbaz commented Apr 15 '19 14:04

Did you write any benchmarks? Our benchmarks were in the context of our larger project, so they probably don't reproduce easily.

ndmitchell commented Apr 15 '19 14:04

Same for us, unfortunately.

gbaz commented Apr 15 '19 16:04

FWIW I ran the encoding benchmark I had and your version is 20% faster than ours. Yay! If you can't see anything in our patch set that looks valuable, I'll close this ticket.
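
For anyone wanting to reproduce this kind of comparison standalone, a minimal criterion sketch along these lines could work. The sample message is made up, and the Encode function names should be checked against the proto3-wire version in use:

```haskell
import           Criterion.Main (bench, defaultMain, nf)
import qualified Data.ByteString.Lazy as BL
import           Data.Int (Int64)
import qualified Proto3.Wire as Wire
import qualified Proto3.Wire.Encode as Encode

-- Hypothetical message: a repeated int64 field at field number 1.
encodeSample :: [Int64] -> BL.ByteString
encodeSample xs =
  Encode.toLazyByteString (foldMap (Encode.int64 (Wire.fieldNumber 1)) xs)

main :: IO ()
main = defaultMain
  [ bench "encode 10k int64 fields" (nf encodeSample [1 .. 10000]) ]
```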

I didn't really measure decoding performance. That used to be the bottleneck for us, and it's where we spent most of our energy optimising. Now it's a code path we don't use that often.

ndmitchell commented Apr 16 '19 20:04

Good to know that all that performance work paid off! (It was rather exhausting.) Both encoding and decoding are still pretty costly for us given the volume of data we work with, but we didn't have any great ideas for further improvements. If your team has any more bright ideas, please chime in!

As for the current patch set, the one thing we didn't really do or consider at all was the INLINE annotations. If you have benchmarks showing that they bring an improvement on the current codebase, that would be great.

gbaz commented Apr 16 '19 20:04

At some point in the past the INLINE pragmas helped. I can't say much about whether they still do.

I also note we switched to unchecked shifts. Not sure if GHC automatically optimises around that?
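
Roughly, the difference being discussed, in a toy accumulation step of the kind that appears in varint decoding, together with an INLINE pragma of the sort mentioned above (illustrative only, not the library's code):

```haskell
import Data.Bits ((.&.), (.|.), shiftL, unsafeShiftL)
import Data.Word (Word64, Word8)

-- Accumulate a varint payload byte at a given bit offset. shiftL checks
-- that the shift distance is in range; unsafeShiftL does not.
{-# INLINE stepUnchecked #-}
stepChecked, stepUnchecked :: Word64 -> Int -> Word8 -> Word64
stepChecked   acc off b = acc .|. (fromIntegral (b .&. 0x7F) `shiftL` off)
stepUnchecked acc off b = acc .|. (fromIntegral (b .&. 0x7F) `unsafeShiftL` off)
```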

We also used HashMap rather than IntMap, although that may have been because the rest of our stuff was already using it.

There is also a BS.copy in the bytes decoder that I was surprised to see survive the optimisation work.

ndmitchell commented Apr 16 '19 21:04

Maybe @j6carey can speak to the copy. I think we want it for our purposes to reduce overall retention.
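
For the retention point, roughly: a length-delimited field decoded as a slice shares the original input's buffer, so holding onto the field keeps the whole message alive; B.copy trades one extra allocation for letting the large buffer be freed. A hypothetical illustration, not the library's code:

```haskell
import qualified Data.ByteString as B

-- Pull an embedded bytes field out of a larger message buffer.
sliceField, copyField :: Int -> Int -> B.ByteString -> B.ByteString
sliceField off len input = B.take len (B.drop off input)           -- shares the parent buffer
copyField  off len input = B.copy (B.take len (B.drop off input))  -- detached copy
```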

I'm also not sure whether the unchecked shifts are an improvement or not -- we might get equivalent results simply by specializing to a single dictionary. Both are certainly worth considering...

gbaz commented Apr 17 '19 14:04

@gbaz, @ndmitchell, do you mean the copy during decoding? I have not studied decoding very much.

I think checked shifts whose shift distance is a compile-time constant get optimized to the same thing as unchecked shifts.

j6carey commented Apr 17 '19 17:04

@j6carey yes, the copy during decoding. We omitted that in our fork.

ndmitchell commented Apr 17 '19 18:04