
Investigate compilation performance using large-records

Open brendanhay opened this issue 2 years ago • 2 comments

I've been meaning to try out large-records and collect some data regarding comparative compilation performance.

The large-records library itself has an unfortunate dependency on haskell-src-exts, which I wouldn't want Amazonka proper to depend on, but the crux of that blog post is exactly why compiling amazonka-ec2 (prior to the type-per-module split) has the reputation it does: it emits gigabytes(!) of Core.

It'd be nice to test/confirm this. If the gains are significant, it might be worth investigating either trimming down large-records or emitting our own vector-backed records.
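
For illustration only, a minimal sketch of what "vector-backed records" means here (the `Instance` type and its fields are hypothetical, not Amazonka code): all fields are stored in a single boxed vector and accessors index into it with `unsafeCoerce`, so GHC never has to desugar an n-field constructor and the generated Core stays roughly linear in the number of fields.

```haskell
module VectorRecordSketch where

import Data.Vector (Vector, (!))
import qualified Data.Vector as V
import GHC.Exts (Any)
import Unsafe.Coerce (unsafeCoerce)

-- Hypothetical two-field record, backed by an untyped vector.
newtype Instance = Instance (Vector Any)

mkInstance :: String -> Int -> Instance
mkInstance instId coreCount =
  Instance (V.fromList [unsafeCoerce instId, unsafeCoerce coreCount])

-- Accessors recover the field types by position.
instanceId :: Instance -> String
instanceId (Instance v) = unsafeCoerce (v ! 0)

coreCount :: Instance -> Int
coreCount (Instance v) = unsafeCoerce (v ! 1)
```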

brendanhay avatar Nov 28 '21 09:11 brendanhay

I had a brief look at this. The biggest problem I see is that large-records uses its own Generic class, and while there's a lens module in large-records, it'd be a pretty grim user experience to force everyone downstream to write their own adapters from large-records lenses into their preferred lens library.
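
To show the shape of the glue code I mean, here's a minimal sketch (the `adaptLens` name and the getter/setter pair are hypothetical, not the large-records API): turning a record library's getter/setter vocabulary back into a standard van Laarhoven lens that works with lens/microlens.

```haskell
{-# LANGUAGE RankNTypes #-}
module LensAdapterSketch where

-- Van Laarhoven lens, the same shape as Control.Lens.Lens'.
type Lens' s a = forall f. Functor f => (a -> f a) -> s -> f s

-- Hypothetical adapter every downstream consumer would end up writing:
-- build a standard lens from a getter and a setter.
adaptLens :: (s -> a) -> (s -> a -> s) -> Lens' s a
adaptLens get set f s = set s <$> f (get s)
```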

endgame avatar Dec 05 '21 23:12 endgame

A possible alternative may be generics-sop, which also has its own Generic class, but anything with a GHC Generic instance automatically gets one. If this PR ever gets merged, then the in-memory representation will be the same as large-records' (Vector (f Any)). Not sure if the work that makes large-records compile fast has been done for that package though, and it may not actually meet the need to produce sane Core.
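
For reference, a minimal sketch of how generics-sop piggybacks on GHC's Generic (the `Tag` record is a made-up stand-in for a generated Amazonka type): an empty instance declaration is enough, since the default methods go via GHC.Generics.

```haskell
{-# LANGUAGE DeriveGeneric #-}
module SopSketch where

import qualified GHC.Generics as GHC
import Generics.SOP

-- Hypothetical record standing in for a generated Amazonka type.
data Tag = Tag
  { tagKey   :: String
  , tagValue :: String
  } deriving (Show, GHC.Generic)

instance Generic Tag          -- from/to filled in via GHC.Generics defaults
instance HasDatatypeInfo Tag  -- constructor/field metadata, also defaulted

-- 'from' yields the sum-of-products representation of the record.
example :: SOP I (Code Tag)
example = from (Tag "Name" "web-1")
```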

ghost avatar Dec 17 '21 03:12 ghost