MLStyle.jl
About benchmarks
I've already made a comparison between MacroTools.jl and MLStyle.jl; the latter seems to be 3 times faster on the dev branch with 1/5 the space cost, and 5 times faster on the pattern-to-inline-function branch, also with 1/5 the space cost (a bit less than dev).
The test snippet (see "benchmark.jl" in the root directory) comes from the README of MacroTools.jl, and the gap could be much larger as the cases become more complex.
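For readers who want to reproduce this kind of comparison, here is a minimal sketch in the spirit of the snippet described above. It is not the actual `benchmark.jl`; the function names are illustrative, and it assumes BenchmarkTools.jl, MacroTools.jl, and MLStyle.jl are installed. Both functions destructure the same call expression, one with MacroTools' `@capture` and one with MLStyle's `@match`:

```julia
# Hypothetical micro-benchmark sketch (names are illustrative, not from benchmark.jl).
using BenchmarkTools
using MacroTools: @capture
using MLStyle: @match

ex = :(f(x, y, z))

# MacroTools: destructure a call expression with @capture.
capture_fn(e) = @capture(e, g_(args__)) ? (g, args) : nothing

# MLStyle: the same destructuring via an Expr pattern in @match.
match_fn(e) = @match e begin
    Expr(:call, g, args...) => (g, collect(args))
    _                       => nothing
end

# Time both on the same input.
@btime capture_fn($ex)
@btime match_fn($ex)
```

Running `@btime` on each function gives a rough per-call timing; a fuller experiment would vary the pattern complexity and input size.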
I'm planning to run a thorough benchmark experiment to give a rigorous evaluation of the performance gains of MLStyle.jl.
Maybe putting some benchmarks as plots in the documentation would be nice?
Clever, that'll look like big news.
@Roger-luo How about the current stage? I mean the kind of aggressive plots in the README...
One thing I'd be curious to see is how it compares to languages with built-in support for pattern matching, on a pattern-match-heavy workload. Take something like Jon Harrop's benchmark here: http://flyingfrogblog.blogspot.com/2017/12/does-reference-counting-really-use-less_26.html
That would give a good order-of-magnitude estimate. That benchmark is also technically a benchmark of GC performance (with no cheating via pools for non-GCed languages, unlike the shootout). But the GC overhead for Julia when tracing a big tree should be less than the refcounting overhead for Swift, if Julia's GC is reasonably well implemented.
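To make the "pattern-match-heavy workload over a big tree" concrete, here is a hedged sketch of what such a workload could look like in MLStyle. It is an illustrative stand-in, not Harrop's actual benchmark: it defines an algebraic data type with `@data`, builds a complete binary tree, and folds it with `@match`, which exercises both allocation/GC and pattern dispatch:

```julia
# Illustrative sketch of a pattern-match-heavy tree workload (not the linked benchmark).
using MLStyle

# An algebraic data type for binary trees, via MLStyle's @data.
@data Tree begin
    Leaf(Int)
    Node(Tree, Tree)
end

# Build a complete binary tree of depth n (2^n leaves).
build(n) = n == 0 ? Leaf(1) : Node(build(n - 1), build(n - 1))

# Fold the tree with pattern matching at every node.
total(t) = @match t begin
    Leaf(v)    => v
    Node(l, r) => total(l) + total(r)
end

total(build(10))  # → 1024, one per leaf
```

Scaling the depth up makes the tree large enough that GC behavior, not just match dispatch, dominates the timing, which is the regime the comment above is describing.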
@saolof Thanks for your comments! I'm on your side, but I won't be making benchmark comparisons with other languages for now, because it's extremely time-consuming.
There are so many items I have to handle, like documentation, tutorials, and new features, so benchmark comparisons are a lower priority.
Moved to v0.3 due to the array performance issues.