MatlabJuliaMatrixOperationsBenchmark
V2.0: Julia 1.1.1, more accurate, Julia analyzer, MKL
Julia Main (JuliaMain-JuliaBench-JuliaBenchSIMD):
- Combined the separate function files JuliaMatrixBenchmark and JuliaMatrixBenchmarkOp into JuliaBench and JuliaBenchSIMD, respectively, for easier access
- Used @benchmark instead of the deprecated and inaccurate tic()/toq(), so each function now contains only the algorithm or operation being measured. Benchmark results may therefore not be comparable to those of previous versions.
- @benchmark is called on each function in allFunctions with its arguments interpolated into local scope (using $); the minimum run time among the benchmark samples is then chosen for each function (see the sketch after this list)
- Removed totalRunTime because it is no longer meaningful
- Renamed some variables and functions to more appropriate names
- Used transpose() instead of deprecated .'
- Used DelimitedFiles.readdlm and DelimitedFiles.writedlm instead of the deprecated readcsv and writecsv
- Removed the rounding because readdlm can convert to Int64 itself
- Changed deprecated squeeze to dropdims
- The jj for loop was changed to iterate over fun in allFunctions, so removing functions from the benchmark only requires adjusting the allFunctions array
- Added the dims= keyword (instead of a positional second argument) to minimum(), maximum(), findmin() and sum()
- Added a dummy mY argument to functions that do not use it (for consistency in the code logic)
- Replaced deprecated expm with exp
- Replaced deprecated sqrtm with sqrt
- Replaced eig with eigen
- Modified vClusterId to be compatible with Cartesian indexing
- mRunTime is stored in a mat file using the MAT package
- tRunTime is a table that holds the run-time information in an intuitive manner
- Added benchmark operation mode 0, which is for fast testing only
- Changed file and folder management
- The run times are now averaged over the different iterations (different kk)
- Added an automatic working-directory setter
- In JuliaBenchSIMD, functions that are identical to their non-SIMD versions were removed from allFunctions
- Two BLAS backends are used (MKL.jl and the default Julia BLAS)
- General debugging and performance tracing of the code
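The following is a minimal, illustrative sketch of the @benchmark pattern described in this list. The operation functions, the matrix size, and the variable names (matrixAdditionRunTime, matrixEigRunTime, vRunTime) are placeholders for illustration, not code copied from JuliaBench:

using BenchmarkTools;
using LinearAlgebra;

# Placeholder operations; in the real benchmark these live in allFunctions.
matrixAdditionRunTime(mX, mY) = mX .+ mY;
matrixEigRunTime(mX, mY)      = eigen(mX);   # mY is a dummy argument, kept for consistency

allFunctions = [matrixAdditionRunTime, matrixEigRunTime];

matrixSize = 500;
mX = randn(matrixSize, matrixSize);
mY = randn(matrixSize, matrixSize);

vRunTime = zeros(length(allFunctions));
for (ii, fun) in enumerate(allFunctions)            # Iterate over allFunctions instead of a hard coded jj loop
    benchResult  = @benchmark $fun($mX, $mY);       # $ interpolates the arguments into the benchmark's local scope
    vRunTime[ii] = minimum(benchResult.times) * 1e-9;   # Minimum sample time, converted from ns to seconds
end

# LinearAlgebra.BLAS.vendor() reports whether the default BLAS or MKL is active.
# Deprecation replacements used throughout the benchmark:
#   mX.'                 -> transpose(mX)
#   expm(mX) / sqrtm(mX) -> exp(mX) / sqrt(mX)   (matrix functions on a square Matrix)
#   eig                  -> eigen
#   squeeze(mA, 3)       -> dropdims(mA; dims = 3)
#   sum(mX, 2)           -> sum(mX; dims = 2)     (likewise for minimum, maximum, findmin)
#   readcsv / writecsv   -> DelimitedFiles.readdlm / DelimitedFiles.writedlm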
Julia Analyzer (AnalyszeRunTimeResults-AnalysisJuliaPlotter):
- Julia Analyzer added for creating plots
Matlab Main (MatlabMain-MatlabBench):
- Used a cRunTime cell array and then a tRunTime table to store the run-time data intuitively
- Changed file and folder management
- Used timeit instead of tic and toc
- The average of the medians over the different iterations (different kk) is now calculated
- Some other changes to match the respective Julia functions
Matlab Analyzer (AnalyszeRunTimeResults):
- Improved the plotting algorithm
- Used mat files instead of CSV
- Used tables
- Removed the unnecessary AnalysisInitScript; its logic was moved into the visualization adjustment
- Simplified the MATLAB AnalyszeRunTimeResults
@RoyiAvital I added the default Julia benchmark, and I have re-requested my pull.
@RoyiAvital If you have trouble merging my pull request because it is very different from your master, run these commands to overwrite your master with my version (based on https://stackoverflow.com/questions/27449334/force-overwrite-on-master-from-a-pull-request). First take a backup of master:
git checkout -b RoyiBackup
Now replace master with the pull-request branch:
git checkout master
git fetch https://github.com/aminya/MatlabJuliaMatrixOperationsBenchmark pr-branch:pr-branch
git checkout pr-branch
git push -f origin pr-branch:master
@RoyiAvital do you have any problems that prevent you from merging?
Hi,
As I wrote, I will evaluate the PR once I have access to Julia with MKL.
Also, I wouldn't use timeit() on MATLAB as I wrote.
Anyhow, you have a great benchmark of your own. You can test it and it will be perfect.
As I wrote, I will evaluate the PR once I have access to Julia with MKL.
I have referenced your issue in my pull request to MKL.jl and in the new issue I created.
Also, I wouldn't use timeit() on MATLAB as I wrote.
I can contact MATLAB's support to get their recommendation, otherwise, I can write a custom function to replace timeit().
Anyhow, you have a great benchmark of your own. You can test it and it will be perfect.
Well, the point of me updating this repository was to replace the misleading old figures. In parallel, I am developing a Julia package that provides native Julia functions replicating MATLAB's functions (such as for image processing), and I plan to add benchmarks of those to this repository as well. If you think you cannot evaluate all the upcoming benchmarks, please give me write access, or I will have to think about de-forking my repository.
I don't find my figures misleading. They were accurate to the data I received on my machine and they can be reproduced given the system configuration mentioned.
I don't want to use MKL.jl. I am waiting (It might not happen ever) for Julia with MKL out of the box.
Regarding timeit(), no need to write your own function.
On my benchmark I'd like to stay with my own measurement method of timing each iteration and having an array of timings to work on (Minimum, Maximum, Mean and Median).
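A minimal sketch of this per-iteration timing idea, written in Julia with placeholder names (timeIterations, numIterations); it is only an illustration, not the repository's actual code:

using Statistics;

# Time fun(mX) numIterations times and summarize the collected timings.
function timeIterations(fun, mX; numIterations = 10)
    vTime = zeros(numIterations);
    for ii in 1:numIterations
        startTime = time_ns();
        fun(mX);
        vTime[ii] = (time_ns() - startTime) * 1e-9;   # Seconds
    end
    return (minimum(vTime), maximum(vTime), mean(vTime), median(vTime));
end

# Example: summarize the timing of a 500 x 500 matrix multiplication.
timeIterations(mA -> mA * mA, randn(500, 500))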
The way I see it, it is great you forked my work and you can take it from there to the path you find appropriate. Keep doing it. I'm interested to see the effect of integrating MKL into Julia. I hope you'll be able to use more advanced flags of MKL as well later on.
I don't find my figures misleading. They were accurate to the data I received on my machine and they can be reproduced given the system configuration mentioned.
Yes, I am not saying the data were manipulated; sorry if I implied that. But the figures are misleading.
First, you should use log-log plots when comparing two curves that are very close to each other: right now the difference for matrix sizes below 1000 is not visible, because the linear x-axis spacing gives all the space to the large sizes (see AnalyszeRunTimeResults.m). This can be fixed easily just by replacing plot with loglog.
Second, for Julia+SIMD, only 3 functions made use of @simd and multi-threading in Julia. However, in your figures you have included Julia+SIMD results for all the functions, and they differ from the plain Julia results even for functions that did not use @simd at all! This shows that the tic()/toq() method you used for benchmarking in Julia is not accurate and has a lot of noise, even for the same function.
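For reference, a hypothetical example (not taken from the repository) of the kind of element-wise kernel where adding @simd matters:

# Hypothetical element-wise kernel; the @simd annotation allows the compiler
# to reorder and vectorize the loop.
function scaleAdd!(vOut::Vector{Float64}, vIn::Vector{Float64}, scaleFactor::Float64)
    @inbounds @simd for ii in eachindex(vIn)
        vOut[ii] += scaleFactor * vIn[ii];
    end
    return vOut;
end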
Regarding timeit(), no need to write your own function. On my benchmark I'd like to stay with my own measurement method of timing each iteration and having an array of timings to work on (Minimum, Maximum, Mean and Median).
Well, this is not a severe problem; it can be replaced easily. However, I calculated the mean over multiple iterations of running timeit.
The way I see it, it is great you forked my work and you can take it from there to the path you find appropriate. Keep doing it.
I will create another repository to ease my workflow (starting from my forked one). However, I will keep this pull request open so you can merge it later.