feat: extend intrinsic `matmul`
This PR attempts to address #931
An interface `stdlib_matmul` is created and extended to handle 3 to 5 matrices (it works for integer, real, and complex arrays).
API
A = stdlib_matmul(B, C, D, E, F)
A = stdlib_matmul(B, C, D, E)
A = stdlib_matmul(B, C, D)
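For illustration, a minimal usage sketch (the host module name and the shapes are assumptions, not part of the PR):

    ! Hypothetical usage sketch; adjust the module name to wherever the PR exports the interface.
    program demo_chain
        use stdlib_kinds, only: dp
        use stdlib_intrinsics, only: stdlib_matmul   ! assumed host module
        implicit none
        real(dp) :: b(20, 5), c(5, 30), d(30, 8)
        real(dp), allocatable :: a(:,:)

        call random_number(b); call random_number(c); call random_number(d)
        a = stdlib_matmul(b, c, d)   ! evaluated in the cheapest parenthesization
        print *, shape(a)            ! expected: 20 8
    end program demo_chain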
The algorithm for optimal parenthesization is the one given in "Introduction to Algorithms" by Cormen et al., 4th edition, chapter 14, section 14.2.
NumPy's `linalg.multi_dot` uses the same algorithm.
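For readers without the book at hand, here is a minimal, self-contained sketch of that dynamic program (illustrative names and conventions, not the PR's exact code): for n matrices, p(1:n+1) holds the dimensions so that matrix i is p(i) x p(i+1), and s(i,j) records the optimal split point of the sub-chain i..j.

    ! Illustrative sketch of the CLRS matrix-chain-order dynamic program.
    ! For n matrices, p(1:n+1) holds the dimensions: matrix i is p(i) x p(i+1).
    ! s(i,j) stores the optimal split index k for the product of matrices i..j.
    pure function chain_order(p) result(s)
        use, intrinsic :: iso_fortran_env, only: int64
        integer, intent(in) :: p(:)
        integer :: s(size(p)-1, size(p)-1)
        integer(int64) :: m(size(p)-1, size(p)-1), q
        integer :: n, l, i, j, k

        n = size(p) - 1        ! number of matrices in the chain
        m = 0
        s = 0
        do l = 2, n            ! l = length of the sub-chain
            do i = 1, n - l + 1
                j = i + l - 1
                m(i, j) = huge(q)
                do k = i, j - 1
                    q = m(i, k) + m(k + 1, j) + int(p(i), int64) * p(k + 1) * p(j + 1)
                    if (q < m(i, j)) then
                        m(i, j) = q
                        s(i, j) = k
                    end if
                end do
            end do
        end do
    end function chain_order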
Although @jalvesz had suggested using `gemm` for multiplying the component matrices, this uses the intrinsic `matmul` for now; I can make that change if deemed appropriate once the main implementation has been given a green signal.
I have added a very basic example to play around with, and I will be adding the detailed docs, specs and tests once everybody approves of the major implementation.
I am not really happy with some parts of the code, like computing the size of each matrix individually. If anyone has suggestions on that, or a cleaner way of implementing some of the other parts (perhaps some fypp magic), please do let me know.
Notes
- I think another interesting enhancement would be: if the first array is 1-D, treat it as a row vector, and if the last array is 1-D, treat it as a column vector, just as NumPy does (a small sketch follows).
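If that enhancement is pursued, the promotion itself could be as simple as a reshape; a hedged sketch of the idea (names and values are illustrative):

    ! Illustrative only: promoting a rank-1 array the way numpy.linalg.multi_dot does.
    program promote_1d
        implicit none
        real :: v(4) = [1., 2., 3., 4.]
        real, allocatable :: row(:,:), col(:,:)

        row = reshape(v, [1, size(v)])   ! first argument 1-D -> 1 x n row vector
        col = reshape(v, [size(v), 1])   ! last argument 1-D  -> n x 1 column vector
        print *, shape(row), shape(col)  ! prints: 1 4    4 1
    end program promote_1d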
Here are a few ideas for consideration:
- Regarding the use of `gemm` or plain `matmul`: both gfortran and Intel (ifort/ifx) can replace the intrinsic `matmul` with `gemm` via compiler flags. Worth reading https://stackoverflow.com/questions/31494176/will-fortrans-matmul-make-use-of-mkl-if-i-include-the-library and https://community.intel.com/t5/Intel-Fortran-Compiler/qopt-matmul-with-mkl-sequential/td-p/1003110 , so it might not be that harmful after all to implement it using `matmul`.
- Regarding the signature of the key functions, you could consider:
pure function matmul_chain_order(p) result(s)
    integer, intent(in) :: p(:)
    integer :: s(1:size(p)-1, 2:size(p))
    integer :: m(1:size(p), 1:size(p))
    integer :: l, i, j, k, q, n

    n = size(p)
    ...
end function matmul_chain_order

pure module function stdlib_matmul(m1, m2, m3, m4, m5) result(e)
    real, intent(in) :: m1(:,:), m2(:,:)
    real, intent(in), optional :: m3(:,:), m4(:,:), m5(:,:) !> from the 3rd matrix they can be all optional
    real, allocatable :: e(:,:)
    integer :: p(5), i, num_present
    integer :: s(3,2:4)

    p(1) = size(m1, 1)
    p(2) = size(m2, 1)
    num_present = 2
    if (present(m3)) then
        p(3) = size(m3, 1)
        num_present = num_present + 1
    end if
    if (present(m4)) then
        p(4) = size(m4, 1)
        num_present = num_present + 1
    end if
    if (present(m5)) then
        p(5) = size(m5, 2)
        num_present = num_present + 1
    end if

    s = matmul_chain_order(p(1:num_present))
    ...
end function stdlib_matmul
For the procedure computing the orders you only need the `p` array; the dimension can be obtained from it. One might argue that this induces extra time, but I think it will most likely be negligible.
For the main procedure, you could consider using optional arguments from the 3rd matrix onwards and then manage the rest within the same procedure without recursion.
Regarding the example, I would consider it much more interesting to set up a non-trivial example: one where the optimal ordering is different from the naive sequence.
Just some food-for-thought.
Thank you very much @jalvesz for your detailed remarks, they are quite helpful.
I have refactored the algorithm to only accept p, as n can very well be calculated from that.
Regarding the signature I am a bit confused: if we don't use recursion we would have to handle 14 cases (the 4th Catalan number) when the number of matrices is 5, which would become quite messy quickly. For 3 matrices it is much more efficient to compare the two ordering costs and dispatch accordingly (NumPy claims this is about 15 times more efficient than computing an `s` matrix for it), and the 4-matrix case is also somewhat fine considering there are only 5 cases. But I am not sure how to handle the 5-matrix case without handling all the cases explicitly (or maybe we could just limit this to 4 matrices?); maybe I am missing something here.
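To make the 3-matrix comparison concrete, here is a hedged sketch of that dispatch (names are illustrative, not the PR's actual code):

    ! Illustrative 3-matrix dispatch: a is (n1 x n2), b is (n2 x n3), c is (n3 x n4);
    ! pick whichever of the two orderings needs fewer scalar multiplications.
    pure function matmul3(a, b, c) result(d)
        use, intrinsic :: iso_fortran_env, only: int64
        real, intent(in) :: a(:,:), b(:,:), c(:,:)
        real, allocatable :: d(:,:)
        integer(int64) :: n1, n2, n3, n4, cost_ab_c, cost_a_bc

        n1 = size(a, 1); n2 = size(a, 2); n3 = size(b, 2); n4 = size(c, 2)
        cost_ab_c = n1*n2*n3 + n1*n3*n4   ! cost of (a*b)*c
        cost_a_bc = n2*n3*n4 + n1*n2*n4   ! cost of a*(b*c)
        if (cost_ab_c <= cost_a_bc) then
            d = matmul(matmul(a, b), c)
        else
            d = matmul(a, matmul(b, c))
        end if
    end function matmul3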
And yes I will add some non-trivial examples for sure.
Looking forward to hearing your thoughts.
Indeed! Another idea:
For 3 and 4 matrices, make them internal procedures, passing as an extra argument the ordering slice corresponding to those matrices. For the one public procedure with 2 required + 3 optional arguments, you compute the ordering just once and call the internal versions for 3 or 4 matrices depending on the case.
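A hedged skeleton of that structure, shown for the 3-matrix branch only (illustrative; it assumes the chain_order sketch from earlier in the thread is available in the same module, and a 4-matrix worker would follow the same pattern):

    ! Public routine computes the ordering once and hands the relevant slice to an
    ! internal worker; names and details are illustrative only.
    pure function chained_matmul3(m1, m2, m3) result(res)
        real, intent(in) :: m1(:,:), m2(:,:), m3(:,:)
        real, allocatable :: res(:,:)
        integer, allocatable :: s(:,:)

        s = chain_order([size(m1,1), size(m2,1), size(m3,1), size(m3,2)])
        res = work3(m1, m2, m3, s)
    contains
        pure function work3(a, b, c, s3) result(d)
            real, intent(in) :: a(:,:), b(:,:), c(:,:)
            integer, intent(in) :: s3(:,:)      ! ordering slice for this sub-chain
            real, allocatable :: d(:,:)
            if (s3(1, 3) == 1) then             ! optimal split after the 1st matrix: a*(b*c)
                d = matmul(a, matmul(b, c))
            else                                ! optimal split after the 2nd matrix: (a*b)*c
                d = matmul(matmul(a, b), c)
            end if
        end function work3
    end function chained_matmul3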
Thank you @jalvesz, I have implemented them accordingly and added some better examples, although I am not sure how to test the correctness of the multiplication with large matrices, or how to compare the time taken by the native `matmul` vs this.
In terms of correctness I would say that the simplest approach would be to write down a couple of analytical cases for which the exact solution is known, plus a couple of cases with random matrices for which the optimal ordering is different from the trivial one, and compare stdlib_matmul vs trivial sequential calls to the intrinsic matmul. For integer arrays the error should be zero; for real/complex arrays the error tolerance should be somewhere around 100*epsilon(0._wp). The test could be done using the Frobenius norm, error = mnorm(A - A_ref), where A would result from the current proposal and A_ref from the intrinsic matmul using the naive sequence.
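For illustration, such a check might look like this (a sketch only: the host module, shapes and tolerance scaling are assumptions, and the intrinsic norm2 is used here for the Frobenius norm instead of mnorm):

    ! Illustrative correctness check: compare the chained product against the naive
    ! left-to-right intrinsic matmul sequence via the Frobenius norm of the residual.
    program check_chain
        use stdlib_kinds, only: dp
        use stdlib_intrinsics, only: stdlib_matmul   ! assumed host module
        implicit none
        real(dp) :: b(50, 4), c(4, 60), d(60, 5)
        real(dp), allocatable :: a(:,:), a_ref(:,:)
        real(dp) :: err, tol

        call random_number(b); call random_number(c); call random_number(d)
        a     = stdlib_matmul(b, c, d)
        a_ref = matmul(matmul(b, c), d)             ! naive reference ordering
        err   = norm2(a - a_ref)                    ! Frobenius norm of the residual
        tol   = 100 * epsilon(0._dp) * norm2(a_ref) ! scaling by the reference norm is a choice made here
        print *, "residual:", err, " tolerance:", tol
        if (err > tol) error stop "chained product error too large"
    end program check_chain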
In terms of performance, it might not be necessary to add that to the PR, but a separate small program showcasing two scenarios might be useful (a rough timing sketch follows the list):
- many repeated calls to stdlib_matmul vs the naive matmul for small matrices
- a few repeated calls to stdlib_matmul vs the naive matmul for medium/large matrices
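Such a timing harness might look like this (a sketch only; the host module, sizes and repetition count are assumptions, and varying n/nrep covers both scenarios):

    ! Illustrative benchmark: repeated chained products vs the naive intrinsic sequence.
    program bench_chain
        use stdlib_kinds, only: dp
        use stdlib_intrinsics, only: stdlib_matmul   ! assumed host module
        implicit none
        integer, parameter :: n = 64, nrep = 200     ! small-and-many vs large-and-few
        real(dp) :: b(n, n), c(n, n), d(n, n)
        real(dp), allocatable :: a(:,:)
        real(dp) :: t0, t1
        integer :: i

        call random_number(b); call random_number(c); call random_number(d)

        call cpu_time(t0)
        do i = 1, nrep
            a = stdlib_matmul(b, c, d)
        end do
        call cpu_time(t1)
        print *, "stdlib_matmul :", t1 - t0, "s"

        call cpu_time(t0)
        do i = 1, nrep
            a = matmul(matmul(b, c), d)
        end do
        call cpu_time(t1)
        print *, "naive matmul  :", t1 - t0, "s"
    end program bench_chain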
@loiseaujc maybe you have some use cases worth testing?
Oh boy! I had not seen this PR. Super happy with it. I'll take some time this afternoon to look at the code and give you some feedback. As for the test cases, anything involving multiplications with an already-factored low-rank matrix would do. As far as I'm concerned, these are the typical cases I encounter where such an interface would prove very useful syntactically. I'll see as I get to the lab whether I have a sufficiently simple yet somewhat realistic set of data we could use for testing the performance.
I have replaced all the `matmul` calls by calls to `gemm`... The code has become a bit complicated and verbose... If I have missed anything, or if there is a shorter way of doing the same thing, please do let me know.
Also, the interface now handles only real and complex values.
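For reference, the basic correspondence between the intrinsic and the BLAS routine, as a hedged sketch (plain dgemm is shown here; the wrapper name and the assumption that the dummy arrays are passed contiguously are mine, not the PR's code):

    ! Illustrative only: c = matmul(a, b) expressed as a BLAS dgemm call.
    ! a is (m x k), b is (k x n), c is (m x n); alpha = 1, beta = 0.
    subroutine mm_via_gemm(a, b, c)
        use stdlib_kinds, only: dp
        implicit none
        real(dp), intent(in)  :: a(:,:), b(:,:)
        real(dp), intent(out) :: c(:,:)
        integer :: m, n, k
        external :: dgemm

        m = size(a, 1); k = size(a, 2); n = size(b, 2)
        ! Leading dimensions equal the row counts of the contiguous arrays passed to BLAS.
        call dgemm('N', 'N', m, n, k, 1.0_dp, a, m, b, k, 0.0_dp, c, m)
    end subroutine mm_via_gemm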
@jalvesz @perazz
I thought the CI failing on Windows was just pure chance, but I don't think so now... it most probably has something to do with the random numbers, though I am not sure what exactly.
@perazz I think the following error:
1827 | err0 = linalg_state_type(this, LINALG_VALUE_ERROR, 'matrices m4=',shape(m4),', m5=',shape(m5),' have incompatible sizes')
| 1
Error: Unterminated character constant beginning at (1)
is related to the default line-length limit of 132 characters. This line has 139 characters. Maybe if you split it with a line continuation it should pass.
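For example, the offending line could be split with a & continuation, something like:

    err0 = linalg_state_type(this, LINALG_VALUE_ERROR, 'matrices m4=', shape(m4), &
        ', m5=', shape(m5), ' have incompatible sizes')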
Yes @perazz, deferring that to another PR seems a better option to me.
So, some final touches to this PR (I think) would be for me to add
- some tests
- an example
- specs
@wassup05 please also note that the example program seems to be failing on Windows in the CI.
@jalvesz @perazz I have added the specs, example and some tests. I did not add any 5-argument test cases because, as you can see, the error tolerance has to be increased quite a lot: the error grows as the number of arguments grows. For 3 args it needed epsilon*300 and for 4 args epsilon*1500, and even with that some tests have failed in the CI.
@wassup05 could you try the following: instead of testing each individual value, test the L2 norm of the residual on the whole matrix operation:
instead of

    call check(error, all(abs(r-r1) <= epsilon(0._${k}$) * 1500), "real, ${k}$, 4 args: error too large")

do

    call check(error, norm2(r-r1) <= epsilon(0._${k}$) * factor, "real, ${k}$, 4 args: error too large")
Per-value checks tend to be more sensitive; whole-array norms should be a bit more robust. This does not remove the underlying problem, which is the propagation of the error at each multiplication, but that is just the limit of the finite-precision arithmetic at hand.
I tried that @jalvesz; the tolerance went even higher than with the individual checks. For 3 args it took around *800 and for 4 args it is at *2500 and still climbing... Maybe I should reduce the size of the matrices.
I have reduced the size and updated the tolerance accordingly; I think it should be fine now.
In the absence of further comments and activity, this can be merged soon imho.
Agreed @perazz, I also think this is ready!
Just added some comments to make the example more understandable.