Add multm_prev_ layer and enhance gemm() function for PLANE_WISE operations
This pull request introduces a new layer, multm_prev_, and enhances the gemm() function to support PLANE_WISE operations. These changes aim to improve the flexibility and performance of matrix multiplications in deep learning models, particularly for attention mechanisms.
New layer: multm_prev_
The multm_prev_ layer performs a matrix multiplication, plane by plane, between the current layer's input and the previous layer's output. This new layer is particularly useful for implementing attention mechanisms and other operations that require matrix interactions between tensors; a usage sketch follows the feature list below.
Key features of multm_prev_:
- Supports PLANE_WISE matrix multiplication
- Preserves sample and channel dimensions
- Efficiently handles 4D tensors
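For orientation, here is a minimal sketch of how the layer could be wired into a network. It assumes multm_prev follows the same tag-based convention as dlib's existing mult_prev/add_prev layers; the numbered alias multm_prev1 below is that assumption, not something this description confirms.

```cpp
#include <dlib/dnn.h>
using namespace dlib;

// Two branches are built from the same input: tag1 marks one branch
// (say, the "values"), while the other branch (say, the "scores") is what
// multm_prev1 receives as its direct input. For each sample and channel,
// the two corresponding 2D planes are matrix-multiplied, so their inner
// dimensions must agree (square planes in this toy example).
using scores_times_values =
    multm_prev1<            // plane-wise matmul with the tag1 branch
    con<1,1,1,1,1,          // "scores" branch: 1x1 conv on the raw input
    skip2<                  // jump back to the raw input (tag2)
    tag1<con<1,1,1,1,1,     // "values" branch: another 1x1 conv
    tag2<input<matrix<float>>
    >>>>>>;
```

In a real attention block the score planes would come from a query/key product rather than a single convolution, but the wiring pattern is the same.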
Enhancement to gemm() function:
The gemm() function has been updated to support two modes of operation: CHANNEL_WISE (default) and PLANE_WISE. This modification allows for more efficient and flexible matrix multiplications, especially when dealing with 4D tensors.
Key changes to gemm():
- Added a new parameter g_mode to specify the operation mode (0 for CHANNEL_WISE, 1 for PLANE_WISE)
- Implemented PLANE_WISE mode, which performs matrix multiplication for each corresponding 2D plane across all samples and channels
- Updated documentation to reflect the new functionality and requirements for both modes
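To make the PLANE_WISE semantics concrete, here is a standalone reference loop that computes what the mode is described to do: one independent 2D matrix product per (sample, channel) plane of the 4D tensors. It only illustrates the semantics and is not dlib's actual tt::gemm signature.

```cpp
#include <cstddef>
#include <vector>

// lhs  has shape num_samples x k x nr x K   (row-major, flattened)
// rhs  has shape num_samples x k x K  x nc
// dest gets shape num_samples x k x nr x nc
void plane_wise_matmul(
    const std::vector<float>& lhs,
    const std::vector<float>& rhs,
    std::vector<float>& dest,
    std::size_t num_samples, std::size_t k,
    std::size_t nr, std::size_t K, std::size_t nc)
{
    dest.assign(num_samples*k*nr*nc, 0.0f);
    for (std::size_t s = 0; s < num_samples; ++s)
    for (std::size_t c = 0; c < k; ++c)
    {
        // Base offsets of the current (sample, channel) plane.
        const float* A = &lhs [(s*k + c)*nr*K];
        const float* B = &rhs [(s*k + c)*K*nc];
        float*       C = &dest[(s*k + c)*nr*nc];
        for (std::size_t i = 0; i < nr; ++i)
            for (std::size_t j = 0; j < nc; ++j)
            {
                float sum = 0;
                for (std::size_t p = 0; p < K; ++p)
                    sum += A[i*K + p] * B[p*nc + j];
                C[i*nc + j] = sum;
            }
    }
}
```

CHANNEL_WISE remains the default (g_mode = 0), so existing callers of gemm() presumably keep their current behavior.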
These changes provide greater flexibility in implementing complex neural network architectures, particularly those involving attention mechanisms or other operations requiring element-wise interactions between tensors.
A new test function, test_multm_prev(), has been added to verify the correct functionality of the multm_prev layer and the enhanced gemm() function.
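New layers in dlib/test/dnn.cpp are usually validated with the generic test_layer() numerical gradient check; something along these lines is a plausible shape for test_multm_prev(), though the PR's actual test may differ.

```cpp
// Guess at the structure of the test, following the pattern used for
// mult_prev_ and similar tag-referencing layers in dlib/test/dnn.cpp.
void test_multm_prev()
{
    print_spinner();
    multm_prev_<tag1> l;        // assumes the layer is templated on a tag
    auto res = test_layer(l);   // dlib's numerical gradient check
    DLIB_TEST_MSG(res, res);
}
```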
Nice, I was just wondering if matmul_prev wouldn't be a better name.
We can change the name without any problem. I am already dealing with compilation issues likely due to static uses of mult_prev_ (in the template part), and we will decide on the name to keep afterward.
On the other hand, I was thinking of using the same naming convention for the transformation to be applied to softmax, and thus have a special layer named softmaxm. So should we go with mat_softmax, or perhaps better, msoftmax?
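Whatever the name ends up being, what such a layer would compute on each 2D plane is a row-wise softmax of the score matrix. A small standalone sketch with dlib::matrix (not the layer API, just the math):

```cpp
#include <cmath>
#include <dlib/matrix.h>

// Softmax applied independently to each row of a 2D score matrix,
// as typically needed to turn attention scores into weights.
dlib::matrix<float> rowwise_softmax(const dlib::matrix<float>& scores)
{
    dlib::matrix<float> out(scores.nr(), scores.nc());
    for (long r = 0; r < scores.nr(); ++r)
    {
        const float m = dlib::max(dlib::rowm(scores, r));  // for numerical stability
        float denom = 0;
        for (long c = 0; c < scores.nc(); ++c)
            denom += std::exp(scores(r, c) - m);
        for (long c = 0; c < scores.nc(); ++c)
            out(r, c) = std::exp(scores(r, c) - m) / denom;
    }
    return out;
}
```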
Would it be too difficult to have just an attention_ layer? I know that would mean doing the backpropagation by hand inside that layer, just like loss_barlowtwins does (but that one is just a bn_con).
It would be simpler to use for some people, but we would lose the flexibility to build attention in a potentially specific way (even though it currently follows fairly standard and structured steps). For instance, we can decide whether or not to mask, or apply an additional filter that removes pad tokens before applying softmax, and so on. I was thinking more of providing, as you did for ResNet, an external definition file that gives one particular definition of the network...
Yes, we would lose flexibility, or maybe that layer could be initialized with a struct of options that control the behavior/features of the attention layer. But yes, it would still be less flexible.
It would be harder to implement something like flash attention without an explicit attention_ layer
Indeed, I can add a high-level declaration in the layer definition file, similar to what was done for the inception layer, like:
template <int embedding_dim, int nb_heads, typename SUBNET>
using attention_ = (...)
Sorry, I'm just catching up on these threads. Seems like this PR is still being worked on? There are conflicts with master in any case. Let me know when I should look it over :)
@davis, no you can do the merging. I think the conflicts with master come from the fact that I created several branches from my own Dlib fork to work on several layers in parallel. The new layers in this PR are finished and can be integrated, please. Technically, I still have a new layer to release but I'm going to wait until all the changes have been merged into the master branch to avoid any further conflicts... let me know if that's OK with you.
> @davis, no you can do the merging.
You please merge them :)
I'll review them once there aren't merge conflicts.
> Technically, I still have a new layer to release but I'm going to wait until all the changes have been merged into the master branch to avoid any further conflicts... let me know if that's OK with you.
Yeah that's fine :D
@Davis, could you please review this PR?
@Davis, I think I'm on the right track now (I had a lot of difficulty finding an enum shared by all the classes and accessible from both the CPU and CUDA code, for example) and everything is working on my side now (including the dnn.cpp test). As indicated in the closed PR, to simplify the review, I've merged all the modifications (i.e. for the two new layers) into this single PR. However, when I run dtest, I get this error, which seems to have nothing to do with my new layers (it might have something to do with convolution). Does this sound familiar?

Running test_dnn
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!! TEST FAILED: test_dnn !!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Failure message from test:
Error occurred at line 1518.
Error occurred in file C:\Users\Shadow\source\repos\multm-prev-layer\dlib\test\dnn.cpp.
Failing expression was max(abs(mat(filter_gradient1)-mat(filter_gradient2))) < 1e-3. 0.00897217

Testing Finished
Total number of individual testing statements executed: 955
Number of failed tests: 1
Number of passed tests: 0
@arrufat, would you be able to pull this development branch on your side and check whether you see what I noticed, not at compile time but when running the test program? There is indeed an error, at least on my validation platform, in a Conv1D operation, which is obviously unrelated to the new DNN layers I am adding to formalize the attention mechanism. Just to make sure you observe the same behavior. Thanks in advance.
I've run the test on my GPU machine (dtest --runall) and they all pass.
> I've run the test on my GPU machine (dtest --runall) and they all pass.
OK, so maybe it's due to the GPU I'm using (an A4500). Could you review again and merge everything now? There are still other elements to work on ;-)
Nice this is great. Fix up the minor stuff I commented on and I'll merge it right away :D
Done.