NeuralAttentionlib.jl
Redesign Mask
The goal of the redesign is to support:
- better type hierarchy for dispatch. This helps the function that can only accept sequence masks to work with combined masks. https://github.com/chengchingwen/Transformers.jl/blob/91a3fe00bad5bb9ebff35b61356c3d52ad3efba3/src/loss.jl#L29-L31
- make it easier to use the mask in GPU kernel (for #23)
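A minimal sketch of what such a type hierarchy could look like. All names here (`AbstractMask`, `AbstractSequenceMask`, `LengthMask`, `CombinedMask`, `apply_seq`) are hypothetical illustrations, not the library's actual API: the point is that if a combined mask subtypes the sequence-mask abstract type, functions restricted to sequence masks by dispatch (like the loss-masking code linked above) keep working on it.

```julia
abstract type AbstractMask end
abstract type AbstractSequenceMask <: AbstractMask end

# An illustrative per-sequence length mask.
struct LengthMask <: AbstractSequenceMask
    lens::Vector{Int}
end

# A combination of sequence masks is itself a sequence mask,
# so it flows through sequence-only dispatch unchanged.
struct CombinedMask{M<:Tuple} <: AbstractSequenceMask
    masks::M
end

# A function that, by dispatch, only accepts sequence masks.
apply_seq(m::AbstractSequenceMask) = true

m1 = LengthMask([3, 5])
m2 = LengthMask([4, 4])
c  = CombinedMask((m1, m2))

apply_seq(c)  # dispatch succeeds: CombinedMask <: AbstractSequenceMask
```

Keeping the mask structs as plain `isbits`-friendly containers (no closures, no abstract fields in hot paths) is also what makes them straightforward to pass into GPU kernels.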
Codecov Report
Attention: Patch coverage is 77.10843%, with 19 lines in your changes missing coverage. Please review.
Project coverage is 74.52%. Comparing base (40922f8) to head (3af9063). Report is 2 commits behind head on master.
:exclamation: Current head 3af9063 differs from pull request most recent head 297cc23
Please upload reports for the commit 297cc23 to get more accurate results.
| Files | Patch % | Lines |
|---|---|---|
| src/mask/indexer.jl | 71.79% | 11 Missing :warning: |
| src/mask/mask.jl | 66.66% | 7 Missing :warning: |
| src/mask/broadcast.jl | 50.00% | 1 Missing :warning: |
Additional details and impacted files
```diff
@@            Coverage Diff             @@
##           master      #24      +/-  ##
==========================================
+ Coverage   74.51%   74.52%   +0.01%
==========================================
  Files          30       30
  Lines        2052     2057       +5
==========================================
+ Hits         1529     1533       +4
- Misses        523      524       +1
```
:umbrella: View full report in Codecov by Sentry.