Size layer
This is the Size layer, equivalent to the Size operator in ONNX.
Codecov Report
:x: Patch coverage is 90.00000% with 1 line in your changes missing coverage. Please review.
:white_check_mark: Project coverage is 89.47%. Comparing base (fdf2c48) to head (37bcae6).
:warning: Report is 513 commits behind head on master.
| Files with missing lines | Patch % | Lines |
|---|---|---|
| src/layer/size.cpp | 90.00% | 1 Missing :warning: |
:exclamation: There is a different number of reports uploaded between BASE (fdf2c48) and HEAD (37bcae6). Click for more details.
HEAD has 28 fewer uploads than BASE.

| Flag | BASE (fdf2c48) | HEAD (37bcae6) |
|---|---|---|
| | 30 | 2 |
Additional details and impacted files
@@ Coverage Diff @@
## master #5021 +/- ##
===========================================
- Coverage 94.72% 89.47% -5.26%
===========================================
Files 765 307 -458
Lines 229654 89796 -139858
===========================================
- Hits 217551 80344 -137207
+ Misses 12103 9452 -2651
tools/onnx/onnx2ncnn.cpp.orig accidentally committed?
Done. Should I add it to the list of ops?
yeah, add to operators doc
done as well.
thanks. I forgot to add the additional dimension to the test when printing.
How would I assign it as an integer? The types are all ints, and the multiplications return an integer. ncnn::Mat has a templated operator T* that can accept ints as well, doesn't it? I also assume we should use integers instead of floats here, because Size always returns an integer scalar. Your comment also brought to my attention that we need to consider using an integer type for the output.
@nihui what about doing a memcpy into the output tensor rather than assigning to its first element? Does that make it better?
```cpp
// access by channel / depth
int* p = mat.channel(i);
int* p = mat.depth(i);
// access by row
int* p = mat.row<int>(i);
// access by element
int* p = mat;
// set the value at (1,2,3,4) to 567
mat.channel(1).depth(2).row<int>(3)[4] = 567;
```