libsrcnn
Less memory usage via merged Layer I+II convolution instead of sequential Layer I and Layer II convolutions.
Some related issues from years ago:
- https://github.com/rageworx/SRCNN_OpenCV_GCC/issues/2
- https://github.com/rageworx/SRCNN_OpenCV_GCC/issues/7
Issuer @zvezdochiot introduced his stb-based code, which uses less memory (convolving Layers I and II at once) but has bad performance in the OpenMP model (about double the time). Let's look for a way to use less memory while keeping performance.
This code is about 4 years old. I need to understand it again myself, which may take more time.
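For reference, here is a minimal sketch (not libsrcnn's actual code) of the merged Layer I+II idea, assuming the original SRCNN 9-1-5 setup: Layer I = 9x9 conv, 1 -> 64 channels; Layer II = 1x1 conv, 64 -> 32 channels; ReLU after each. Because Layer II is 1x1, each of its output pixels needs only the 64 Layer-I responses at the same position, so those responses can live in a tiny stack buffer instead of a full 64-channel feature map. All names (conv12_fused, w1, w2, ...) are illustrative:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

constexpr int N1 = 64, N2 = 32, K1 = 9; // SRCNN 9-1-5 sizes

// img: bicubic-upscaled luma plane, row-major, w*h floats.
// w1/b1: Layer I weights and biases; w2/b2: Layer II (1x1) weights and biases.
std::vector<float> conv12_fused(const std::vector<float>& img, int w, int h,
                                const float w1[N1][K1][K1], const float b1[N1],
                                const float w2[N2][N1], const float b2[N2])
{
    std::vector<float> out((size_t)w * h * N2);
    const int r = K1 / 2;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float f1[N1]; // Layer-I responses at (x, y) only -- never a full map
            for (int c1 = 0; c1 < N1; ++c1) {
                float acc = b1[c1];
                for (int ky = -r; ky <= r; ++ky)
                    for (int kx = -r; kx <= r; ++kx) {
                        int sy = std::clamp(y + ky, 0, h - 1); // replicate edges
                        int sx = std::clamp(x + kx, 0, w - 1);
                        acc += w1[c1][ky + r][kx + r] * img[(size_t)sy * w + sx];
                    }
                f1[c1] = std::max(acc, 0.0f); // ReLU
            }
            // Apply the 1x1 Layer II immediately; f1 is then discarded.
            for (int c2 = 0; c2 < N2; ++c2) {
                float acc = b2[c2];
                for (int c1 = 0; c1 < N1; ++c1) acc += w2[c2][c1] * f1[c1];
                out[((size_t)y * w + x) * N2 + c2] = std::max(acc, 0.0f);
            }
        }
    return out; // N2-channel map, ready for Layer III (5x5, 32 -> 1)
}
```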
Hi @rageworx .
See the block algorithm. It allows you to process images of any size practically losslessly. But due to the block overlaps, performance is even lower. The size of the overlaps was chosen on the basis of dssim.
See also: https://github.com/shuwang127/SRCNN_Cpp/pull/4
Just simply this header, right?
https://github.com/ImageProcessing-ElectronicPublications/stb-image-srcnn/blob/main/src/srcnn.h
Interesting, I will run a performance check on a low-power-consumption system like an aarch64-based Debian Linux system.
@zvezdochiot says:
See also: shuwang127/SRCNN_Cpp#4
And the shuwang127 repo seems to be abandoned. It looks better to forget about asking for a pull request ...
@rageworx says:
Just simply this header, right?
Stand! Afraid! And do you want to shove in a defective bicubic interpolant?
@rageworx says:
And the shuwang127 repo seems to be abandoned.
This is a question of combining Layer I and Layer II.
Stand! Afraid! And ...
Is this some kind of Russian slogan? Actually, I cannot get your point. Anyway, your suggestion may help improve my old code.
Regards, Raph.
@zvezdochiot says:
See the block algorithm. It allows you to process images of any size practically losslessly. But due to the block overlaps, performance is even lower. The size of the overlaps was chosen on the basis of dssim.
See also: https://github.com/shuwang127/SRCNN_Cpp/pull/4
I have never heard about your announced algorithm. Block? dssim? But I will try!
@rageworx says:
Actually, I cannot get your point.
bicubic.h is verified. See stb-image-resize and the demo.
@rageworx says:
I have never heard about your announced algorithm. Block?
Simple division of the image into blocks with an overlap, processing each block as a small image. Only one block is processed at a time, which means that only one block needs to be allocated in memory. stb-image-srcnn says:
For whole-image processing, memory for 175 original images is required. With block processing, this is reduced to 170 × the block size + 5 × the size of the original image.
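A minimal sketch of this overlapped-block scheme, under the assumption that processing happens at the already-upscaled size (so a block and its result have the same dimensions). process_block is a hypothetical stand-in for the full SRCNN pass on one small image, and the block/overlap sizes are illustrative:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for the full SRCNN pipeline on one small image;
// an identity copy here so the sketch compiles and runs.
static void process_block(const std::vector<float>& in, int bw, int bh,
                          std::vector<float>& res)
{
    (void)bw; (void)bh;
    res = in;
}

// Cut the image into block x block tiles, each padded by `overlap` pixels on
// every side, process one tile at a time, and copy back only the interior so
// block seams fall inside the discarded margin. Only one padded tile is
// allocated at any moment.
void srcnn_blocks(const std::vector<float>& img, int w, int h,
                  std::vector<float>& out, int block = 128, int overlap = 8)
{
    out.assign((size_t)w * h, 0.0f);
    for (int by = 0; by < h; by += block)
        for (int bx = 0; bx < w; bx += block) {
            int x0 = std::max(bx - overlap, 0), y0 = std::max(by - overlap, 0);
            int x1 = std::min(bx + block + overlap, w);
            int y1 = std::min(by + block + overlap, h);
            int bw = x1 - x0, bh = y1 - y0;

            std::vector<float> tile((size_t)bw * bh);
            for (int y = 0; y < bh; ++y)          // copy padded tile out
                for (int x = 0; x < bw; ++x)
                    tile[(size_t)y * bw + x] = img[(size_t)(y0 + y) * w + (x0 + x)];

            std::vector<float> res;
            process_block(tile, bw, bh, res);     // one block in memory at a time

            int ix1 = std::min(bx + block, w), iy1 = std::min(by + block, h);
            for (int y = by; y < iy1; ++y)        // interior only, margins dropped
                for (int x = bx; x < ix1; ++x)
                    out[(size_t)y * w + x] = res[(size_t)(y - y0) * bw + (x - x0)];
        }
}
```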
@rageworx says:
dssim?
Metrics: delta SSIM == 1/SSIM - 1. Maybe use stb-image-nhwmetrics.
```
dssim -o butterfly.x2.dssim.2-0.png butterfly.x2.0.png butterfly.x2.2.png
0.00003022 butterfly.x2.2.png

stbnhwmetrics -q butterfly.x2.0.png butterfly.x2.2.png butterfly.x2.nhw-r.2-0.png
0.014613 butterfly.x2.2.png
```

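As a cross-check of those numbers, dssim is just the delta-SSIM conversion from the formula above; a tiny sketch (the SSIM value here is back-computed from the printed result, not measured):

```cpp
#include <cstdio>

// dssim == 1/SSIM - 1: identical images (SSIM = 1) give dssim = 0.
double dssim_from_ssim(double ssim) { return 1.0 / ssim - 1.0; }

int main()
{
    // The butterfly result above, 0.00003022, corresponds to SSIM ~ 0.99996978.
    std::printf("%.8f\n", dssim_from_ssim(0.99996978)); // prints 0.00003022
    return 0;
}
```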
Merged Conv I+II. And dssim looks like it checks frequency differences, as with a Fast Fourier Transform (FFT), in the results above; let it be checked.
@rageworx says:
let it be checked.
I have already checked everything with the metrics. There are differences between the monolithic and the block algorithm only at the "junctions" of blocks. Now it is necessary to check not the metrics but the memory allocation. Combining Layers I and II greatly reduced memory consumption, but the monolithic algorithm still eats a decent amount anyway.
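To make the memory comparison concrete, plugging illustrative sizes into the figures quoted earlier (the image and block sizes below are assumptions, not measurements):

```
monolithic: 175 * N           N = pixels in the original image
block:      170 * B + 5 * N   B = pixels in one padded block

example: N = 1000 * 1000, B = 64 * 64
monolithic: 175,000,000 pixel buffers
block:      170 * 4,096 + 5 * 1,000,000 ~ 5,700,000 pixel buffers (about 30x less)
```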