
Add an Option to Broadcast Along the 3rd Dimension

Open · RoyiAvital opened this issue 2 years ago · 7 comments

In many cases one wants to apply a kernel to a multi-channel matrix (e.g. an RGB image). Is there a way to do this with a single kernel that works on each channel?

RoyiAvital · Nov 29 '22

Are you looking for mapslices?

using StaticKernels

k = @kernel w -> w[1, 0] - w[0, 1]
img = rand(3, 10, 10)
# apply the 2D kernel to each 10×10 slice along the first (channel) dimension
mapslices(x -> map(k, x), img; dims = (2, 3))

It might not be optimal performance-wise, however.

stev47 · Dec 01 '22

Well, currently what I do is loop over the 3rd dimension into a pre-allocated array. I'm after a non-allocating solution besides the explicit loop.

RoyiAvital · Dec 01 '22

You could just write a three-dimensional kernel (expanding from my previous example):

k = @kernel w -> w[0, 1, 0] - w[0, 0, 1]

There is currently no way to "lift" a kernel to a higher dimension. I'm open to contributions, however.
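
For example (a minimal sketch, reusing the channels-first layout from the mapslices example above), such a kernel applies the same 2D stencil to every channel in a single map call:

using StaticKernels

# size 1 along the first (channel) dimension, so each channel is
# processed independently by the same 2D stencil
k = @kernel w -> w[0, 1, 0] - w[0, 0, 1]

img = rand(3, 10, 10)
out = map(k, img)   # size (3, 9, 9): the window shrinks dims 2 and 3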

stev47 · Dec 02 '22

I mean I'm doing something like:

for ii in 1:3
  @views map!(mK, mO[:, :, ii], extend(mI[:, :, ii], StaticKernels.ExtensionReplicate()));
end

It still allocates a few KB, but it works well. By the way, is there any way to completely avoid allocations?

RoyiAvital · Dec 02 '22

You can rewrite your kernel mK according to my previous example. Also, I cannot reproduce your observation: the following does not allocate for me on Julia 1.8.3:

using BenchmarkTools
using StaticKernels

# apply the 2D kernel mK to each channel of mI, writing the result into mO
f!(mO, mK, mI) = begin
    for ii in 1:3
        @views map!(mK, mO[:, :, ii], extend(mI[:, :, ii], StaticKernels.ExtensionReplicate()))
    end
end

mK = @kernel w -> w[1, 1] - w[0, 0]
mI = rand(10, 10, 3)
mO = similar(mI)

@btime f!($mO, $mK, $mI)

stev47 · Dec 06 '22

Indeed, it seems to make a difference that I didn't wrap it in a function but only in a local begin block.

I don't understand your previous example. If you mean k = @kernel w -> w[0, 1, 0] - w[0, 0, 1], then, if I understand it correctly, it is not what I'm after. I want mK = @kernel w -> w[1, 1] - w[0, 0] applied to each matrix / channel on its own.

From my point of view, the kernel defined by mK = @kernel w -> w[1, 1] - w[0, 0] is a 2D operator, so by broadcasting rules, applying it to a 3D tensor should broadcast the 2D operator over each channel of the image, which is represented by 3 channels (3 matrices stacked along the 3rd dimension).

RoyiAvital · Dec 07 '22

I understand what you want to do, but you did not apply my example to your setting: if the channel dimension is the last one, the kernel reads k = @kernel w -> w[1, 1, 0] - w[0, 0, 0], i.e. a kernel of size 1 along the third dimension, which therefore acts like an ordinary broadcast/map along that dimension.
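
Putting the pieces together (a minimal sketch based on the snippets in this thread, assuming a 10×10×3 channels-last image), the per-channel loop collapses into a single map!:

using StaticKernels

# extent 1 along the third (channel) dimension
k = @kernel w -> w[1, 1, 0] - w[0, 0, 0]

mI = rand(10, 10, 3)
mO = similar(mI)

# one call over the whole array; the replicate extension keeps the
# output the same size as the input, as in the 2D loop above
map!(k, mO, extend(mI, StaticKernels.ExtensionReplicate()))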

stev47 · Dec 08 '22