DSP.jl
                        Make conv return floats when given integer arrays
Ref #410, suggestion from the comments.
This removes an unnecessary round. Dispatch for this case is now handled the same as for arrays of generic Number type, so the return type of conv([1, 2, 3], [4, 5, 6]) is now Float64 instead of Int, correctly reflecting the internal precision. Users can get the old behavior by calling round themselves, i.e. round.(Int, conv(a, b)) or perhaps even round.(promote_type(eltype(a), eltype(b)), conv(a, b)) for other integer types.
If DSP learns a method of integer convolution in the future, the return type for that method should be an integer.
I'm hoping for comments on this PR, I think this is worth discussing before merging.
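Concretely, the behavior change described above can be sketched with a plain-Julia stand-in for conv (naive_conv below is hypothetical, for illustration only; the real conv goes through an FFT path):

```julia
# Hypothetical stand-in for DSP.conv, illustrating the proposed promotion:
# integer inputs are promoted to the float type the FFT path computes in.
function naive_conv(a::AbstractVector, b::AbstractVector)
    T = float(promote_type(eltype(a), eltype(b)))  # e.g. Int, Int -> Float64
    c = zeros(T, length(a) + length(b) - 1)
    for i in eachindex(a), j in eachindex(b)
        c[i + j - 1] += a[i] * b[j]
    end
    return c
end

c = naive_conv([1, 2, 3], [4, 5, 6])  # Vector{Float64}: [4.0, 13.0, 28.0, 27.0, 18.0]
round.(Int, c)                        # old behavior: [4, 13, 28, 27, 18]
```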
Any comments?
I know that convolution is used in non-DSP contexts for other purposes (like multiplying huge numbers) and in those contexts true integer convolution makes sense, and should probably be implemented without doing FFTs.
That said, this is a DSP package and if I remember correctly, supporting the integer case has added complexity in the past. I think it's reasonable to say that convolution returns floats.
Well, I'm in favor of making this change for now, until we have real integer convolution. Could you please fix the tests?
I'd like to have a true integer convolution, but while we don't have it, being honest and returning floats seems better than rounding. So generally in favor, unless someone surprises me by submitting a proper integer-capable convolution.
I have one that is basically complete, though I've stalled out on how to select which algorithm to use. I've added a bunch of direct convolution algorithms that perform differently for different inputs, and have also added a bunch of functions that try to estimate the run time based on some simple fits that I did to a battery of benchmarks.
I would love some help brain-storming how to structure the code that picks the best algorithm for a given input size and type. The code is on this branch: https://github.com/galenlynch/DSP.jl/blob/merge_conv_filt/src/dspbase.jl
I can make a draft PR soon and discuss where I've stalled out.
EDIT: In the meantime, I've added the following to that branch:
- multithreaded convolution for overlap-save
- direct convolution (+multithreading) for ND convolution
- direct convolution (+multithreading) for ND separable convolution
- direct convolution (+multithreading) for vector convolution
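For what it's worth, the algorithm-selection logic discussed above could be structured roughly like this. The cost models below are made-up placeholders, not the actual benchmark fits from the branch, and all names are hypothetical:

```julia
# Placeholder cost models (hypothetical; real ones would come from benchmark fits):
est_time_direct(nu, nv) = 1.0e-9 * nu * nv                    # O(nu*nv) direct conv
est_time_os(nu, nv)     = 5.0e-8 * (nu + nv) * log2(nu + nv)  # overlap-save / FFT-based

# Pick the algorithm whose fitted model predicts the lowest run time.
function pick_algorithm(nu::Integer, nv::Integer)
    candidates = [:direct => est_time_direct(nu, nv),
                  :overlap_save => est_time_os(nu, nv)]
    return first(argmin(last, candidates))
end

pick_algorithm(8, 8)        # small inputs favor :direct
pick_algorithm(10^6, 10^4)  # large inputs favor :overlap_save
```

Structuring each candidate as a name => estimated-cost pair keeps the selection generic: adding a new algorithm only means adding a cost model and one entry to the list.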
I agree with @martinholters. Integer convolutions have their niche, but as long as there's no proper integer convolution, it's better to be honest and return a float.
Here is a simple N-D time-domain convolution I've cooked up:
function tdconv(u::AbstractArray{<:Number, N}, v::AbstractArray{<:Number, N}) where {N}
    return [
        sum(u[m] * v[n-m]
            # valid ranges (u and v assumed zero outside):
            #    firstindex(u) <= m <= lastindex(u)
            # and
            #    firstindex(v) <= n-m <= lastindex(v)
            # or equivalently
            #    n-lastindex(v) <= m <= n-firstindex(v)
            for m in CartesianIndices(ntuple(d ->
                max(firstindex(u,d),n[d]-lastindex(v,d)):min(lastindex(u,d), n[d]-firstindex(v,d)),
                N
            ))
        )
        for n in CartesianIndices(ntuple(d -> 2:size(u,d)+size(v,d), N))
    ]
end
I hope to find some time to hook it up so that conv dispatches to it for integer input, and probably also for small float inputs, but I need to do some benchmarking to figure out what a reasonable cut-off would be.
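That dispatch plan might look roughly like the sketch below. The predicate name and cutoff are hypothetical; the real cutoff awaits benchmarking:

```julia
# Hypothetical routing predicate: always use the time-domain kernel for
# integer inputs (exact arithmetic), and for floats only when the inputs
# are small enough that direct convolution should beat the FFT path.
const DIRECT_CUTOFF = 64  # placeholder; to be determined by benchmarks

use_time_domain(u::AbstractArray{<:Integer}, v::AbstractArray{<:Integer}) = true
use_time_domain(u::AbstractArray, v::AbstractArray) =
    length(u) * length(v) <= DIRECT_CUTOFF^2

use_time_domain([1, 2, 3], [4, 5, 6])        # true: integer inputs
use_time_domain(rand(10), rand(10))          # true: small float inputs
use_time_domain(rand(10_000), rand(10_000))  # false: large float inputs
```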
Should be obsoleted by #545. Close?
Should be auto-closed when #545 is merged, but if you prefer, you can also close, and I think we're pretty sure we'll go with #545 once that is fully in shape.
#545 looks good! Thanks for working on this!