
Replaced LUT in LadderFilter with fast tanh to reduce aliasing


The LUT produces noticeable aliasing when drive is around 1.0. Replacing it with a fast tanh approximation offers greatly improved audio quality at comparable performance (only 5% slower in my benchmark).

@julianstorer: The LadderFilter uses a LookupTableTransform called saturationLUT that stores std::tanh from -5 to 5 in 128 values, with linear interpolation between the points. That is all well and good; unfortunately, the lookup table produces noticeable aliasing when drive is set to around 1-1.5. It is very noticeable when you feed an 80 Hz sine with amplitude 1 through a LadderFilter with cutoff = 22000, type = LP12, resonance = 0, drive = 1 (as close to neutral as one can get, considering the saturation).
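
Roughly, the repro looks like this (a sketch assuming a JUCE project and a 48 kHz sample rate; the helper name is made up, and the aliasing shows up when you look at the spectrum of the returned buffer):

#include <cmath>
#include <JuceHeader.h>

// Sketch: run one second of an 80 Hz, amplitude-1 sine through a near-neutral
// LadderFilter (LPF12, cutoff 22 kHz, resonance 0, drive 1) and return the result.
juce::AudioBuffer<float> makeAliasingTestBuffer()
{
    constexpr double sampleRate = 48000.0;
    constexpr int numSamples = 48000;

    juce::dsp::LadderFilter<float> filter;
    filter.prepare ({ sampleRate, (juce::uint32) numSamples, 1 });
    filter.setEnabled (true);
    filter.setMode (juce::dsp::LadderFilter<float>::Mode::LPF12);
    filter.setCutoffFrequencyHz (22000.0f);
    filter.setResonance (0.0f);
    filter.setDrive (1.0f);

    juce::AudioBuffer<float> buffer (1, numSamples);
    for (int i = 0; i < numSamples; ++i)
        buffer.setSample (0, i, std::sin (juce::MathConstants<float>::twoPi * 80.0f * (float) i / (float) sampleRate));

    juce::dsp::AudioBlock<float> block (buffer);
    juce::dsp::ProcessContextReplacing<float> context (block);
    filter.process (context);

    return buffer;
}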

The effect is reduced considerably when the size of the LUT is increased to 512.
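
For context, the table in question can be built like this (an assumed sketch matching the description above, not the actual LadderFilter source; it needs juce_dsp and <cmath>):

// std::tanh tabulated over [-5, 5]; bumping numPoints from 128 to 512
// shrinks the error of each linearly interpolated segment.
juce::dsp::LookupTableTransform<float> saturationLUT { [] (float x) { return std::tanh (x); },
                                                       -5.0f, 5.0f, 512 };

float saturated = saturationLUT.processSample (0.8f);   // linearly interpolated lookup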

The aliasing is also reduced if a fast tanh approximation is used instead of the LUT.
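
To illustrate the kind of replacement I mean, here is a generic Padé-style sketch (not vox_fasttanh2 and not JUCE's FastMathApproximations::tanh):

// Illustrative rational approximation: tanh(x) ~= x * (27 + x^2) / (27 + 9 * x^2).
// The rational form diverges for large |x|, so clamp the input; at |x| = 3 the
// expression evaluates to exactly +/-1, so the output stays within [-1, 1].
inline float fastTanhApprox (float x) noexcept
{
    x = juce::jlimit (-3.0f, 3.0f, x);
    const float x2 = x * x;
    return x * (27.0f + x2) / (27.0f + 9.0f * x2);
}

That smoothness (no corners between segments) is presumably what keeps the extra high-frequency content, and hence the aliasing, down compared to a coarse piecewise-linear table.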

Here is the benchmark I used:

#include <cstdlib>
#include <iostream>
#include <vector>
#include <JuceHeader.h>   // assumes a JUCE project; provides juce::dsp and juce::Time

using namespace juce;

struct Benchmark : private dsp::LadderFilter<float>
{
    Benchmark()
    {
        // I inherited from dsp::LadderFilter to get access to processSample
        const auto sampleRate = 48000;

        // 5 seconds of random input in [0, 1)
        std::vector<float> samples(sampleRate * 5);
        for(auto& f : samples) f = rand() / float(RAND_MAX);

        prepare({sampleRate, /*unused*/0, 1});
        setEnabled(true);
        setCutoffFrequencyHz(22000.0f);
        setMode(dsp::LadderFilter<float>::Mode::LPF12);
        setResonance(0.0f);
        setDrive(1.0f);

        // 100 timed passes over the same buffer
        for(int i=0; i!=100; ++i)
        {
            const int64 start = Time::getHighResolutionTicks();

            float acc = 0.0f;
            for(auto& f : samples)
            {
                updateSmoothers();
                acc += processSample(f, 0);
            }

            const auto duration = (Time::getHighResolutionTicks() - start) / double(Time::getHighResolutionTicksPerSecond());

            // printing acc keeps the compiler from optimising the loop away
            std::clog << "acc: " << acc << " duration: " << (duration*1000) << " ms" << std::endl;
        }
    }
} benchmark;

And before you ask: yes, I ran a Release build.

Here are the results (avg of 100 runs):

std::tanh:                      9.2915437 ms  (+39%)
vox_fasttanh2:                  7.0053885 ms  (+5%)
LookupTableTransform:           6.6654806 ms  (baseline)
FastMathApproximations::tanh:   6.6130014 ms  (-1%)

CPU: 4 GHz Intel Core i7

Here's some python code that compares the quality of the approximations: https://gist.github.com/beschulz/12884eb38ece23896e038c272b4c7a80

For larger input values, the Voxengo approximation does not blow up as quickly as the JUCE one does.
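
A quick C++ spot-check of the same thing (illustrative only; the gist above does the proper comparison):

#include <cmath>
#include <iostream>
#include <JuceHeader.h>   // for juce::dsp::FastMathApproximations

int main()
{
    // Print std::tanh next to JUCE's approximation for increasingly large inputs;
    // the approximation is only intended for a limited range around zero.
    for (float x = 0.0f; x <= 10.0f; x += 1.0f)
        std::cout << x << '\t'
                  << std::tanh (x) << '\t'
                  << juce::dsp::FastMathApproximations::tanh (x) << '\n';

    return 0;
}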

beschulz · Aug 08 '19 10:08