TinyAD

Bug in the implementation of pow

Open · the13fools opened this issue 1 year ago · 2 comments

I'm not sure what the preferred way to contribute to this library is, but I'd like to suggest updating the autodiff implementation of pow to the following:

    friend Scalar pow(
            const Scalar& a,
            const int& e)
    {
        TINYAD_CHECK_FINITE_IF_ENABLED_AD(a);
        if constexpr (TINYAD_ENABLE_OPERATOR_LOGGING) TINYAD_DEBUG_VAR(__FUNCTION__);

        if ( e == 0 )
        {
            // a^0 is the constant 1: zero first and second derivative.
            return chain((PassiveT)1.0,
                        (PassiveT)0.0,
                        (PassiveT)0.0,
                        a);
        }
        else if ( e == 1 )
        {
            // a^1 is the identity: derivative 1, second derivative 0.
            return chain(a.val,
                        (PassiveT)1.0,
                        (PassiveT)0.0,
                        a);
        }
        else
        {
            // Build a^(e-2), a^(e-1), a^e from a single std::pow call.
            const PassiveT f2 = std::pow(a.val, e - 2);
            const PassiveT f1 = f2 * a.val;
            const PassiveT f = f1 * a.val;

            // d/dx x^e = e x^(e-1),  d^2/dx^2 x^e = e (e-1) x^(e-2)
            return chain(
                        f,
                        e * f1,
                        e * (e - 1) * f2,
                        a);
        }
    }
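
For reference, a quick sanity check of the e >= 2 branch (the numbers below are just my own arithmetic, not taken from TinyAD):

    // e = 3, a.val = 2:
    //   f                = 2^3          = 8
    //   e * f1           = 3 * 2^2      = 12   (matches d/dx x^3 = 3 x^2 at x = 2)
    //   e * (e - 1) * f2 = 3 * 2 * 2^1  = 12   (matches d^2/dx^2 x^3 = 6 x at x = 2)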

the13fools · Jan 02 '24 21:01

Hi Josh,

Thanks for the suggestion! Do you have an example where the original implementation fails? Or is this a performance optimization?

I think in the case e == 1 it should be chain(a.val, (PassiveT)1.0, (PassiveT)1.0, a). Let me know if you agree.

The best way to contribute is by creating a fork followed by a pull request. This way your contributions can be in your name.

Best,
Patrick

patr-schm · Jan 03 '24 09:01

I get NaNs for at least one of these cases with the default implementation. For e == 1, if the gradient is constant, the Hessian should be 0, no?
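
To spell out the reasoning, reading chain(value, d, dd, a) as "compose an outer function with value, first derivative d, and second derivative dd at a.val" (which is how the snippet above uses it):

    // Composing an outer function f with the active value a(x):
    //   g(x)   = f(a(x))
    //   g'(x)  = f'(a) * a'
    //   g''(x) = f''(a) * (a')^2 + f'(a) * a''
    //
    // For f(x) = x (the e == 1 case): f' = 1 and f'' = 0, so g'' = a'',
    // which is chain(a.val, (PassiveT)1.0, (PassiveT)0.0, a).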

I created a fork with the code above in the following commit: https://github.com/the13fools/TinyAD/commit/8d48534bb844afaa2adb44404c45df713e056bc6

This is the offending code in my codebase: https://github.com/the13fools/gpgpt/blob/main/tools/07_mint3d_v1/src/mint3D_hook/ElementVarsTinyAD.h#L425

Maybe you'll find the way I set it up interesting. Basically, I created a "TinyADHook", and then for the per-element lambda I created a class that gets instantiated each time, to save on boilerplate code. If you have performance suggestions, that would be great, as TinyAD is quite slow for my use case and it would be nice to speed it up some.
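
Roughly the pattern, as a minimal sketch (names like PerElementEnergy, n_vertices, and n_elements are placeholders, not the actual identifiers from the linked ElementVarsTinyAD.h):

    auto func = TinyAD::scalar_function<3>(TinyAD::range(n_vertices));
    func.add_elements<4>(TinyAD::range(n_elements), [&] (auto& element) -> TINYAD_SCALAR_TYPE(element)
    {
        using T = TINYAD_SCALAR_TYPE(element);
        // The helper class pulls its variables out of `element` and precomputes
        // shared quantities once, so the lambda body stays a one-liner.
        PerElementEnergy<T> energy(element);
        return energy.eval();
    });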

By the way, another question in this vein: is it possible to mix in your own gradients/Hessians in some convenient way? If not, I think that would be a really helpful feature to add, as it would at least offer a convenient way for me to manually speed up this code.

Another thing I would love is the ability to mark DOFs as constant and then have the framework generate selection matrices automatically. Right now I'm just setting elements as constant, but somehow this makes the system harder to solve than it needs to be.

the13fools · Jan 03 '24 17:01