
Automatic Differentiation

Results: 17 ad issues, sorted by recently updated

Many times, I can embed `Double` constants into functions to be differentiated by `ad` just by using `realToFrac`. As an example, I use `realToFrac` in place of `auto` in the example: ```...
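
For context, a minimal sketch of the constant-embedding pattern this issue describes; the names `withAuto` and `withRealToFrac` are only illustrative, but both `auto` and `realToFrac` lift a `Double` constant into the AD type here:

```haskell
import Numeric.AD (auto, diff)

-- The Double constant 2.5 is embedded into the function being
-- differentiated, once via `auto` and once via `realToFrac`.
withAuto, withRealToFrac :: Double
withAuto       = diff (\x -> auto 2.5                   * x ^ (2 :: Int)) 3
withRealToFrac = diff (\x -> realToFrac (2.5 :: Double) * x ^ (2 :: Int)) 3
-- both should evaluate to 15.0 (d/dx of 2.5 * x^2 at x = 3)
```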

I am using AD for gradient-based optimization and need better performance than I am currently getting. I noticed that some work has gone into improving the `.Double` specializations recently, so...
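
For reference, a minimal sketch of how the Double-specialised reverse mode is meant to be used, assuming the `Numeric.AD.Mode.Reverse.Double` module from recent `ad` releases; the `rosenbrock` objective is only an illustration, not the reporter's workload:

```haskell
import Numeric.AD.Mode.Reverse.Double (grad)

-- Illustrative objective; the Double-specialised reverse mode keeps the
-- primal values unboxed, which is usually what matters for speed.
rosenbrock :: Num a => [a] -> a
rosenbrock [x, y] = (1 - x) ^ (2 :: Int) + 100 * (y - x * x) ^ (2 :: Int)
rosenbrock _      = error "rosenbrock expects exactly two variables"

main :: IO ()
main = print (grad rosenbrock [0.5, 0.5])
```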

Does this package support matrix algebra operations? For instance, does it support HMatrix, Repa, or Accelerate, or some other way of getting gradients for functions involving matrices? I'm looking for...

improvement
major refactoring
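
`ad` itself works over `Traversable` containers rather than any particular matrix library, so a small dense matrix can be differentiated entry-wise. A minimal sketch, where the `Compose [] []` representation and `frobeniusSq` are just illustrative choices:

```haskell
import Data.Functor.Compose (Compose (..))
import Numeric.AD (grad)

-- Squared Frobenius norm of a small dense matrix, represented as a
-- Traversable container of entries (Compose [] [] ~ a list of rows).
frobeniusSq :: Num a => Compose [] [] a -> a
frobeniusSq = sum . fmap (^ (2 :: Int))

main :: IO ()
main = print (grad frobeniusSq (Compose [[1, 2], [3, 4]] :: Compose [] [] Double))
-- expected: Compose [[2.0,4.0],[6.0,8.0]], i.e. 2 * each entry
```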

If I want to give a custom or known gradient for a function, how can I do that in this library? (I don't want to autodifferentiate through this function.) I...

improvement

I was showing off nesting with the simplest example I could think of, oops. **tldr:** `ad` 4.3.2.1, `ghc` 8.0.1: `diff (\x -> diff (*x) 1) 1` gets a type error. ```` $ cabal...

wontfix
limitation
impossible
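
For anyone hitting the same error: the inner function closes over the outer variable, and the two `diff` calls carry different mode type parameters, so the outer value has to be lifted explicitly. A small sketch of the usual `auto` workaround (not the reporter's original code):

```haskell
import Numeric.AD (auto, diff)

-- d/dx [ d/dy (y * x) ] evaluated at x = 1; the outer x is lifted into
-- the inner mode with `auto`, which is what the failing example omits.
nested :: Double
nested = diff (\x -> diff (\y -> y * auto x) 1) 1   -- expected: 1.0
```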

The directional derivative operator `du` doesn't seem to handle the simplest case.

```
$ ghci
GHCi, version 7.6.3: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading...
```

bikeshedding
limitation
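
For reference, a sketch of how `du` is intended to be called, with each coordinate of the point paired with the matching component of the direction vector; the function and point here are illustrative:

```haskell
import Numeric.AD (du)

-- Directional derivative of f(x, y) = x * y at (1, 2) in direction (0, 1).
-- Each element pairs a coordinate of the point with the corresponding
-- coordinate of the direction; the result should be x = 1.0.
dirDeriv :: Double
dirDeriv = du (\[x, y] -> x * y) [(1, 0), (2, 1)]
```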

```haskell
import Data.Functor.Identity
import Numeric.AD (auto)
import Numeric.AD.Newton

ex :: [(Double, Identity Double)]
ex = constrainedDescent (const (auto 1)) cs (Identity 18.5)
  where
    cs = [ CC...
```

Are there any plans for a multi-variate Newton-Raphson or Halley root finder? If not, I might contribute one.
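
In the meantime, a multivariate Newton step can be assembled by hand from `jacobian'`. A minimal sketch for a 2x2 system, solving the linear step with Cramer's rule; `sys` and `newtonStep` are hypothetical helpers, not part of the library:

```haskell
import Numeric.AD (jacobian')

-- A small nonlinear system: x^2 + y^2 - 4 = 0 and x*y - 1 = 0.
sys :: Num a => [a] -> [a]
sys [x, y] = [x * x + y * y - 4, x * y - 1]
sys _      = error "sys expects exactly two unknowns"

-- One Newton step: jacobian' gives both f(v) and the Jacobian rows,
-- and the 2x2 linear system J * d = -f(v) is solved by Cramer's rule.
newtonStep :: [Double] -> [Double]
newtonStep v = case jacobian' sys v of
  [(f1, [a, b]), (f2, [c, d])] ->
    let det = a * d - b * c
        dx  = (-f1 * d + f2 * b) / det
        dy  = (-f2 * a + f1 * c) / det
    in  zipWith (+) v [dx, dy]
  _ -> error "expected a 2x2 system"

main :: IO ()
main = mapM_ print (take 6 (iterate newtonStep [2, 1]))
```

Iterating `newtonStep` from a reasonable starting point should converge quadratically near a root; a general n-by-n version would need a proper linear solver in place of Cramer's rule.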

It would be nice to be able to use AD with Accelerate types (like `Exp Double` or `Exp (Complex Double)`). I think the big issues are: 1. A lot of...

Consider the function:

> let f [m,b] = (6020.272727*m+b-23680.30303)^(2::Int) + (7254.196429*m+b-28807.61607)^(2::Int) + (6738.575342*m+b-26582.76712)^(2::Int) + (5464.658537*m+b-23894.34756)^(2::Int)

It's a smooth convex function and gradient descent should converge to `[m,b] = [2.911525576,7196.512447]`. See...
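
For reproduction, a sketch of how this objective can be fed to the library's optimiser, assuming `gradientDescent` from `Numeric.AD.Newton`; the starting point and iteration count are arbitrary:

```haskell
import Numeric.AD.Newton (gradientDescent)

-- The least-squares objective from the report (a line fit, so it is
-- smooth and convex); the expected minimiser is roughly [2.9115, 7196.5].
f :: Num a => [a] -> a
f [m, b] = (6020.272727 * m + b - 23680.30303) ^ (2 :: Int)
         + (7254.196429 * m + b - 28807.61607) ^ (2 :: Int)
         + (6738.575342 * m + b - 26582.76712) ^ (2 :: Int)
         + (5464.658537 * m + b - 23894.34756) ^ (2 :: Int)
f _ = error "f expects [m, b]"

-- Inspect the first few iterates produced by the optimiser.
main :: IO ()
main = mapM_ print (take 5 (gradientDescent f [0, 0]))
```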