
Jacobi elliptic functions and complete elliptic integral of the first kind

Open ettersi opened this issue 7 years ago • 17 comments

Implements the Jacobi elliptic functions and the complete elliptic integral of the first kind.

Both sets of functions work for real and complex arguments as well as all FloatXX and BigFloat types. The test sets involve comparison with Python's mpmath library (http://mpmath.org/) as well as checks that the results are within a few eps() of the exact result.
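For readers unfamiliar with the function being discussed: the complete elliptic integral of the first kind K(m) is commonly computed via the arithmetic-geometric mean, K(m) = π / (2·agm(1, √(1−m))). A minimal Python sketch of that formula (illustrative only; this is not the PR's Julia implementation, and the names `agm`/`ellipK` are just for this example):

```python
import math

def agm(a, g, tol=1e-15):
    """Arithmetic-geometric mean of a and g."""
    while abs(a - g) > tol * abs(a):
        a, g = (a + g) / 2, math.sqrt(a * g)
    return a

def ellipK(m):
    """Complete elliptic integral of the first kind, parameter m < 1."""
    if m >= 1:
        raise ValueError("real ellipK requires m < 1; use the complex continuation otherwise")
    return math.pi / (2 * agm(1.0, math.sqrt(1.0 - m)))

print(ellipK(0.0))  # pi/2 = 1.5707963267948966
print(ellipK(0.5))  # ~ 1.8540746773013719
```

The AGM iteration converges quadratically, which is why a handful of iterations suffices even at high precision.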

ettersi avatar Mar 05 '18 18:03 ettersi

Thanks! Is this based on the mpmath code? If so, that's fine as it's a compatible licence, but we need to acknowledge the original copyright.

simonbyrne avatar Mar 05 '18 19:03 simonbyrne

I skimmed through the mpmath codes (and a few others), but at most I took ideas, no code.

ettersi avatar Mar 05 '18 19:03 ettersi

Overall, this looks pretty good to me. It would be nice to have a few more comments, particularly as to how the generated functions work and why they're needed.

I'm sure @stevengj will have some thoughts as well.

simonbyrne avatar Mar 06 '18 00:03 simonbyrne

Actually, I am still slightly hesitant about all the generated functions. It would be nice if we could reduce their use a bit.

  • I think the ellipj_* functions could be written as simple iterators; does that impact performance?
  • shrinkm is a bit more difficult, but currently it is unstable for arbitrary-precision types like BigFloat (well, it should be, see my comment on puresqrt). Perhaps you could have it return tuples for fixed-precision types (Float64, Float32, etc.) and arrays for BigFloat?

simonbyrne avatar Mar 06 '18 00:03 simonbyrne

The main reason for the generated functions is to avoid heap allocation. The structure of the algorithms is to first compute k_i for i = 1:nsteps and then use them in the order reverse(1:nsteps). Without generated functions, I would have to either

  • allocate an array on the heap, or
  • recompute them from the last k_i.

I don't like the heap allocation because its performance impact can be a bit unpredictable, and recomputing them is probably slower and could affect numerical accuracy. Because of the "up and down" structure of the algorithm, I am not sure what would be gained if I implemented the ellipj_shrinkm and ellipj_growm functions as iterators.

Regarding the instability of BigFloats: as far as I understand, the only difference is that eps(BigFloat) is no longer pure, hence the number of Landen transformation steps has to be computed at runtime and the calls in ellipj_dispatch have to be resolved at runtime rather than at compile time. That seems like a small price to pay, though, especially since the alternative would be heap allocation of the k_i, which would probably incur a similar performance penalty.

Making the shrinkm function switch between returning tuples or arrays would either lead to code duplication or at least complicate the rather simple code we have at the moment. Both alternatives seem undesirable for maintenance.

If you are worried about the number of methods that will be compiled because of these generated functions, I don't think that will be much of a concern. Most users probably don't bother changing the precision of BigFloat, and even if they do, the Landen transformation converges quadratically, so even nsteps(1e-300, Complex) is just 8.

There is a slight hiccup in that currently we compute the number of Landen steps twice for BigFloats falling into the last two branches of ellipj_dispatch. I'll fix that.
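To give a concrete sense of that quadratic convergence, here is a hedged Python sketch of counting descending Landen steps (this is not the PR's nsteps function, and the exact count depends on the tolerance; the recurrence k ← (1−√(1−k²))/(1+√(1−k²)) ≈ k²/4 roughly squares the modulus each step):

```python
import math

def landen_steps(k, tol):
    """Count descending Landen steps until the modulus drops below tol."""
    steps = 0
    while k > tol:
        kp = math.sqrt(1.0 - k * k)   # complementary modulus
        k = (1.0 - kp) / (1.0 + kp)   # descending Landen: k -> ~k^2/4
        steps += 1
    return steps

print(landen_steps(0.9, 1e-12))  # quadratic convergence: only a handful of steps
```

Because the exponent of k roughly doubles per step, tightening the tolerance by hundreds of orders of magnitude adds only a few extra steps.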

ettersi avatar Mar 06 '18 10:03 ettersi

What I meant was

function ellipj_growm(sn,cn,dn, k)
    # [1, Sec 16.12] / https://dlmf.nist.gov/22.7.i
    for kk in reverse(k)
        sn,cn,dn = (1+kk)*sn/(1+kk*sn^2), cn*dn/(1+kk*sn^2), (1-kk*sn^2)/(1+kk*sn^2)
                     # ^ Use [1, 16.9.1]. Idea taken from [2]
    end
    return sn,cn,dn
end

Basically, generated functions should be considered a weapon of last resort: they are powerful, but do add considerable complexity, both for the compiler (i.e. they make precompilation more difficult) and for humans reading the code.

As for BigFloats: these are already heap allocated, so array vs tuple is a bit of a wash for them (though, again, benchmarking would be useful).
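As a numerical sanity check on the recurrence in the loop above, a single ascending step preserves the Jacobi identities: given inputs at modulus kk (so sn² + cn² = 1 and dn² = 1 − kk²·sn²), the outputs satisfy sn² + cn² = 1 and dn² = 1 − m₁·sn² with the new parameter m₁ = 4·kk/(1+kk)² (cf. DLMF 22.7(i)). A Python sketch, purely illustrative:

```python
import math

def growm_step(sn, cn, dn, kk):
    """One ascending Landen step (cf. [1, 16.12] / DLMF 22.7(i))."""
    d = 1.0 + kk * sn**2
    return (1.0 + kk) * sn / d, cn * dn / d, (1.0 - kk * sn**2) / d

# Inputs at modulus kk: any sn in (0,1) with the matching cn and dn.
kk = 0.3
s = 0.6
sn, cn, dn = growm_step(s, math.sqrt(1 - s**2), math.sqrt(1 - kk**2 * s**2), kk)

m1 = 4 * kk / (1 + kk)**2        # parameter after the step
print(sn**2 + cn**2)             # ~ 1
print(dn**2 - (1 - m1 * sn**2))  # ~ 0
```

Both identities hold exactly in the algebra (up to rounding), which makes this a cheap regression check for the loop body independent of any reference values.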

simonbyrne avatar Mar 06 '18 18:03 simonbyrne

Right, I agree I overused @generated for these two functions. Sorry about that, fixed it.

I also agree that there probably isn't much point in avoiding heap allocations for BigFloats, but the types we want to optimise for are Float64 and Float32 and there it probably matters. And I'd still say there is not much point in having two versions of the same code because of this.

ettersi avatar Mar 06 '18 21:03 ettersi

The only remaining issues with this PR seem to be about the names of the functions. Currently, we have

  • ellipj following Matlab and SciPy
  • jpq for p,q in (s,c,d,n). This is my own invention.
  • ellipK compared to ellipticK (Matlab) and ellipk (SciPy).

Some more feedback regarding this would be helpful.

There was also some discussion regarding my use of generated functions and the @pure annotation. The current usage seems both correct and reasonable to me, but please let me know if any doubts remain.

ettersi avatar Mar 20 '18 20:03 ettersi

Is there a timeline for this to merge?

MasonProtter avatar Mar 29 '18 02:03 MasonProtter

I couldn't find an appropriate place in the code to comment on this, so I'll do it here if that's okay. Currently the algorithm doesn't know to use the analytic continuation for real-valued arguments > 1:

julia> ellipK(2.0)
ERROR: DomainError:
Stacktrace:
 [1] ellipiK_agm(::Float64) at /Users/mason/.julia/v0.6/SpecialFunctions/src/elliptic.jl:207
 [2] ellipK(::Float64) at /Users/mason/.julia/v0.6/SpecialFunctions/src/elliptic.jl:214

whereas giving a complex argument will cause it to use the correct analytic continuation

julia> ellipK(2.0+0im)
1.3110287771460596 - 1.3110287771460596im

Is this intended behaviour?
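For reference, the complex value quoted above agrees with the standard continuation K(m) = (K(1/m) − i·K(1 − 1/m))/√m for real m > 1 (with the branch matching the output above). A Python check using the AGM formula K(m) = π/(2·agm(1, √(1−m))); the names here are illustrative, not the PR's API:

```python
import cmath
import math

def agm(a, g, tol=1e-15):
    """Arithmetic-geometric mean of a and g."""
    while abs(a - g) > tol * abs(a):
        a, g = (a + g) / 2, math.sqrt(a * g)
    return a

def K(m):
    """Complete elliptic integral of the first kind (real parameter m < 1)."""
    return math.pi / (2 * agm(1.0, math.sqrt(1.0 - m)))

m = 2.0
K_cont = (K(1 / m) - 1j * K(1 - 1 / m)) / cmath.sqrt(m)
print(K_cont)  # ~ 1.3110287771460596 - 1.3110287771460596j, matching ellipK(2.0+0im)
```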

MasonProtter avatar Mar 29 '18 02:03 MasonProtter

This is intentional and consistent with Base.sqrt:

julia> sqrt(-1)
ERROR: DomainError:
sqrt will only return a complex result if called with a complex argument. Try sqrt(complex(x)).
Stacktrace:
 [1] sqrt(::Int64) at ./math.jl:434

We could, and possibly should, print a similar error message for ellipK, but I couldn't quite figure out how to do this on 0.6. Advice from a more experienced Julia user would be appreciated. I guess on 0.7 I should just call Base.Math.throw_complex_domainerror()?

ettersi avatar Mar 29 '18 07:03 ettersi

Repeating Mason Protter's question: Is there a timeline for this to merge?

ettersi avatar Jun 28 '18 03:06 ettersi

If you ever do elliptic integrals of the second kind, please tag me so I can benchmark and compare with https://github.com/nolta/Elliptic.jl, which is my current dependency.

(assuming of course they ever get merged)

Datseris avatar Sep 08 '18 20:09 Datseris

Once again, is there any timeline on getting this merged? If not, why?

ettersi avatar Feb 21 '19 16:02 ettersi

bump

jebej avatar Jul 14 '20 20:07 jebej

Here's a list of remaining issues with this PR:

  • [ ] Decide on function names (see https://github.com/JuliaMath/SpecialFunctions.jl/pull/79#pullrequestreview-101299389). Fixable in 15 minutes once the decision is made.
  • [ ] Bring this PR up to date with master, and update to Julia 1. Should be doable within an hour or so.
  • [ ] Sort out the @pure (see https://github.com/JuliaMath/SpecialFunctions.jl/pull/79#pullrequestreview-448496503). Might need a couple of hours for proper benchmarking and coming up with a fix.
  • [ ] Decide whether we want to keep this implementation of ellipk or the one from https://github.com/JuliaMath/SpecialFunctions.jl/pull/135.

Regarding the last point: the advantage of this implementation is that it supports complex arguments and arbitrary number types. On the other hand, https://github.com/JuliaMath/SpecialFunctions.jl/pull/135 provides a uniform implementation for both ellipk and ellipe. Also, we should check whether there's a significant performance difference between the two codes.

Sorting this out might take a couple of days: reading through the literature, benchmarking, maybe implementing ellipe in a style similar to the one in this PR, etc.

If anyone is willing to tackle these issues, please feel free to do so. I'd be happy to give a hand, but I won't invest any further time unless there's a clear path to how this PR might finally get merged.

ettersi avatar Jul 16 '20 09:07 ettersi

Sorry I haven't chimed in here, but @simonbyrne's comments all seem pretty reasonable to me, and I had been waiting for them to be addressed before I looked at this PR closely.

stevengj avatar Dec 15 '20 15:12 stevengj