ApproxFun.jl
Improve use of AF in BifurcationKit
Hi,
This is not really an issue... I was wondering if you could improve the tutorial by using more idiomatic ApproxFun.
On a side note, if you execute the code, you will find that the nonlinear system is difficult to solve near the second fold, and I don't get why.
Doing so would provide an analog of followpath (Chebfun).
I think this would be cleaner with OrthogonalPolynomialsQuasi.jl, which already has vector-like syntax...
PS Don't say it's collocation!!!
Thank you! The docs of OrthogonalPolynomialsQuasi.jl are quite sparse. How do you encode BCs, nonlinear terms... by hand?
Hi,
I am not sure this is the right place, but I followed your suggestion of using OrthogonalPolynomialsQuasi.jl.
I guess I can do a Laplacian on (-1,1) with Dirichlet BC like
using OrthogonalPolynomialsQuasi
n = 100
T = Chebyshev()
D = Derivative(axes(T,1))
D2a = (D^2 * T)
x_cheb = cos.((1:n-2) .* π ./ (n-1)) # interior Chebyshev points
Δ = [T[-1,1:n]';
D2a[x_cheb,1:n] + T[x_cheb,1:n];
T[1,1:n]']
But it raises two questions for me:
- how do I compute the Laplacian for functions on, say, (0,1)? Using x_cheb = 0:0.1:1?
- how do I compute a 2D Laplacian with Dirichlet (or Neumann) BC? Should I use kron or is there a more idiomatic way?
Cool! Some comments:
- OrthogonalPolynomialsQuasi.jl has been rebranded as ClassicalOrthogonalPolynomials.jl to make it more accessible. Your code should work as is.
- Better to use ChebyshevGrid{1}(n-2), which is lazy, or equivalently ClassicalOrthogonalPolynomials.grid
- Might be cleaner to just make the basis finite-dimensional in the first instance: Chebyshev()[:,1:n]
So putting together these suggestions one gets:
using ClassicalOrthogonalPolynomials
n = 100
T = Chebyshev()[:,1:n]
D = Derivative(axes(T,1))
D2a = (D^2 * T)
x_cheb = ChebyshevGrid{1}(n-2) # interior Chebyshev points
Δ = [T[-1,:]';
D2a[x_cheb,:] + T[x_cheb,:];
T[1,:]']
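As a quick usage sketch (an addition, assuming the setup above): the rows of Δ encode u(-1), (u'' + u) at the interior points, and u(1), so the boundary-value problem u'' + u = 1 with u(±1) = 0, whose closed-form solution is 1 - cos(x)/cos(1), can be solved as:
using LinearAlgebra
rhs = [0.0; ones(n-2); 0.0]                # u(-1) = 0, f ≡ 1 at the interior points, u(1) = 0
c = Δ \ rhs                                # Chebyshev coefficients of the solution
dot(T[0.5, :], c) ≈ 1 - cos(0.5)/cos(1)    # true, to spectral accuracy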
how do I compute the Laplacian for functions on, say, (0,1)? Using x_cheb = 0:0.1:1?
You can create an affine-mapped basis via chebyshevt(0..1). Thus the code becomes:
using ClassicalOrthogonalPolynomials
import ClassicalOrthogonalPolynomials: grid
n = 100
T = chebyshevt(0..1)[:,1:n]
D = Derivative(axes(T,1))
D2a = (D^2 * T)
x_cheb = grid(T[:,1:n-2]) # interior Chebyshev points
Δ = [T[begin,:]';
D2a[x_cheb,:] + T[x_cheb,:];
T[end,:]']
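As a sanity check of the mapped setup (an addition, assuming the code above): the problem u'' + u = 1 with u(0) = u(1) = 0 has closed-form solution 1 - cos(x) + (cos(1) - 1)sin(x)/sin(1):
using LinearAlgebra
rhs = [0.0; ones(n-2); 0.0]        # u(0) = 0, f ≡ 1 at the interior points, u(1) = 0
c = Δ \ rhs                        # coefficients in the mapped basis
uexact(x) = 1 - cos(x) + (cos(1) - 1) * sin(x) / sin(1)
dot(T[0.5, :], c) ≈ uexact(0.5)    # true, to spectral accuracy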
how do I compute a 2D Laplacian with Dirichlet (or Neumann) BC? Should I use kron or is there a more idiomatic way?
This isn't built-in yet, but you can of course do Kronecker products on the matrices themselves. Though, as I'm sure you're aware, collocation in rectangles is a bit nasty with the corners. If you have zero Dirichlet conditions, I'd recommend incorporating the conditions into the basis and using a p-FEM. In 1D this would look like:
julia> P = JacobiWeight(1,1) .* Jacobi(1,1);
julia> D = Derivative(axes(P,1));
julia> n = 10; D² = -((D*P)'*(D*P))[1:n,1:n]
10×10 BandedMatrix{Float64} with bandwidths (0, 0):
-2.66667 ⋅ ⋅ ⋅ … ⋅ ⋅ ⋅
⋅ -6.4 ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ -10.2857 ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ -14.2222 ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ … ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ -30.1176 ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ -34.1053 ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ -38.0952
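Note that the zero Dirichlet conditions are built into this basis: each element of P is (1 - x²) times a (1,1)-Jacobi polynomial, so it vanishes at x = ±1. A quick check (an addition, assuming the P above):
P[1.0, 1:3]    # == [0.0, 0.0, 0.0]: the (1 - x^2) weight kills the endpoint values
P[-1.0, 1:3]   # likewise zero at the left endpoint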
So in 2D we would get the Kronecker product with the mass matrix:
julia> M = (P'P)[1:n,1:n]
10×10 BandedMatrix{Float64} with bandwidths (2, 2):
1.06667 0.0 -0.228571 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
0.0 0.609524 5.55112e-17 -0.203175 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
-0.228571 0.0 0.457143 0.0 -0.17316 ⋅ ⋅ ⋅ ⋅ ⋅
⋅ -0.203175 0.0 0.369408 0.0 -0.149184 ⋅ ⋅ ⋅ ⋅
⋅ ⋅ -0.17316 -1.38778e-17 0.3108 2.77556e-17 -0.130536 ⋅ ⋅ ⋅
⋅ ⋅ ⋅ -0.149184 0.0 0.268531 -2.77556e-17 -0.115837 ⋅ ⋅
⋅ ⋅ ⋅ ⋅ -0.130536 -4.16334e-17 0.236501 5.55112e-17 -0.104025 ⋅
⋅ ⋅ ⋅ ⋅ ⋅ -0.115837 1.38778e-17 0.211352 0.0 -0.0943535
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ -0.104025 0.0 0.191066 1.38778e-17
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ -0.0943535 -1.38778e-17 0.174349
julia> Δ = kron(D², M) + kron(M,D²)
100×100 BandedMatrix{Float64} with bandwidths (20, 20):
-5.68889 0.0 0.609524 0.0 0.0 0.0 … ⋅ ⋅ ⋅ ⋅ ⋅
0.0 -8.45206 -1.4803e-16 0.541799 0.0 0.0 ⋅ ⋅ ⋅ ⋅ ⋅
0.609524 0.0 -12.1905 0.0 0.46176 0.0 ⋅ ⋅ ⋅ ⋅ ⋅
0.0 0.541799 0.0 -16.1555 0.0 0.397824 ⋅ ⋅ ⋅ ⋅ ⋅
0.0 0.0 0.46176 3.70074e-17 -20.2227 -7.40149e-17 ⋅ ⋅ ⋅ ⋅ ⋅
0.0 0.0 0.0 0.397824 0.0 -24.3469 … ⋅ ⋅ ⋅ ⋅ ⋅
0.0 0.0 0.0 0.0 0.348096 1.11022e-16 ⋅ ⋅ ⋅ ⋅ ⋅
0.0 0.0 0.0 0.0 0.0 0.308899 ⋅ ⋅ ⋅ ⋅ ⋅
0.0 0.0 0.0 0.0 0.0 0.0 ⋅ ⋅ ⋅ ⋅ ⋅
0.0 0.0 0.0 0.0 0.0 0.0 ⋅ ⋅ ⋅ ⋅ ⋅
0.0 0.0 0.0 0.0 0.0 0.0 … ⋅ ⋅ ⋅ ⋅ ⋅
0.0 0.0 0.0 0.0 0.0 0.0 ⋅ ⋅ ⋅ ⋅ ⋅
0.0 0.0 0.0 0.0 0.0 0.0 ⋅ ⋅ ⋅ ⋅ ⋅
0.0 0.0 0.0 0.0 0.0 0.0 ⋅ ⋅ ⋅ ⋅ ⋅
0.0 0.0 0.0 0.0 0.0 0.0 ⋅ ⋅ ⋅ ⋅ ⋅
0.0 0.0 0.0 0.0 0.0 0.0 … ⋅ ⋅ ⋅ ⋅ ⋅
0.0 0.0 0.0 0.0 0.0 0.0 ⋅ ⋅ ⋅ ⋅ ⋅
0.0 0.0 0.0 0.0 0.0 0.0 ⋅ ⋅ ⋅ ⋅ ⋅
0.0 0.0 0.0 0.0 0.0 0.0 ⋅ ⋅ ⋅ ⋅ ⋅
⋮ ⋮ ⋱ ⋮
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.0 0.0 0.0 0.0 0.0
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.0 0.0 0.0 0.0 0.0
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.0 0.0 0.0 0.0 0.0
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ … -3.07446e-16 0.0 0.0 0.0 0.0
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.0 -3.62673e-16 0.0 0.0 0.0
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.0 0.0 -4.17966e-16 0.0 0.0
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.0 0.0 0.0 -4.73306e-16 0.0
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.0 0.0 0.0 0.0 -5.28678e-16
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ … 0.0 0.0 0.0 0.0 0.0
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.0 0.0 0.0 0.0 0.0
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.0 0.0 0.0 0.0 0.0
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 5.68321 0.0 0.0 0.0 0.0
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ -1.05736e-15 4.9728 0.0 0.0 0.0
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ … -14.0923 1.05736e-15 4.41284 0.0 0.0
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 1.58603e-15 -13.5659 -2.11471e-15 3.96285 0.0
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 4.41284 -5.28678e-16 -13.3025 0.0 3.59442
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.0 3.96285 0.0 -13.2249 -5.28678e-16
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.0 0.0 3.59442 5.28678e-16 -13.2837
To actually solve a Poisson problem we would need a 2D Legendre transform, which isn't clean yet but I can clean it up. For now I'll just specify the matrix of coefficients:
julia> C = zeros(n,n)
10×10 Matrix{Float64}:
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
julia> C[1:3,1:3] = randn(3,3)
3×3 Matrix{Float64}:
0.339085 0.448758 0.248473
1.71472 1.06902 -0.254599
0.373289 0.0294908 0.674605
julia> R = (P'*Legendre())[1:n,1:n] # <P', f>
10×10 BandedMatrix{Float64} with bandwidths (0, 2):
1.33333 0.0 -0.266667 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ 0.533333 -5.92119e-17 -0.228571 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ 0.342857 0.0 -0.190476 ⋅ ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ 0.253968 0.0 -0.161616 ⋅ ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ 0.20202 0.0 -0.13986 ⋅ ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ 0.167832 -2.92806e-17 -0.123077 ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.14359 0.0 -0.109804 ⋅
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.12549 0.0 -0.0990712
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.111455 0.0
⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 0.100251
julia> u = Δ \ (kron(R,R) * vec(C));
julia> U = reshape(u, n, n)
10×10 Matrix{Float64}:
-0.0784928 -0.0386516 -0.0114528 -0.00177565 -0.00103187 -0.000129925 -0.000166707 -2.24075e-5 -3.38429e-5 -5.00882e-6
-0.152645 -0.040927 -0.0212087 -0.00585697 -0.00267093 -0.00072795 -0.000385938 -0.000127209 -7.64141e-5 -2.88983e-5
-0.0161378 -0.00738356 -0.0167529 -0.00236663 -0.00373469 -0.000483824 -0.000733862 -0.00010039 -0.000154213 -2.29944e-5
-0.00660401 -0.00585697 -0.00732757 -0.00370533 -0.00300111 -0.00119845 -0.000837998 -0.000318702 -0.000205647 -8.09364e-5
-0.00114762 -0.000913581 -0.00377716 -0.000988462 -0.00249931 -0.000476317 -0.00096728 -0.000162483 -0.00028081 -4.55754e-5
-0.000420788 -0.00072795 -0.00150013 -0.00119845 -0.00145794 -0.000823485 -0.000765272 -0.000365527 -0.000271314 -0.000121119
-0.000169337 -0.000128855 -0.000739664 -0.000276393 -0.000969489 -0.000250987 -0.000654277 -0.000137456 -0.00027221 -5.10094e-5
-6.92015e-5 -0.000127209 -0.000309163 -0.000318702 -0.000497874 -0.000365527 -0.000420005 -0.00024918 -0.000208391 -0.000109408
-3.39967e-5 -2.52059e-5 -0.000154847 -6.76856e-5 -0.000281441 -8.90552e-5 -0.000272425 -6.82826e-5 -0.000148262 -3.18998e-5
-1.53629e-5 -2.88983e-5 -7.05142e-5 -8.09364e-5 -0.000139564 -0.000121119 -0.000155965 -0.000109408 -9.74417e-5 -5.92089e-5
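To spot-check the solution one can evaluate the weighted basis at a point and contract with U (a sketch, an addition assuming the setup above; the operator is symmetric in x and y, so the ordering of the two factors is immaterial here):
x0, y0 = 0.1, 0.2
u0 = P[x0, 1:n]' * U * P[y0, 1:n]   # u(x0, y0)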
There's clearly still a lot to do to make this cleaner though... and if you have boundary conditions it's a lot trickier. (ApproxFun's approach works well in ∞ dimensions but is not so clean in finite dimensions. There's also the rank-2 stuff I did with @ajt60gaibb which could be re-implemented. On triangles we have a clean p-FEM, started in MultivariateOrthogonalPolynomials.jl, but boundary conditions aren't implemented there yet.)
Wow!! Thanks a lot for the detailed comment...
I think I can change the tutorial on the Chan problem now by featuring ClassicalOrthogonalPolynomials.jl in 1D.
Though as I'm sure you're aware collocation in rectangles is a bit nasty with the corners.
No, thank you! I guess the regularity of the domain matters for this smooth approximation. I also guess it would be problematic in the case of Neumann BC, wouldn't it?
To actually solve a Poisson problem we would need a 2D Legendre transform
I don't get this. Do you have a reference, so I don't bother you too much?
It's actually pretty easy: first note that the 2D Chebyshev transform can be constructed via:
julia> using FastTransforms, ClassicalOrthogonalPolynomials
julia> x = ChebyshevGrid{1}(10);
julia> n,m = 3,4; f = (x,y) -> chebyshevt(n, x) * chebyshevt(m, y);
julia> chebyshevtransform(f.(x', x))
10×10 Matrix{Float64}:
0.0 1.79736e-18 0.0 1.62102e-17 0.0 9.42055e-18 0.0 1.70874e-18 0.0 -9.27717e-18
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 1.11341e-18 0.0 7.41045e-17 0.0 -4.09803e-19 0.0 1.07597e-17 0.0 5.84104e-19
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 5.90641e-17 0.0 1.0 0.0 1.50729e-16 0.0 2.84217e-16 0.0 8.12948e-17
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 -2.19853e-17 0.0 3.47114e-16 0.0 -1.88411e-17 0.0 -1.22156e-17 0.0 -1.4348e-17
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 6.63185e-18 0.0 2.18604e-16 0.0 1.14231e-17 0.0 -4.61608e-19 0.0 5.85226e-18
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
The 2D Legendre transform is then best done using cheb2leg, with the workaround in https://github.com/JuliaApproximation/FastTransforms.jl/issues/135 for 2D:
julia> function cheb2leg2d(A::AbstractMatrix)
B = cheb2leg(A); cheb2leg(B')'
end
cheb2leg2d (generic function with 1 method)
julia> n,m = 3,4; f = (x,y) -> legendrep(n, x) * legendrep(m, y)
#17 (generic function with 1 method)
julia> cheb2leg2d(chebyshevtransform(f.(x',x)))
10×10 adjoint(::Matrix{Float64}) with eltype Float64:
0.0 -2.15114e-17 0.0 9.6853e-17 0.0 1.35234e-17 0.0 4.64116e-18 0.0 -3.28115e-18
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 6.13301e-17 0.0 -1.44889e-16 0.0 -3.0644e-17 0.0 7.0058e-17 0.0 1.17616e-18
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 -2.34741e-17 0.0 1.0 0.0 -5.64224e-17 0.0 1.4927e-16 0.0 3.69325e-16
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 -4.39458e-18 0.0 2.26976e-16 0.0 -6.76889e-17 0.0 -1.92562e-17 0.0 -4.97304e-17
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 -1.88093e-17 0.0 4.23769e-16 0.0 4.91028e-18 0.0 1.72571e-17 0.0 1.0728e-17
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
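For the reverse direction one can mirror the same workaround with leg2cheb (a sketch: the 2D wrapper below is hypothetical; only the 1D leg2cheb is FastTransforms API):
# hypothetical 2D wrapper, mirroring cheb2leg2d above
function leg2cheb2d(A::AbstractMatrix)
    B = leg2cheb(A); leg2cheb(B')'
end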
OK, I have this working:
using Revise
using BifurcationKit, LinearAlgebra, Plots, Setfield, Parameters, ClassicalOrthogonalPolynomials, ForwardDiff
N(x; a = 0.5, b = 0.01) = 1 + (x + a*x^2)/(1 + b*x^2)
function F_chan(x, p)
@unpack α, β, Δ, T = p
f = Δ * x
z = T * x
f[2:end-1] .+= @views α .* N.(z, b = β)
f[begin] -= β
f[end] -= β
return f
end
Jmat(x,p) = ForwardDiff.jacobian(z->F_chan(z, p), x)
n = 100
T = chebyshevt(0..1)[:,1:n]
D = Derivative(axes(T,1))
D2a = (D^2 * T)
x_cheb = ClassicalOrthogonalPolynomials.grid(T[:,1:n-2]) # interior Chebyshev points
Δ = [T[begin,:]';
D2a[x_cheb,:];
T[end,:]']
par = (α = 3.3, β = 0.01, Δ = Δ, T = T[x_cheb,:])
sol = -[(i-1)*(n-i)/n^2+0.1 for i=1:n]
optnewton = NewtonPar(tol = 1e-11, verbose = true)
# this takes about 63.59k allocations
out, hist, flag = @time newton( F_chan, Jmat, sol, par, optnewton)
If you think it can be improved, please go ahead and I will then post it.
I am not sure about how I handled the nonlinear term; do you agree with it?
Does this work?
julia> T = Chebyshev(); x = axes(T, 1); f = T * (T \ exp.(x))
ChebyshevT{Float64} * [1.2660658777520084, 1.1303182079849703, 0.27149533953407656, 0.04433684984866379, 0.0054742404420936785, 0.0005429263119139232, 4.497732295427654e-5, 3.19843646253781e-6, 1.992124804817033e-7, 1.1036771869970875e-8 … ]
julia> sinf = T * (T \ sin.(f))
ChebyshevT{Float64} * [0.6489869376927372, 0.16155034616969638, -0.2298835446182125, -0.13385842579378535, -0.03597173322160851, -0.0032284113891991517, 0.0017456898371917288, 0.001042451664704653, 0.0003258985223400549, 6.738862804269279e-5 … ]
julia> sinf[0.1] ≈ sin(f[0.1])
true
Indeed...
Great. I would like to add it so that this is done automatically for e.g. sin.(f), though more thought is needed on whether this is a good idea or whether it's better to leave it un-expanded and have the "user" do it manually.
Then ApproxFun.jl would be the more user-friendly front end that takes care of this.
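In the meantime, a user-level helper following the pattern above could look like this (a sketch; reexpand is a made-up name, not package API):
# re-expand g applied to an expansion f back into the basis T,
# i.e. the T * (T \ g.(f)) pattern shown above
reexpand(T, g, f) = T * (T \ g.(f))
sinf = reexpand(T, sin, f)   # equivalent to T * (T \ sin.(f))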