Adaptive explicit extrapolation #714

Merged 31 commits on Apr 27, 2019.

Commits (31 total; the diff below shows changes from 14 of these commits)
b447213
Adapted algorithms.jl, alg_utils.jl, OrdinaryDiffEg.jl for new algori…
Mar 1, 2019
a2f6adb
Added first version of caches for ExtrapolationMidpointDeuflhard in e…
Mar 3, 2019
2990b2c
Minor corrections in alg_utils.jl and extrapolation_caches.jl
Mar 4, 2019
2b51658
Micro-commit of implementing ExtrapolationMidpointDeuflhard.
Mar 10, 2019
78c8bdb
First solver
Mar 20, 2019
68fbad8
Debugged first version of
Mar 21, 2019
a49313b
Cleaned code and comments
Mar 22, 2019
3374c35
Added ExtrapolationMidpointDeuflhard with mutable cache
Mar 28, 2019
257ab84
Added handling of non-adaptive behavior.
Mar 29, 2019
5a1c2d9
Polished notation
Mar 29, 2019
e7ceaaa
Added ExtrapolationMidpointHairerWanner algorithm
Apr 5, 2019
781ac67
Added warnings to the constructors of ExtrapolationMidpointDeuflhard …
Apr 5, 2019
4d48247
Export ExtrapolationMidpointHairerWanner in OrdinaryDiffEq.jl
Apr 5, 2019
3fb696b
cleaned up test
Apr 6, 2019
36be1d9
Corrected `get_current_adaptive_order`
Apr 7, 2019
56150bc
Refactored ExtrapolationMidpointDeuflhardConstantCache
Apr 7, 2019
a2e15d6
Microcommit, WIP
Apr 7, 2019
e7f9383
Fixed microcommit
ChrisRackauckas Apr 27, 2019
0012a27
Refactored all ExtrapolationMidpoint* caches
Apr 12, 2019
3bb27e5
Cleaned repository
Apr 12, 2019
0fb345f
Changed u_tilde to utilde
Apr 12, 2019
0d51a01
Corrected refactoring og ExplicitMidpoint* caches
Apr 15, 2019
24c8cd3
Cleaned up comments
Apr 15, 2019
3a2a4a9
Added first version of convergence tests
Apr 15, 2019
d99152f
Cleaned ode_extrapolation_tests.jl
Apr 19, 2019
9b3ed55
Added handling of invalid inputs for the argument `sequence_symbol`
Apr 19, 2019
aab7438
Corrected selection of minimal possible order for ExtrapolationMidpo…
Apr 22, 2019
2b4626f
Provided test problems where the timespan is given by BigFloats
Apr 22, 2019
ed420fb
Merge refactors into extrapolation
ChrisRackauckas Apr 27, 2019
e080e64
Use @..
ChrisRackauckas Apr 27, 2019
ec087d7
Merge branch 'master' into explicit_extrapolation
ChrisRackauckas Apr 27, 2019
2 changes: 1 addition & 1 deletion src/OrdinaryDiffEq.jl
@@ -222,7 +222,7 @@ module OrdinaryDiffEq
export AutoSwitch, AutoTsit5, AutoDP5,
AutoVern6, AutoVern7, AutoVern8, AutoVern9

export RichardsonEuler
export RichardsonEuler, ExtrapolationMidpointDeuflhard, ExtrapolationMidpointHairerWanner

export NLNewton, NLAnderson, NLFunctional
end # module
13 changes: 13 additions & 0 deletions src/alg_utils.jl
@@ -105,6 +105,11 @@ get_current_alg_order(alg::QNDF,cache) = cache.order
get_current_adaptive_order(alg::QNDF,cache) = cache.order
get_current_adaptive_order(alg::OrdinaryDiffEqExtrapolationVarOrderVarStepAlgorithm,cache) = cache.cur_order
get_current_alg_order(alg::OrdinaryDiffEqExtrapolationVarOrderVarStepAlgorithm,cache) = cache.cur_order
get_current_alg_order(alg::ExtrapolationMidpointDeuflhard,cache) = 2(cache.n_curr + 1)
get_current_adaptive_order(alg::ExtrapolationMidpointDeuflhard,cache) = 2cache.n_curr + 1
AlthausKonstantin marked this conversation as resolved.
get_current_alg_order(alg::ExtrapolationMidpointHairerWanner,cache) = 2(cache.n_curr + 1)
get_current_adaptive_order(alg::ExtrapolationMidpointHairerWanner,cache) = 2cache.n_curr + 1


#alg_adaptive_order(alg::OrdinaryDiffEqAdaptiveAlgorithm) = error("Algorithm is adaptive with no order")
get_current_adaptive_order(alg::OrdinaryDiffEqAlgorithm,cache) = alg_adaptive_order(alg)
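
Read off directly from the methods added above: for the current extrapolation index n_curr, the accepted extrapolated value has order 2(n_curr + 1) and the order reported for step-size control is 2n_curr + 1. A minimal sketch, with n_curr as a bare illustrative integer rather than the cache field:

    n_curr = 3                                # illustrative value, not the cache field
    current_alg_order      = 2(n_curr + 1)    # order of the accepted extrapolated value: 8
    current_adaptive_order = 2n_curr + 1      # order used for step-size control: 7
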
@@ -305,6 +310,8 @@ alg_order(alg::MEBDF2) = 2

alg_maximum_order(alg) = alg_order(alg)
alg_maximum_order(alg::CompositeAlgorithm) = maximum(alg_order(x) for x in alg.algs)
alg_maximum_order(alg::ExtrapolationMidpointDeuflhard) = 2(alg.n_max+1)
alg_maximum_order(alg::ExtrapolationMidpointHairerWanner) = 2(alg.n_max+1)

alg_adaptive_order(alg::ExplicitRK) = alg.tableau.adaptiveorder
alg_adaptive_order(alg::OrdinaryDiffEqAlgorithm) = alg_order(alg)-1
@@ -339,16 +346,22 @@ beta2_default(alg::FunctionMap) = 0
beta2_default(alg::DP8) = 0//1
beta2_default(alg::DP5) = 4//100
beta2_default(alg::DP5Threaded) = 4//100
beta2_default(alg::ExtrapolationMidpointDeuflhard) = 0//1
Member: Is there an issue with PI adaptivity here?

beta2_default(alg::ExtrapolationMidpointHairerWanner) = 0//1

beta1_default(alg::OrdinaryDiffEqAlgorithm,beta2) = 7//(10alg_order(alg))
beta1_default(alg::FunctionMap,beta2) = 0
beta1_default(alg::DP8,beta2) = typeof(beta2)(1//alg_order(alg)) - beta2/5
beta1_default(alg::DP5,beta2) = typeof(beta2)(1//alg_order(alg)) - 3beta2/4
beta1_default(alg::DP5Threaded,beta2) = typeof(beta2)(1//alg_order(alg)) - 3beta2/4
beta1_default(alg::ExtrapolationMidpointDeuflhard,beta2) = 1//(2alg.n_init+1)
beta1_default(alg::ExtrapolationMidpointHairerWanner,beta2) = 1//(2alg.n_init+1)

gamma_default(alg::OrdinaryDiffEqAlgorithm) = 9//10
gamma_default(alg::RKC) = 8//10
gamma_default(alg::IRKC) = 8//10
gamma_default(alg::ExtrapolationMidpointDeuflhard) = (1//4)^beta1_default(alg,beta2_default(alg))
gamma_default(alg::ExtrapolationMidpointHairerWanner) = (65//100)^beta1_default(alg,beta2_default(alg))

qsteady_min_default(alg::OrdinaryDiffEqAlgorithm) = 1
qsteady_max_default(alg::OrdinaryDiffEqAlgorithm) = 1
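
With the default n_init = 5, the controller defaults above work out as follows (a worked sketch in plain Julia; the gammas are rounded):

    n_init = 5                                   # default initial extrapolation order
    beta2  = 0 // 1                              # no PI contribution by default
    beta1  = 1 // (2n_init + 1)                  # 1//11
    gamma_deuflhard     = (1 / 4)^(1 / 11)       # ≈ 0.8816
    gamma_hairer_wanner = (65 / 100)^(1 / 11)    # ≈ 0.9616
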
61 changes: 61 additions & 0 deletions src/algorithms.jl
@@ -39,6 +39,67 @@ struct RichardsonEuler <: OrdinaryDiffEqExtrapolationVarOrderVarStepAlgorithm
end
RichardsonEuler(;max_order=9,min_order=1,init_order=5) = RichardsonEuler(max_order,min_order,init_order)

struct ExtrapolationMidpointDeuflhard <: OrdinaryDiffEqExtrapolationVarOrderVarStepAlgorithm
n_min::Int # Minimal extrapolation order
n_init::Int # Initial extrapolation order
n_max::Int # Maximal extrapolation order
sequence_symbol::Symbol # Name of the subdividing sequence
end
function ExtrapolationMidpointDeuflhard(;min_extrapolation_order=1,init_extrapolation_order=5, max_extrapolation_order=10, sequence_symbol = :harmonic)
# Enforce 1 <= min_extrapolation_order <= init_extrapolation_order <= max_extrapolation_order:
n_min = max(1,min_extrapolation_order)
n_init = max(n_min,init_extrapolation_order)
n_max = max(n_init,max_extrapolation_order)

# Warn user if orders have been changed
if (min_extrapolation_order, init_extrapolation_order, max_extrapolation_order) != (n_min,n_init,n_max)
@warn "The range of extrapolation orders and/or the initial order given to the
`ExtrapolationMidpointDeuflhard` algorithm are not valid and have been changed:
Minimal order: " * lpad(min_extrapolation_order,2," ") * " --> " * lpad(n_min,2," ") * "
Maximal order: " * lpad(max_extrapolation_order,2," ") * " --> " * lpad(n_max,2," ") * "
Initial order: " * lpad(init_extrapolation_order,2," ") * " --> " * lpad(n_init,2," ")
end

# Enforce sequence_symbol in [:harmonic,:romberg,:bulirsch]:
if sequence_symbol != :harmonic && sequence_symbol != :romberg && sequence_symbol != :bulirsch
error("`sequence_symbol` must match `:harmonic`, `:romberg` or `:bulirsch`")
end

# Initialize algorithm
ExtrapolationMidpointDeuflhard(n_min,n_init,n_max,sequence_symbol)
end

struct ExtrapolationMidpointHairerWanner <: OrdinaryDiffEqExtrapolationVarOrderVarStepAlgorithm
n_min::Int # Minimal extrapolation order
n_init::Int # Initial extrapolation order
n_max::Int # Maximal extrapolation order
sequence_symbol::Symbol # Name of the subdividing sequence
end
function ExtrapolationMidpointHairerWanner(;min_extrapolation_order=2,init_extrapolation_order=5, max_extrapolation_order=10, sequence_symbol = :harmonic)
# Enforce 2 <= min_extrapolation_order
# and min_extrapolation_order + 1 <= init_extrapolation_order <= max_extrapolation_order - 1:
n_min = max(2,min_extrapolation_order)
n_init = max(n_min + 1,init_extrapolation_order)
n_max = max(n_init + 1,max_extrapolation_order)

# Warn user if orders have been changed
if (min_extrapolation_order, init_extrapolation_order, max_extrapolation_order) != (n_min,n_init,n_max)
@warn "The range of extrapolation orders and/or the initial order given to the
`ExtrapolationMidpointHairerWanner` algorithm are not valid and have been changed:
Minimal order: " * lpad(min_extrapolation_order,2," ") * " --> " * lpad(n_min,2," ") * "
Maximal order: " * lpad(max_extrapolation_order,2," ") * " --> " * lpad(n_max,2," ") * "
Initial order: " * lpad(init_extrapolation_order,2," ") * " --> " * lpad(n_init,2," ")
end

# Enforce sequence_symbol in [:harmonic,:romberg,:bulirsch]:
if sequence_symbol != :harmonic && sequence_symbol != :romberg && sequence_symbol != :bulirsch
error("`sequence_symbol` must match `:harmonic`, `:romberg` or `:bulirsch`")
end

# Initialize algorithm
ExtrapolationMidpointHairerWanner(n_min,n_init,n_max,sequence_symbol)
end

struct RK46NL <: OrdinaryDiffEqAlgorithm end
struct Heun <: OrdinaryDiffEqAdaptiveAlgorithm end
struct Ralston <: OrdinaryDiffEqAdaptiveAlgorithm end
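
A minimal usage sketch, assuming only that OrdinaryDiffEq exports these solvers as in the change above; the scalar test problem is illustrative and not part of the PR:

    using OrdinaryDiffEq

    f(u, p, t) = 1.01u                          # illustrative scalar test problem
    prob = ODEProblem(f, 0.5, (0.0, 1.0))

    alg = ExtrapolationMidpointDeuflhard(min_extrapolation_order = 1,
                                         init_extrapolation_order = 3,
                                         max_extrapolation_order = 7,
                                         sequence_symbol = :bulirsch)
    sol = solve(prob, alg, reltol = 1e-8, abstol = 1e-8)

    # An out-of-range order is clipped and reported by the @warn above:
    ExtrapolationMidpointHairerWanner(min_extrapolation_order = 1)   # n_min is raised to 2
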
255 changes: 255 additions & 0 deletions src/caches/extrapolation_caches.jl
@@ -47,3 +47,258 @@ function alg_cache(alg::RichardsonEuler,u,rate_prototype,uEltypeNoUnits,uBottomE
step_no = zero(Int)
RichardsonEulerConstantCache(dtpropose,T,cur_order,work,A,step_no)
end

@cache mutable struct ExtrapolationMidpointDeuflhardConstantCache{QType} <: OrdinaryDiffEqConstantCache
# Values that are mutated
Q::Vector{QType} # Storage for stepsize scaling factors. Q[n] contains information for extrapolation order (n + alg.n_min - 1)
n_curr::Int64 # Storage for the current extrapolation order
n_old::Int64 # Storage for the extrapolation order n_curr before perform_step! changes the latter

# Constant values
subdividing_sequence::Array{BigInt,1} # subdividing_sequence[n] is used for the (n -1)th internal discretisation
stage_number::Array{Int,1} # Stage_number[n] contains information for extrapolation order (n + alg.n_min - 1)
# Weights and Scaling factors for extrapolation operators
extrapolation_weights::Array{Rational{BigInt},2}
Member (ChrisRackauckas, Apr 6, 2019): Do these need to be Rational? Maybe it doesn't matter all that much, but Bigs are allocating, so it would be worthwhile to profile whether the calculations on these actually end up being costly, and whether just having them as Float64 may be numerically stable enough to get the job done. If Float64 is enough, then typing these to match t or u would make sense.

Author (AlthausKonstantin): I haven't played with Floats here so far. The good thing about the BigInts is that the weights are converted in perform_step! to the type of u. That allows adapting to the precision of the problem. I'd suspect that if one uses Float64 weights and, say, a BigFloat representation for the problem, the approximation error will not get any better than eps(Float64).

Member: What about just converting to u's type here?

Author: Ah, I see - the caches are recreated for every call of solve, right? Yes, then the type of u should be an option. I guess I will set up an experiment to check the stability of the weights!

BTW: Would it be possible to move the weights into the algorithm's struct? That way the computation of the tables for every solve call would be bypassed.

extrapolation_scalars::Array{Rational{BigInt},1}
# Weights and scaling factors for internal extrapolation operators (used for error estimate)
extrapolation_weights_2::Array{Rational{BigInt},2}
extrapolation_scalars_2::Array{Rational{BigInt},1}
end
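
A small illustration of the precision point from the thread above (illustrative numbers, not code from the PR): converting the exact Rational{BigInt} weights to the element type of u keeps full accuracy for BigFloat problems, while fixing them to Float64 would cap the attainable accuracy near eps(Float64).

    w = 1 // BigInt(3)^40      # an exact rational weight (illustrative magnitude)
    Float64(w)                 # ≈ 8.2e-20, only ~16 significant digits survive
    BigFloat(w)                # keeps the full working precision of BigFloat
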

function alg_cache(alg::ExtrapolationMidpointDeuflhard,u,rate_prototype,uEltypeNoUnits,uBottomEltypeNoUnits,tTypeNoUnits,uprev,uprev2,f,t,dt,reltol,p,calck,::Type{Val{false}})
@unpack n_min, n_init, n_max = alg

QType = tTypeNoUnits <: Integer ? typeof(qmin_default(alg)) : tTypeNoUnits # Cf. DiffEqBase.__init in solve.jl
Member: Peculiar that the type of this is needed. I'll see where this goes...

Q = fill(zero(QType),n_max - n_min + 1)

n_curr = n_init

n_old = n_init

# Initialize subdividing_sequence:
if alg.sequence_symbol == :harmonic
subdividing_sequence = [BigInt(n+1) for n = 0:n_max]
elseif alg.sequence_symbol == :romberg
subdividing_sequence = [BigInt(2)^n for n = 0:n_max]
else # sequence_symbol == :bulirsch
subdividing_sequence = [n==0 ? BigInt(1) : (isodd(n) ? BigInt(2)^Int64(n/2 + 0.5) : 3BigInt(2^Int64(n/2 - 1))) for n = 0:n_max]
end

# Compute stage numbers
stage_number = [2sum(Int64.(subdividing_sequence[1:n+1])) - n for n = n_min:n_max]

# Compute nodes corresponding to subdividing_sequence
nodes = BigInt(1) .// subdividing_sequence .^ 2

# Compute barycentric weights for internal extrapolation operators
extrapolation_weights_2 = zeros(Rational{BigInt}, n_max, n_max)
extrapolation_weights_2[1,:] = ones(Rational{BigInt}, 1, n_max)
for n = 2:n_max
distance = nodes[2:n] .- nodes[n+1]
extrapolation_weights_2[1:(n-1), n] = extrapolation_weights_2[1:n-1, n-1] .// distance
extrapolation_weights_2[n, n] = 1 // prod(-distance)
end

# Compute barycentric weights for extrapolation operators
extrapolation_weights = zeros(Rational{BigInt}, n_max+1, n_max+1)
for n = 1:n_max
extrapolation_weights[n+1, (n+1) : (n_max+1)] = extrapolation_weights_2[n, n:n_max] // (nodes[n+1] - nodes[1])
extrapolation_weights[1, n] = 1 // prod(nodes[1] .- nodes[2:n])
end
extrapolation_weights[1, n_max+1] = 1 // prod(nodes[1] .- nodes[2:n_max+1])

# Rescale barycentric weights to obtain the weights of the first barycentric formula
for m = 1:(n_max+1)
extrapolation_weights[1:m, m] = - extrapolation_weights[1:m, m] .// nodes[1:m]
if 2 <= m
extrapolation_weights_2[1:m-1, m-1] = -extrapolation_weights_2[1:m-1, m-1] .// nodes[2:m]
end
end

# Compute scaling factors for internal extrapolation operators
extrapolation_scalars_2 = ones(Rational{BigInt}, n_max)
extrapolation_scalars_2[1] = -nodes[2]
for n = 1:(n_max-1)
extrapolation_scalars_2[n+1] = -extrapolation_scalars_2[n] * nodes[n+2]
end

# Compute scaling factors for extrapolation operators
extrapolation_scalars = -nodes[1] * [BigInt(1); extrapolation_scalars_2]

Member: Because the amount of pre-computation is based on n_max, it actually matters in small enough settings. That is something that may be worth remembering to document, or at least test a little bit. BTW, at what n_max does overflow start to occur in the sequences?

Author: Ok, I will mention this in the documentation.

As far as I've tested the constructor, I have not been able to produce an overflow (for any sequence) when using BigInt. When I replaced BigInt by Int, I got an overflow for the Romberg sequence at n_max = 6.

# Initialize the constant cache
ExtrapolationMidpointDeuflhardConstantCache(Q, n_curr, n_old, subdividing_sequence, stage_number, extrapolation_weights, extrapolation_scalars, extrapolation_weights_2, extrapolation_scalars_2)
end
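
The three step-number sequences selected by sequence_symbol, pulled into a hypothetical standalone helper (make_subdividing_sequence is not part of the PR) so the branches above are easy to inspect; with BigInt entries none of them overflow, which is the behaviour reported in the thread above:

    # Hypothetical helper mirroring the branches in alg_cache above.
    function make_subdividing_sequence(sequence_symbol::Symbol, n_max::Int)
        if sequence_symbol == :harmonic
            return [BigInt(n + 1) for n = 0:n_max]
        elseif sequence_symbol == :romberg
            return [BigInt(2)^n for n = 0:n_max]
        else # :bulirsch
            return [n == 0 ? BigInt(1) :
                    (isodd(n) ? BigInt(2)^Int64(n/2 + 0.5) : 3BigInt(2^Int64(n/2 - 1)))
                    for n = 0:n_max]
        end
    end

    make_subdividing_sequence(:harmonic, 5)  # 1, 2, 3, 4, 5, 6
    make_subdividing_sequence(:romberg, 5)   # 1, 2, 4, 8, 16, 32
    make_subdividing_sequence(:bulirsch, 5)  # 1, 2, 3, 4, 6, 8
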

@cache mutable struct ExtrapolationMidpointDeuflhardCache{uType,uNoUnitsType,rateType,QType} <: OrdinaryDiffEqMutableCache
# Values that are mutated
u_tilde::uType
u_temp1::uType
u_temp2::uType
tmp::uType # for get_tmp_cache()
T::Array{uType,1} # Storage for the internal discretisations obtained by the explicit midpoint rule
res::uNoUnitsType # Storage for the scaled residual of u and u_tilde

fsalfirst::rateType
k::rateType

# Begin of constant cache
Q::Vector{QType} # Storage for stepsize scaling factors. Q[n] contains information for extrapolation order (n + alg.n_min - 1)
n_curr::Int64 # Storage for the current extrapolation order
n_old::Int64 # Storage for the extrapolation order n_curr before perform_step! changes the latter

# Constant values
subdividing_sequence::Array{BigInt,1} # subdividing_sequence[n] is used for the (n -1)th internal discretisation
stage_number::Array{Int,1} # Stage_number[n] contains information for extrapolation order (n + alg.n_min - 1)
# Weights and Scaling factors for extrapolation operators
extrapolation_weights::Array{Rational{BigInt},2}
extrapolation_scalars::Array{Rational{BigInt},1}
# Weights and scaling factors for internal extrapolation operators (used for error estimate)
extrapolation_weights_2::Array{Rational{BigInt},2}
extrapolation_scalars_2::Array{Rational{BigInt},1}
end

function alg_cache(alg::ExtrapolationMidpointDeuflhard,u,rate_prototype,uEltypeNoUnits,uBottomEltypeNoUnits,tTypeNoUnits,uprev,uprev2,f,t,dt,reltol,p,calck,::Type{Val{true}})
u_tilde = zero(u)
u_temp1 = zero(u)
u_temp2 = zero(u)
tmp = zero(u)
T = fill(zero(u), alg.n_max + 1)
res = uEltypeNoUnits.(zero(u))
fsalfirst = zero(rate_prototype)
k = zero(rate_prototype)

cc = alg_cache(alg,u,rate_prototype,uEltypeNoUnits,uBottomEltypeNoUnits,tTypeNoUnits,uprev,uprev2,f,t,dt,reltol,p,calck,Val{false})

ExtrapolationMidpointDeuflhardCache(u_tilde, u_temp1, u_temp2, tmp, T, res, fsalfirst, k,
cc.Q, cc.n_curr, cc.n_old, cc.subdividing_sequence, cc.stage_number, cc.extrapolation_weights, cc.extrapolation_scalars, cc.extrapolation_weights_2, cc.extrapolation_scalars_2)
Member: It might be easier to just stick the cc in here, using composition to improve code re-use. But as with the other comment, optimizing code simplicity and re-use can come in a later PR.

Author: Here I decided against that to make the call @unpack subdividing_sequence = cache and such possible for both caches.

Would you prefer an if-else statement before calling @unpack in order to check whether a cache is mutable or not?

end
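
A toy sketch of the design point discussed above, using stand-in structs rather than the actual cache types: because the mutable and constant caches repeat the same field names, downstream code can @unpack from either one without branching on the cache type.

    using Parameters   # assumed source of the @unpack macro in this sketch

    struct ToyConstantCache; n_curr::Int; Q::Vector{Float64}; end
    mutable struct ToyMutableCache; u_temp::Vector{Float64}; n_curr::Int; Q::Vector{Float64}; end

    function current_order(cache)
        @unpack n_curr = cache     # identical call for both cache types
        return 2(n_curr + 1)
    end

    current_order(ToyConstantCache(3, zeros(2)))           # 8
    current_order(ToyMutableCache(zeros(2), 3, zeros(2)))  # 8
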

@cache mutable struct ExtrapolationMidpointHairerWannerConstantCache{QType} <: OrdinaryDiffEqConstantCache
# Values that are mutated
Q::Vector{QType} # Storage for stepsize scaling factors. Q[n] contains information for extrapolation order (n - 1)
n_curr::Int64 # Storage for the current extrapolation order
n_old::Int64 # Storage for the extrapolation order n_curr before perform_step! changes the latter

sigma::Rational{Int64} # Parameter for order selection

# Constant values
subdividing_sequence::Array{BigInt,1} # subdividing_sequence[n] is used for the (n -1)th internal discretisation
stage_number::Array{Int,1} # Stage_number[n] contains information for extrapolation order (n - 1)
# Weights and Scaling factors for extrapolation operators
extrapolation_weights::Array{Rational{BigInt},2}
extrapolation_scalars::Array{Rational{BigInt},1}
# Weights and scaling factors for internal extrapolation operators (used for error estimate)
extrapolation_weights_2::Array{Rational{BigInt},2}
extrapolation_scalars_2::Array{Rational{BigInt},1}
end

function alg_cache(alg::ExtrapolationMidpointHairerWanner,u,rate_prototype,uEltypeNoUnits,uBottomEltypeNoUnits,tTypeNoUnits,uprev,uprev2,f,t,dt,reltol,p,calck,::Type{Val{false}})
Member: I feel like some refactoring could make this re-use a lot of code from the previous cache function. That's a second-order issue that we can handle later, though.

Author (AlthausKonstantin, Apr 6, 2019): That's true. Am I "allowed" to define a function producing the weights etc. somewhere? If so, where?

Member: Just do it in another part of this file and then call it from here.

Author: Ok, I will change that!

@unpack n_min, n_init, n_max = alg

QType = tTypeNoUnits <: Integer ? typeof(qmin_default(alg)) : tTypeNoUnits # Cf. DiffEqBase.__init in solve.jl
Q = fill(zero(QType),n_max + 1)

n_curr = n_init

n_old = n_init

sigma = 9//10

# Initialize subdividing_sequence:
if alg.sequence_symbol == :harmonic
subdividing_sequence = [BigInt(n+1) for n = 0:n_max]
elseif alg.sequence_symbol == :romberg
subdividing_sequence = [BigInt(2)^n for n = 0:n_max]
else # sequence_symbol == :bulirsch
subdividing_sequence = [n==0 ? BigInt(1) : (isodd(n) ? BigInt(2)^Int64(n/2 + 0.5) : 3BigInt(2^Int64(n/2 - 1))) for n = 0:n_max]
end

# Compute stage numbers
stage_number = [2sum(Int64.(subdividing_sequence[1:n+1])) - n for n = 0:n_max]

# Compute nodes corresponding to subdividing_sequence
nodes = BigInt(1) .// subdividing_sequence .^ 2

# Compute barycentric weights for internal extrapolation operators
extrapolation_weights_2 = zeros(Rational{BigInt}, n_max, n_max)
extrapolation_weights_2[1,:] = ones(Rational{BigInt}, 1, n_max)
for n = 2:n_max
distance = nodes[2:n] .- nodes[n+1]
extrapolation_weights_2[1:(n-1), n] = extrapolation_weights_2[1:n-1, n-1] .// distance
extrapolation_weights_2[n, n] = 1 // prod(-distance)
end

# Compute barycentric weights for extrapolation operators
extrapolation_weights = zeros(Rational{BigInt}, n_max+1, n_max+1)
for n = 1:n_max
extrapolation_weights[n+1, (n+1) : (n_max+1)] = extrapolation_weights_2[n, n:n_max] // (nodes[n+1] - nodes[1])
extrapolation_weights[1, n] = 1 // prod(nodes[1] .- nodes[2:n])
end
extrapolation_weights[1, n_max+1] = 1 // prod(nodes[1] .- nodes[2:n_max+1])

# Rescale barycentric weights to obtain the weights of the first barycentric formula
for m = 1:(n_max+1)
extrapolation_weights[1:m, m] = - extrapolation_weights[1:m, m] .// nodes[1:m]
if 2 <= m
extrapolation_weights_2[1:m-1, m-1] = -extrapolation_weights_2[1:m-1, m-1] .// nodes[2:m]
end
end

# Compute scaling factors for internal extrapolation operators
extrapolation_scalars_2 = ones(Rational{BigInt}, n_max)
extrapolation_scalars_2[1] = -nodes[2]
for n = 1:(n_max-1)
extrapolation_scalars_2[n+1] = -extrapolation_scalars_2[n] * nodes[n+2]
end

# Compute scaling factors for extrapolation operators
extrapolation_scalars = -nodes[1] * [BigInt(1); extrapolation_scalars_2]

# Initialize the constant cache
ExtrapolationMidpointHairerWannerConstantCache(Q, n_curr, n_old, sigma, subdividing_sequence, stage_number, extrapolation_weights, extrapolation_scalars, extrapolation_weights_2, extrapolation_scalars_2)
end

@cache mutable struct ExtrapolationMidpointHairerWannerCache{uType,uNoUnitsType,rateType,QType} <: OrdinaryDiffEqMutableCache
# Values that are mutated
u_tilde::uType
u_temp1::uType
u_temp2::uType
tmp::uType # for get_tmp_cache()
T::Array{uType,1} # Storage for the internal discretisations obtained by the explicit midpoint rule
res::uNoUnitsType # Storage for the scaled residual of u and u_tilde

fsalfirst::rateType
k::rateType

# Begin of constant cache
Q::Vector{QType} # Storage for stepsize scaling factors. Q[n] contains information for extrapolation order (n - 1)
n_curr::Int64 # Storage for the current extrapolation order
n_old::Int64 # Storage for the extrapolation order n_curr before perform_step! changes the latter
sigma::Rational{Int64} # Parameter for order selection

# Constant values
subdividing_sequence::Array{BigInt,1} # subdividing_sequence[n] is used for the (n -1)th internal discretisation
stage_number::Array{Int,1} # Stage_number[n] contains information for extrapolation order (n - 1)
# Weights and Scaling factors for extrapolation operators
extrapolation_weights::Array{Rational{BigInt},2}
extrapolation_scalars::Array{Rational{BigInt},1}
# Weights and scaling factors for internal extrapolation operators (used for error estimate)
extrapolation_weights_2::Array{Rational{BigInt},2}
extrapolation_scalars_2::Array{Rational{BigInt},1}
end

function alg_cache(alg::ExtrapolationMidpointHairerWanner,u,rate_prototype,uEltypeNoUnits,uBottomEltypeNoUnits,tTypeNoUnits,uprev,uprev2,f,t,dt,reltol,p,calck,::Type{Val{true}})
u_tilde = zero(u)
u_temp1 = zero(u)
u_temp2 = zero(u)
tmp = zero(u)
T = fill(zero(u), alg.n_max + 1)
res = uEltypeNoUnits.(zero(u))
fsalfirst = zero(rate_prototype)
k = zero(rate_prototype)

cc = alg_cache(alg,u,rate_prototype,uEltypeNoUnits,uBottomEltypeNoUnits,tTypeNoUnits,uprev,uprev2,f,t,dt,reltol,p,calck,Val{false})

ExtrapolationMidpointHairerWannerCache(u_tilde, u_temp1, u_temp2, tmp, T, res, fsalfirst, k,
cc.Q, cc.n_curr, cc.n_old, cc.sigma, cc.subdividing_sequence, cc.stage_number, cc.extrapolation_weights, cc.extrapolation_scalars, cc.extrapolation_weights_2, cc.extrapolation_scalars_2)
end
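
Finally, a sketch of the in-place path that exercises the mutable caches defined in this file (the two-species system below is illustrative, not from the PR):

    using OrdinaryDiffEq

    function lotka!(du, u, p, t)
        du[1] =  1.5u[1] - u[1] * u[2]
        du[2] = -3.0u[2] + u[1] * u[2]
    end
    prob = ODEProblem(lotka!, [1.0, 1.0], (0.0, 10.0))

    # A mutable state vector takes the Val{true} alg_cache route above,
    # i.e. the ExtrapolationMidpointHairerWannerCache with preallocated work arrays.
    sol = solve(prob, ExtrapolationMidpointHairerWanner(), reltol = 1e-7, abstol = 1e-7)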