How to implement a PAR model
A common stochastic model is the periodic autoregressive (PAR) model:
\[y_t = \sum\limits_{p=1}^P \alpha_p y_{t-p} + \omega_t\]
To implement this model, you must first:
- compute the constants $\alpha_p$ separately
- construct a distribution for the noise term $\omega_t$
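The constants $\alpha_p$ are often estimated by ordinary least squares from a vector of historical observations, and the residuals of that fit can be used to build the distribution of $\omega_t$. Here is a minimal sketch of such a fit; `fit_ar` and `data` are illustrative names, not part of SDDP.jl:

```julia
# Estimate the AR coefficients α_p by ordinary least squares, assuming
# `data` is a vector of historical observations y_1, ..., y_T.
# `fit_ar` is an illustrative helper, not part of the SDDP.jl API.
function fit_ar(data::Vector{Float64}, P::Int)
    T = length(data)
    # Row t of the regression matrix contains the P lagged values of y_t.
    X = [data[t-p] for t in (P+1):T, p in 1:P]
    y = data[(P+1):T]
    alpha = X \ y              # least-squares solve
    residuals = y - X * alpha  # use these to construct the noise set Ω
    return alpha, residuals
end
```

The returned `residuals` can then be discretized (for example, by taking sample quantiles) to form the finite noise set `Ω` used below.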
Then, implement the SDDP model as follows:
```julia
julia> using SDDP, HiGHS

julia> alpha = [0.5, 0.3, 0.2]
3-element Vector{Float64}:
 0.5
 0.3
 0.2

julia> y_history = [1.0, 1.0, 1.0]
3-element Vector{Float64}:
 1.0
 1.0
 1.0

julia> Ω = [-0.25, 0.0, 0.25]
3-element Vector{Float64}:
 -0.25
  0.0
  0.25

julia> model = SDDP.LinearPolicyGraph(;
           stages = 12,
           sense = :Min,
           lower_bound = 0.0,
           optimizer = HiGHS.Optimizer,
       ) do sp, t
           P = length(alpha)
           @variable(sp, y[1:P], SDDP.State, initial_value = 1.0)
           @variable(sp, omega)
           # Add the constraint for the PAR model
           @constraint(sp, y[1].out == sum(alpha[p] * y[p].in for p in 1:P) + omega)
           # Add a "shift" constraint to move the values of `y` between stages
           @constraint(sp, [p in 2:P], y[p].out == y[p-1].in)
           # Implement the PAR model by parameterizing the additive noise term
           SDDP.parameterize(sp, Ω) do ω
               fix(omega, ω; force = true)
           end
       end
A policy graph with 12 nodes.
 Node indices: 1, ..., 12

julia> simulations = SDDP.simulate(model, 1, [:y]);

julia> y = [stage[:y][1].out for stage in simulations[1]]
12-element Vector{Float64}:
 0.75
 1.125
 0.9875
 1.2312500000000002
 0.8868750000000001
 1.0103125000000002
 1.0174687500000001
 0.9892031250000002
 0.7519046875000002
 0.6262070312500001
 0.4865155468750002
 0.33150082031250017

julia> sampling_scheme = SDDP.Historical(tuple.(1:3, [0.1, -0.1, 0.2]));

julia> simulations = SDDP.simulate(model, 1, [:y]; sampling_scheme);

julia> y = [stage[:y][1].out for stage in simulations[1]]
3-element Vector{Float64}:
 1.1
 0.9500000000000001
 1.205
```
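As a sanity check, the historical simulation above can be reproduced by replaying the PAR recursion in plain Julia, outside of SDDP.jl. `simulate_par` is an illustrative helper, not part of the SDDP.jl API:

```julia
# Replay the recursion y_t = Σ_p α_p y_{t-p} + ω_t in plain Julia.
# `simulate_par` is an illustrative helper, not part of the SDDP.jl API.
function simulate_par(alpha, y0, omegas)
    lags = copy(y0)  # lags[p] holds y_{t-p}
    y = Float64[]
    for ω in omegas
        y_t = sum(alpha[p] * lags[p] for p in eachindex(alpha)) + ω
        push!(y, y_t)
        lags = [y_t; lags[1:end-1]]  # shift the lag window by one stage
    end
    return y
end

simulate_par([0.5, 0.3, 0.2], [1.0, 1.0, 1.0], [0.1, -0.1, 0.2])
```

With the noise sequence `[0.1, -0.1, 0.2]`, this returns approximately `[1.1, 0.95, 1.205]`, matching the `SDDP.Historical` simulation above.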
Use a more sophisticated noise term to simulate actual sequences
If you want to sample additive noise terms during training, but replay actual historical sequences during simulation, you can use a more sophisticated noise term that controls the behavior of the SDDP.parameterize function:
```julia
julia> using SDDP, HiGHS

julia> alpha = [0.5, 0.3, 0.2]
3-element Vector{Float64}:
 0.5
 0.3
 0.2

julia> y_history = [1.0, 1.0, 1.0]
3-element Vector{Float64}:
 1.0
 1.0
 1.0

julia> struct NoiseTerm
           is_noise::Bool
           value::Float64
       end

julia> Ω = NoiseTerm.(true, [-0.25, 0.0, 0.25])
3-element Vector{Main.NoiseTerm}:
 Main.NoiseTerm(true, -0.25)
 Main.NoiseTerm(true, 0.0)
 Main.NoiseTerm(true, 0.25)

julia> model = SDDP.LinearPolicyGraph(;
           stages = 12,
           sense = :Min,
           lower_bound = 0.0,
           optimizer = HiGHS.Optimizer,
       ) do sp, t
           P = length(alpha)
           @variable(sp, y[1:P], SDDP.State, initial_value = 1.0)
           @variable(sp, omega)
           @constraint(sp, y[1].out == sum(alpha[p] * y[p].in for p in 1:P) + omega)
           @constraint(sp, [p in 2:P], y[p].out == y[p-1].in)
           SDDP.parameterize(sp, Ω) do ω
               if ω.is_noise
                   fix(omega, ω.value; force = true)
               else
                   unfix(omega)
                   fix(y[1].out, ω.value; force = true)
               end
           end
       end
A policy graph with 12 nodes.
 Node indices: 1, ..., 12

julia> simulations = SDDP.simulate(model, 1, [:y]);

julia> y = [stage[:y][1].out for stage in simulations[1]]
12-element Vector{Float64}:
 1.0
 1.25
 1.125
 1.1375
 1.40625
 1.2693750000000001
 1.0340625
 1.42909375
 1.5286406250000002
 1.3998609375000002
 1.1943414062500002
 1.0728571093750003

julia> omega = NoiseTerm.(false, [1.0, 1.2, 1.4])
3-element Vector{Main.NoiseTerm}:
 Main.NoiseTerm(false, 1.0)
 Main.NoiseTerm(false, 1.2)
 Main.NoiseTerm(false, 1.4)

julia> sampling_scheme = SDDP.Historical(tuple.(1:3, omega));

julia> simulations = SDDP.simulate(model, 1, [:y]; sampling_scheme);

julia> y = [stage[:y][1].out for stage in simulations[1]]
3-element Vector{Float64}:
 1.0
 1.2
 1.4
```