Auto-regressive stochastic processes
This tutorial was generated using Literate.jl.
SDDP.jl assumes that the random variable in each node is independent of the random variables in all other nodes. However, a common request is to model the random variables by some auto-regressive process.
There are two ways to do this:
- model the random variable as a Markov chain
- use the "state-space expansion" trick
This tutorial is in the context of a hydro-thermal scheduling example, but it should be apparent how the ideas transfer to other applications.
using SDDP
import HiGHS
The state-space expansion trick
In An introduction to SDDP.jl, we assumed that the inflows were stagewise-independent. However, in many cases this is not correct, and inflow models are more accurately described by an auto-regressive process such as:
\[inflow_{t} = inflow_{t-1} + \varepsilon\]
Here $\varepsilon$ is a random variable, and the inflow in stage $t$ is the inflow in stage $t-1$ plus $\varepsilon$ (which might be negative).
For simplicity, we omit any coefficients and other terms, but this could easily be extended to a model like
\[inflow_{t} = a \times inflow_{t-1} + b + \varepsilon\]
In practice, you can estimate a distribution for $\varepsilon$ by fitting the chosen statistical model to historical data, and then using the empirical residuals.
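For example, here is a minimal sketch of estimating the coefficients a and b by ordinary least squares and recovering the empirical residuals. The fit_ar1 helper and its input data are hypothetical; in practice you would pass your observed inflow series:
function fit_ar1(history::Vector{Float64})
    # Regress inflow_t on inflow_{t-1} and a constant term.
    X = [history[1:end-1] ones(length(history) - 1)]
    y = history[2:end]
    a, b = X \ y
    # The empirical residuals form the sample space Ω for ε.
    residuals = y - X * [a, b]
    return a, b, residuals
end

a, b, Ω = fit_ar1([50.0, 40.1, 49.8, 59.4, 49.5, 39.6, 49.7])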
To implement the auto-regressive model in SDDP.jl, we introduce inflow as a state variable.
Our rule of thumb for "when is something a state variable?" is: if you need the value of a variable from a previous stage to compute something in stage $t$, then that variable is a state variable.
model = SDDP.LinearPolicyGraph(;
stages = 3,
sense = :Min,
lower_bound = 0.0,
optimizer = HiGHS.Optimizer,
) do sp, t
@variable(sp, 0 <= x <= 200, SDDP.State, initial_value = 200)
@variable(sp, g_t >= 0)
@variable(sp, g_h >= 0)
@variable(sp, s >= 0)
@constraint(sp, g_h + g_t == 150)
c = [50, 100, 150]
@stageobjective(sp, c[t] * g_t)
# =========================================================================
# New stuff below here
# Add inflow as a state
@variable(sp, inflow, SDDP.State, initial_value = 50.0)
# Add the random variable as a control variable
@variable(sp, ε)
# The equation describing our statistical model
@constraint(sp, inflow.out == inflow.in + ε)
# The new water balance constraint using the state variable
@constraint(sp, x.out == x.in - g_h - s + inflow.out)
# Assume we have some empirical residuals:
Ω = [-10.0, 0.1, 9.6]
SDDP.parameterize(sp, Ω) do ω
return JuMP.fix(ε, ω)
end
end
A policy graph with 3 nodes.
Node indices: 1, 2, 3
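Once built, the model is trained and simulated like any other SDDP.jl model. A sketch (the iteration limit is illustrative, and the training log is omitted):
SDDP.train(model; iteration_limit = 10)
# Record the inflow state in each stage of one simulated trajectory:
simulations = SDDP.simulate(model, 1, [:inflow])
[stage[:inflow].out for stage in simulations[1]]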
When can this trick be used?
The state-space expansion trick should be used when:
- The random variable appears additively in the objective or in the constraints. Something like inflow * decision_variable will not work.
- The statistical model is linear, or can be written using the JuMP @constraint macro.
- The dimension of the random variable is small (see Vector auto-regressive models for the multi-variate case).
The Markov chain approach
In the Markov chain approach, we model the stochastic process for inflow by a discrete Markov chain. A Markov chain is a set of nodes with transition probabilities between them. SDDP.jl has good support for solving problems in which the uncertainty is formulated as a Markov chain.
The first step of the Markov chain approach is to write a function which simulates the stochastic process. Here is a simulator for our inflow model:
function simulator()
inflow = zeros(3)
current = 50.0
Ω = [-10.0, 0.1, 9.6]
for t in 1:3
current += rand(Ω)
inflow[t] = current
end
return inflow
end
simulator (generic function with 1 method)
When called with no arguments, it produces a vector of inflows:
simulator()
3-element Vector{Float64}:
50.1
40.1
49.7
The simulator must return a Vector{Float64}, so it is limited to a uni-variate random variable. It is possible to do something similar for multi-variate random variables, but you'll have to manually construct the Markov transition matrix, and solution times scale poorly, even in the two-dimensional case.
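If you do build the transition matrix manually, note that SDDP.MarkovianGraph also accepts a vector of transition matrices, one per stage, where the first matrix has a single row corresponding to the root node. Here is a sketch with two Markov states per stage and made-up probabilities:
explicit_graph = SDDP.MarkovianGraph([
    [0.5 0.5],           # root => two stage-1 nodes
    [0.8 0.2; 0.2 0.8],  # stage-1 nodes => stage-2 nodes
    [0.8 0.2; 0.2 0.8],  # stage-2 nodes => stage-3 nodes
])
The nodes of such a graph are (stage, markov_state) tuples like (2, 1), and it is up to you to map each Markov state to a value of the random variable inside the subproblem builder.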
The next step is to call SDDP.MarkovianGraph with our simulator. This function will attempt to fit a Markov chain to the stochastic process produced by your simulator. There are two key arguments:
- budget is the total number of nodes we want in the Markov chain
- scenarios is a limit on the number of times we can call simulator
graph = SDDP.MarkovianGraph(simulator; budget = 8, scenarios = 30)
Root
(0, 0.0)
Nodes
(1, 52.262328741716715)
(1, 59.6)
(2, 54.49189725381223)
(2, 69.2)
(3, 44.875916339477676)
(3, 59.2)
(3, 59.202643060975205)
(3, 75.84679776502877)
Arcs
(0, 0.0) => (1, 52.262328741716715) w.p. 0.7
(0, 0.0) => (1, 59.6) w.p. 0.3
(1, 52.262328741716715) => (2, 54.49189725381223) w.p. 1.0
(1, 52.262328741716715) => (2, 69.2) w.p. 0.0
(1, 59.6) => (2, 54.49189725381223) w.p. 0.8888888888888888
(1, 59.6) => (2, 69.2) w.p. 0.1111111111111111
(2, 54.49189725381223) => (3, 59.202643060975205) w.p. 0.13793103448275862
(2, 54.49189725381223) => (3, 44.875916339477676) w.p. 0.5517241379310345
(2, 54.49189725381223) => (3, 59.2) w.p. 0.2413793103448276
(2, 54.49189725381223) => (3, 75.84679776502877) w.p. 0.06896551724137931
(2, 69.2) => (3, 59.202643060975205) w.p. 0.0
(2, 69.2) => (3, 44.875916339477676) w.p. 0.0
(2, 69.2) => (3, 59.2) w.p. 0.0
(2, 69.2) => (3, 75.84679776502877) w.p. 1.0
Here we can see we have created a MarkovianGraph with nodes like (2, 69.2). The first element of each node is the stage, and the second element is the inflow.
Create a SDDP.PolicyGraph using graph as follows:
model = SDDP.PolicyGraph(
graph; # <--- New stuff
sense = :Min,
lower_bound = 0.0,
optimizer = HiGHS.Optimizer,
) do sp, node
t, inflow = node # <--- New stuff
@variable(sp, 0 <= x <= 200, SDDP.State, initial_value = 200)
@variable(sp, g_t >= 0)
@variable(sp, g_h >= 0)
@variable(sp, s >= 0)
@constraint(sp, g_h + g_t == 150)
c = [50, 100, 150]
@stageobjective(sp, c[t] * g_t)
# The new water balance constraint using the node:
@constraint(sp, x.out == x.in - g_h - s + inflow)
end
A policy graph with 8 nodes.
Node indices: (1, 52.262328741716715), (1, 59.6), (2, 54.49189725381223), (2, 69.2), (3, 44.875916339477676), (3, 59.2), (3, 59.202643060975205), (3, 75.84679776502877)
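Training works as before. In addition to the usual keys, each stage of a simulation records the node that was visited, so you can recover the sampled (stage, inflow) path. A sketch with an illustrative iteration limit:
SDDP.train(model; iteration_limit = 10)
simulations = SDDP.simulate(model, 1)
# Each node index is a (stage, inflow) tuple sampled from the Markov chain:
[stage[:node_index] for stage in simulations[1]]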
When can this trick be used?
The Markov chain approach should be used when:
- The random variable is uni-variate
- The random variable appears in the objective function or as a variable coefficient in the constraint matrix
- It's non-trivial to write the stochastic process as a series of constraints (for example, it uses nonlinear terms)
- The number of nodes is modest (for example, a budget of hundreds, up to perhaps 1000)
Vector auto-regressive models
The state-space expansion section assumed that the random variable was uni-variate. However, the approach naturally extends to vector auto-regressive models. For example, if inflow is a 2-dimensional vector, then we can fit a vector auto-regressive model to it as follows:
\[inflow_{t} = A \times inflow_{t-1} + b + \varepsilon\]
Here A is a 2-by-2 matrix, and b and $\varepsilon$ are 2-by-1 vectors.
model = SDDP.LinearPolicyGraph(;
stages = 3,
sense = :Min,
lower_bound = 0.0,
optimizer = HiGHS.Optimizer,
) do sp, t
@variable(sp, 0 <= x <= 200, SDDP.State, initial_value = 200)
@variable(sp, g_t >= 0)
@variable(sp, g_h >= 0)
@variable(sp, s >= 0)
@constraint(sp, g_h + g_t == 150)
c = [50, 100, 150]
@stageobjective(sp, c[t] * g_t)
# =========================================================================
# New stuff below here
# Add inflow as a state
@variable(sp, inflow[1:2], SDDP.State, initial_value = 50.0)
# Add the random variable as a control variable
@variable(sp, ε[1:2])
# The equation describing our statistical model
A = [0.8 0.2; 0.2 0.8]
@constraint(
sp,
[i = 1:2],
inflow[i].out == sum(A[i, j] * inflow[j].in for j in 1:2) + ε[i],
)
# The new water balance constraint using the state variable
@constraint(sp, x.out == x.in - g_h - s + inflow[1].out + inflow[2].out)
# Assume we have some empirical residuals:
Ω₁ = [-10.0, 0.1, 9.6]
Ω₂ = [-10.0, 0.1, 9.6]
Ω = [(ω₁, ω₂) for ω₁ in Ω₁ for ω₂ in Ω₂]
SDDP.parameterize(sp, Ω) do ω
JuMP.fix(ε[1], ω[1])
JuMP.fix(ε[2], ω[2])
return
end
end
A policy graph with 3 nodes.
Node indices: 1, 2, 3
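As in the uni-variate case, we can train the model and record the vector-valued inflow state during simulation. A sketch, assuming the recorded value of a vector state is indexable element-by-element:
SDDP.train(model; iteration_limit = 10)
simulations = SDDP.simulate(model, 1, [:inflow])
# stage[:inflow] holds one state per dimension; take the outgoing values:
[[stage[:inflow][i].out for i in 1:2] for stage in simulations[1]]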