# API Reference

## Policy graphs

`SDDP.Graph`

— Type `Graph(root_node::T) where T`

Create an empty graph structure with the root node `root_node`.

**Example**

```
julia> graph = SDDP.Graph(0)
Root
 0
Nodes
 {}
Arcs
 {}

julia> graph = SDDP.Graph(:root)
Root
 root
Nodes
 {}
Arcs
 {}

julia> graph = SDDP.Graph((0, 0))
Root
 (0, 0)
Nodes
 {}
Arcs
 {}
```

`SDDP.add_node`

— Function `add_node(graph::Graph{T}, node::T) where {T}`

Add a node to the graph `graph`.

**Examples**

```
julia> graph = SDDP.Graph(:root);

julia> SDDP.add_node(graph, :A)

julia> graph
Root
 root
Nodes
 A
Arcs
 {}
```

```
julia> graph = SDDP.Graph(0);

julia> SDDP.add_node(graph, 2)

julia> graph
Root
 0
Nodes
 2
Arcs
 {}
```

`SDDP.add_edge`

— Function `add_edge(graph::Graph{T}, edge::Pair{T, T}, probability::Float64) where {T}`

Add an edge to the graph `graph`.

**Examples**

```
julia> graph = SDDP.Graph(0);

julia> SDDP.add_node(graph, 1)

julia> SDDP.add_edge(graph, 0 => 1, 0.9)

julia> graph
Root
 0
Nodes
 1
Arcs
 0 => 1 w.p. 0.9
```

```
julia> graph = SDDP.Graph(:root);

julia> SDDP.add_node(graph, :A)

julia> SDDP.add_edge(graph, :root => :A, 1.0)

julia> graph
Root
 root
Nodes
 A
Arcs
 root => A w.p. 1.0
```

`SDDP.add_ambiguity_set`

— Function```
add_ambiguity_set(
    graph::Graph{T},
    set::Vector{T},
    lipschitz::Vector{Float64},
) where {T}
```

Add `set` to the belief partition of `graph`.

`lipschitz` is a vector of Lipschitz constants, with one element for each node in `set`. The Lipschitz constant is the maximum slope of the cost-to-go function with respect to the belief state associated with each node at any point in the state-space.

**Examples**

```
julia> graph = SDDP.LinearGraph(3)
Root
 0
Nodes
 1
 2
 3
Arcs
 0 => 1 w.p. 1.0
 1 => 2 w.p. 1.0
 2 => 3 w.p. 1.0

julia> SDDP.add_ambiguity_set(graph, [1, 2], [1e3, 1e2])

julia> SDDP.add_ambiguity_set(graph, [3], [1e5])

julia> graph
Root
 0
Nodes
 1
 2
 3
Arcs
 0 => 1 w.p. 1.0
 1 => 2 w.p. 1.0
 2 => 3 w.p. 1.0
Partitions
 {1, 2}
 {3}
```

`add_ambiguity_set(graph::Graph{T}, set::Vector{T}, lipschitz::Float64)`

Add `set` to the belief partition of `graph`.

`lipschitz` is a Lipschitz constant for each node in `set`. The Lipschitz constant is the maximum slope of the cost-to-go function with respect to the belief state associated with each node at any point in the state-space.

**Examples**

```
julia> graph = SDDP.LinearGraph(3);

julia> SDDP.add_ambiguity_set(graph, [1, 2], 1e3)

julia> SDDP.add_ambiguity_set(graph, [3], 1e5)

julia> graph
Root
 0
Nodes
 1
 2
 3
Arcs
 0 => 1 w.p. 1.0
 1 => 2 w.p. 1.0
 2 => 3 w.p. 1.0
Partitions
 {1, 2}
 {3}
```

`SDDP.LinearGraph`

— Function `LinearGraph(stages::Int)`

Create a linear graph with `stages` nodes.

**Examples**

```
julia> graph = SDDP.LinearGraph(3)
Root
 0
Nodes
 1
 2
 3
Arcs
 0 => 1 w.p. 1.0
 1 => 2 w.p. 1.0
 2 => 3 w.p. 1.0
```

`SDDP.MarkovianGraph`

— Function `MarkovianGraph(transition_matrices::Vector{Matrix{Float64}})`

Construct a Markovian graph from the vector of transition matrices.

`transition_matrices[t][i, j]` gives the probability of transitioning from Markov state `i` in stage `t - 1` to Markov state `j` in stage `t`.

The dimension of the first transition matrix should be `(1, N)`, and `transition_matrices[1][1, i]` is the probability of transitioning from the root node to Markov state `i`.

**Examples**

```
julia> graph = SDDP.MarkovianGraph([ones(1, 1), [0.5 0.5], [0.8 0.2; 0.2 0.8]])
Root
 (0, 1)
Nodes
 (1, 1)
 (2, 1)
 (2, 2)
 (3, 1)
 (3, 2)
Arcs
 (0, 1) => (1, 1) w.p. 1.0
 (1, 1) => (2, 1) w.p. 0.5
 (1, 1) => (2, 2) w.p. 0.5
 (2, 1) => (3, 1) w.p. 0.8
 (2, 1) => (3, 2) w.p. 0.2
 (2, 2) => (3, 1) w.p. 0.2
 (2, 2) => (3, 2) w.p. 0.8
```

```
MarkovianGraph(;
    stages::Int,
    transition_matrix::Matrix{Float64},
    root_node_transition::Vector{Float64},
)
```

Construct a Markovian graph object with `stages` stages and time-independent Markov transition probabilities.

`transition_matrix` must be a square matrix, and the probability of transitioning from Markov state `i` in stage `t` to Markov state `j` in stage `t + 1` is given by `transition_matrix[i, j]`.

`root_node_transition[i]` is the probability of transitioning from the root node to Markov state `i` in the first stage.

**Examples**

```
julia> graph = SDDP.MarkovianGraph(;
           stages = 3,
           transition_matrix = [0.8 0.2; 0.2 0.8],
           root_node_transition = [0.5, 0.5],
       )
Root
 (0, 1)
Nodes
 (1, 1)
 (1, 2)
 (2, 1)
 (2, 2)
 (3, 1)
 (3, 2)
Arcs
 (0, 1) => (1, 1) w.p. 0.5
 (0, 1) => (1, 2) w.p. 0.5
 (1, 1) => (2, 1) w.p. 0.8
 (1, 1) => (2, 2) w.p. 0.2
 (1, 2) => (2, 1) w.p. 0.2
 (1, 2) => (2, 2) w.p. 0.8
 (2, 1) => (3, 1) w.p. 0.8
 (2, 1) => (3, 2) w.p. 0.2
 (2, 2) => (3, 1) w.p. 0.2
 (2, 2) => (3, 2) w.p. 0.8
```

```
MarkovianGraph(
    simulator::Function;
    budget::Union{Int,Vector{Int}},
    scenarios::Int = 1000,
)
```

Construct a Markovian graph by fitting a Markov chain to scenarios generated by `simulator()`.

`budget` is the total number of nodes in the resulting Markov chain. This can either be specified as a single `Int`, in which case we will attempt to intelligently distribute the nodes between stages. Alternatively, `budget` can be a `Vector{Int}`, which details the number of Markov states to have in each stage.
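For example, a minimal sketch, assuming a `simulator` that returns a vector of cumulative prices:

```
simulator() = cumsum(rand(10))
graph = SDDP.MarkovianGraph(simulator; budget = 8, scenarios = 30)
```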

`SDDP.UnicyclicGraph`

— Function `UnicyclicGraph(discount_factor::Float64; num_nodes::Int = 1)`

Construct a graph composed of `num_nodes` nodes that form a single cycle, with a probability of `discount_factor` of continuing the cycle.

**Examples**

```
julia> graph = SDDP.UnicyclicGraph(0.9; num_nodes = 2)
Root
 0
Nodes
 1
 2
Arcs
 0 => 1 w.p. 1.0
 1 => 2 w.p. 1.0
 2 => 1 w.p. 0.9
```

`SDDP.LinearPolicyGraph`

— Function `LinearPolicyGraph(builder::Function; stages::Int, kwargs...)`

Create a linear policy graph with `stages` stages.

**Keyword arguments**

- `stages`: the number of stages in the graph.
- `kwargs`: other keyword arguments are passed to `SDDP.PolicyGraph`.

**Examples**

```
julia> SDDP.LinearPolicyGraph(; stages = 2, lower_bound = 0.0) do sp, t
           # ... build model ...
       end
A policy graph with 2 nodes.
 Node indices: 1, 2
```

is equivalent to

```
julia> graph = SDDP.LinearGraph(2);

julia> SDDP.PolicyGraph(graph; lower_bound = 0.0) do sp, t
           # ... build model ...
       end
A policy graph with 2 nodes.
 Node indices: 1, 2
```

`SDDP.MarkovianPolicyGraph`

— Function```
MarkovianPolicyGraph(
    builder::Function;
    transition_matrices::Vector{Array{Float64,2}},
    kwargs...
)
```

Create a Markovian policy graph based on the transition matrices given in `transition_matrices`.

**Keyword arguments**

- `transition_matrices[t][i, j]` gives the probability of transitioning from Markov state `i` in stage `t - 1` to Markov state `j` in stage `t`. The dimension of the first transition matrix should be `(1, N)`, and `transition_matrices[1][1, i]` is the probability of transitioning from the root node to Markov state `i`.
- `kwargs`: other keyword arguments are passed to `SDDP.PolicyGraph`.

**See also**

See `SDDP.MarkovianGraph` for other ways of specifying a Markovian policy graph.

See `SDDP.PolicyGraph` for the other keyword arguments.

**Examples**

```
julia> SDDP.MarkovianPolicyGraph(;
           transition_matrices = [ones(1, 1), [0.5 0.5], [0.8 0.2; 0.2 0.8]],
           lower_bound = 0.0,
       ) do sp, node
           # ... build model ...
       end
A policy graph with 5 nodes.
 Node indices: (1, 1), (2, 1), (2, 2), (3, 1), (3, 2)
```

is equivalent to

```
julia> graph = SDDP.MarkovianGraph([ones(1, 1), [0.5 0.5], [0.8 0.2; 0.2 0.8]]);

julia> SDDP.PolicyGraph(graph; lower_bound = 0.0) do sp, t
           # ... build model ...
       end
A policy graph with 5 nodes.
 Node indices: (1, 1), (2, 1), (2, 2), (3, 1), (3, 2)
```

`SDDP.PolicyGraph`

— Type```
PolicyGraph(
    builder::Function,
    graph::Graph{T};
    sense::Symbol = :Min,
    lower_bound = -Inf,
    upper_bound = Inf,
    optimizer = nothing,
) where {T}
```

Construct a policy graph based on the graph structure of `graph`. (See `SDDP.Graph` for details.)

**Keyword arguments**

- `sense`: whether we are minimizing (`:Min`) or maximizing (`:Max`).
- `lower_bound`: if minimizing, a valid lower bound for the cost-to-go in all subproblems.
- `upper_bound`: if maximizing, a valid upper bound for the value-to-go in all subproblems.
- `optimizer`: the optimizer to use for each of the subproblems.

**Examples**

```
function builder(subproblem::JuMP.Model, index)
    # ... subproblem definition ...
end

model = PolicyGraph(
    builder,
    graph;
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
)
```

Or, using the Julia `do ... end` syntax:

```
model = PolicyGraph(
    graph;
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do subproblem, index
    # ... subproblem definitions ...
end
```

## Subproblem definition

`SDDP.@stageobjective`

— Macro `@stageobjective(subproblem, expr)`

Set the stage-objective of `subproblem` to `expr`.

**Examples**

`@stageobjective(subproblem, 2x + y)`
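As a fuller sketch, the macro is typically called inside the builder function of a policy graph; the variables `x` and `u` here are illustrative:

```
using SDDP

model = SDDP.LinearPolicyGraph(; stages = 3, lower_bound = 0.0) do sp, t
    @variable(sp, x >= 0, SDDP.State, initial_value = 2.0)
    @variable(sp, u >= 0)
    @constraint(sp, x.out == x.in - u)
    # Set a stage objective that depends on the stage index `t`.
    @stageobjective(sp, t * u)
end
```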

`SDDP.parameterize`

— Function```
parameterize(
    modify::Function,
    subproblem::JuMP.Model,
    realizations::Vector{T},
    probability::Vector{Float64} = fill(1.0 / length(realizations)),
) where {T}
```

Add a parameterization function `modify` to `subproblem`. The `modify` function takes one argument and modifies `subproblem` based on the realization of the noise sampled from `realizations` with corresponding probabilities `probability`.

In order to conduct an out-of-sample simulation, `modify` should accept arguments that are not in `realizations` (but still of type `T`).

**Examples**

```
SDDP.parameterize(subproblem, [1, 2, 3], [0.4, 0.3, 0.3]) do ω
    JuMP.set_upper_bound(x, ω)
end
```

`parameterize(node::Node, noise)`

Parameterize node `node` with the noise `noise`.

`SDDP.add_objective_state`

— Function `add_objective_state(update::Function, subproblem::JuMP.Model; kwargs...)`

Add an objective state variable to `subproblem`.

Required `kwargs` are:

- `initial_value`: the initial value of the objective state variable at the root node.
- `lipschitz`: the Lipschitz constant of the objective state variable.

Setting a tight value for the Lipschitz constant can significantly improve the speed of convergence.

Optional `kwargs` are:

- `lower_bound`: a valid lower bound for the objective state variable. Can be `-Inf`.
- `upper_bound`: a valid upper bound for the objective state variable. Can be `+Inf`.

Setting tight values for these optional variables can significantly improve the speed of convergence.

If the objective state is `N`-dimensional, each keyword argument must be an `NTuple{N,Float64}`. For example, `initial_value = (0.0, 1.0)`.
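A minimal sketch of a univariate objective state representing a random price, following the pattern used in the SDDP.jl pricing tutorials; the names `price`, `u`, `Ω`, and `price_noise` are illustrative:

```
SDDP.add_objective_state(
    subproblem;
    initial_value = 1.5,
    lipschitz = 100.0,
    lower_bound = 0.5,
    upper_bound = 2.5,
) do price, ω
    # The update function maps the incoming objective state to the
    # outgoing objective state, given the noise `ω`.
    return price + ω.price_noise
end

SDDP.parameterize(subproblem, Ω) do ω
    # Query the current objective state inside `parameterize`.
    price = SDDP.objective_state(subproblem)
    @stageobjective(subproblem, price * u)
end
```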

`SDDP.objective_state`

— Function `objective_state(subproblem::JuMP.Model)`

Return the current objective state of the problem.

Can only be called from `SDDP.parameterize`.

`SDDP.Noise`

— Type `Noise(support, probability)`

An atom of a discrete random variable at the point of support `support` and associated probability `probability`.
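For example, a random variable taking the values `0.0`, `0.5`, or `1.0` with equal probability could be represented as (a sketch):

`noise_terms = [SDDP.Noise(ω, 1 / 3) for ω in [0.0, 0.5, 1.0]]`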

## Training the policy

`SDDP.numerical_stability_report`

— Function```
numerical_stability_report(
    [io::IO = stdout,]
    model::PolicyGraph;
    by_node::Bool = false,
    print::Bool = true,
    warn::Bool = true,
)
```

Print a report identifying possible numeric stability issues.

**Keyword arguments**

- If `by_node`, print a report for each node in the graph.
- If `print`, print to `io`.
- If `warn`, warn if the coefficients may cause numerical issues.

`SDDP.train`

— Function `SDDP.train(model::PolicyGraph; kwargs...)`

Train the policy for `model`.

**Keyword arguments**

- `iteration_limit::Int`: number of iterations to conduct before termination.
- `time_limit::Float64`: number of seconds to train before termination.
- `stopping_rules`: a vector of `SDDP.AbstractStoppingRule`s. Defaults to `SimulationStoppingRule`.
- `print_level::Int`: control the level of printing to the screen. Defaults to `1`. Set to `0` to disable all printing.
- `log_file::String`: filepath at which to write a log of the training progress. Defaults to `SDDP.log`.
- `log_frequency::Int`: control the frequency with which the log is written (iterations/log). It must be at least `1`. Defaults to `1`.
- `log_every_seconds::Float64`: control the frequency with which the log is written (seconds/log). Defaults to `0.0`.
- `log_every_iteration::Bool`: overrides `log_frequency` and `log_every_seconds` to force every iteration to be printed. Defaults to `false`.
- `run_numerical_stability_report::Bool`: generate (and print) a numerical stability report prior to solve. Defaults to `true`.
- `refine_at_similar_nodes::Bool`: if SDDP can detect that two nodes have the same children, it can cheaply add a cut discovered at one to the other. In almost all cases this should be set to `true`.
- `cut_deletion_minimum::Int`: the minimum number of cuts to cache before deleting cuts from the subproblem. The impact on performance is solver specific; however, smaller values result in smaller subproblems (and therefore quicker solves), at the expense of more time spent performing cut selection.
- `risk_measure`: the risk measure to use at each node. Defaults to `Expectation`.
- `sampling_scheme`: a sampling scheme to use on the forward pass of the algorithm. Defaults to `InSampleMonteCarlo`.
- `backward_sampling_scheme`: a backward pass sampling scheme to use on the backward pass of the algorithm. Defaults to `CompleteSampler`.
- `cut_type`: choose between `SDDP.SINGLE_CUT` and `SDDP.MULTI_CUT` versions of SDDP.
- `dashboard::Bool`: open a visualization of the training over time. Defaults to `false`.
- `parallel_scheme::AbstractParallelScheme`: specify a scheme for solving in parallel. Defaults to `Serial()`.
- `forward_pass::AbstractForwardPass`: specify a scheme to use for the forward passes.
- `forward_pass_resampling_probability::Union{Nothing,Float64}`: set to a value in `(0, 1)` to enable `RiskAdjustedForwardPass`. Defaults to `nothing` (disabled).
- `add_to_existing_cuts::Bool`: set to `true` to allow training a model that was previously trained. Defaults to `false`.
- `duality_handler::AbstractDualityHandler`: specify a duality handler to use when creating cuts.
- `post_iteration_callback::Function`: a callback with the signature `post_iteration_callback(::IterationResult)` that is evaluated after each iteration of the algorithm.

There is also a special option for infinite horizon problems:

- `cycle_discretization_delta`: the maximum distance between states allowed on the forward pass. This is for advanced users only and needs to be used in conjunction with a different `sampling_scheme`.
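A representative call, shown as a sketch in which the keyword values are illustrative only:

```
SDDP.train(
    model;
    iteration_limit = 100,
    log_frequency = 10,
    risk_measure = SDDP.Expectation(),
    parallel_scheme = SDDP.Serial(),
)
```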

`SDDP.termination_status`

— Function `termination_status(model::PolicyGraph)::Symbol`

Query the reason why the training stopped.

`SDDP.write_cuts_to_file`

— Function```
write_cuts_to_file(
    model::PolicyGraph{T},
    filename::String;
    node_name_parser::Function = string,
) where {T}
```

Write the cuts that form the policy in `model` to `filename` in JSON format.

`node_name_parser` is a function which converts the name of each node into a string representation. It has the signature: `node_name_parser(::T)::String`.

See also `SDDP.read_cuts_from_file`.

`SDDP.read_cuts_from_file`

— Function```
read_cuts_from_file(
    model::PolicyGraph{T},
    filename::String;
    node_name_parser::Function = _node_name_parser,
) where {T}
```

Read cuts (saved using `SDDP.write_cuts_to_file`) from `filename` into `model`.

Since `T` can be an arbitrary Julia type, the conversion to JSON is lossy. When reading, `read_cuts_from_file` only supports `T=Int`, `T=NTuple{N, Int}`, and `T=Symbol`. If you have manually created a policy graph with a different node type `T`, provide a function `node_name_parser` with the signature `node_name_parser(T, name::String)::T where {T}` that returns the name of each node given the string name `name`.

If `node_name_parser` returns `nothing`, those cuts are skipped.

See also `SDDP.write_cuts_to_file`.
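A sketch of a save-and-reload round trip, assuming `model` has been trained and `new_model` is a structurally identical policy graph with the same node type:

```
SDDP.write_cuts_to_file(model, "cuts.json")
# `new_model` must have the same nodes as `model`.
SDDP.read_cuts_from_file(new_model, "cuts.json")
```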

`SDDP.write_log_to_csv`

— Function `write_log_to_csv(model::PolicyGraph, filename::String)`

Write the log of the most recent training to a csv for post-analysis.

Assumes that the model has been trained via `SDDP.train`.
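For example (a sketch, assuming `model` has been trained):

`SDDP.write_log_to_csv(model, "training_log.csv")`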

### Stopping rules

`SDDP.AbstractStoppingRule`

— Type `AbstractStoppingRule`

The abstract type for the stopping-rule interface.

You need to define the following methods:

`SDDP.stopping_rule_status`

— Function `stopping_rule_status(::AbstractStoppingRule)::Symbol`

Return a symbol describing the stopping rule.

`SDDP.convergence_test`

— Function```
convergence_test(
    model::PolicyGraph,
    log::Vector{Log},
    ::AbstractStoppingRule,
)::Bool
```

Return a `Bool` indicating if the algorithm should terminate the training.

`SDDP.IterationLimit`

— Type `IterationLimit(limit::Int)`

Terminate the algorithm after `limit` iterations.

`SDDP.TimeLimit`

— Type `TimeLimit(limit::Float64)`

Terminate the algorithm after `limit` seconds of computation.

`SDDP.Statistical`

— Type```
Statistical(;
    num_replications::Int,
    iteration_period::Int = 1,
    z_score::Float64 = 1.96,
    verbose::Bool = true,
    disable_warning::Bool = false,
)
```

Perform an in-sample Monte Carlo simulation of the policy with `num_replications` replications every `iteration_period` iterations, and terminate if the deterministic bound (lower if minimizing) falls into the confidence interval for the mean of the simulated cost.

If `verbose = true`, print the confidence interval.

If `disable_warning = true`, disable the warning telling you not to use this stopping rule (see below).

**Why this stopping rule is not good**

This stopping rule is one of the most common stopping rules seen in the literature. Don't follow the crowd. It is a poor choice for your model, and should be rarely used. Instead, you should use the default stopping rule, or use a fixed limit like a time or iteration limit.

To understand why this stopping rule is a bad idea, assume we have conducted `num_replications` simulations and the objectives are in a vector `objectives::Vector{Float64}`.

Our mean is `μ = mean(objectives)` and the half-width of the confidence interval is `w = z_score * std(objectives) / sqrt(num_replications)`.

Many papers suggest terminating the algorithm once the deterministic bound (lower if minimizing, upper if maximizing) is contained within the confidence interval. That is, if `μ - w <= bound <= μ + w`. Even worse, some papers define an optimization gap of `(μ + w) / bound` (if minimizing) or `(μ - w) / bound` (if maximizing), and they terminate once the gap is less than a value like 1%.

Both of these approaches are misleading, and more often than not, they will result in terminating with a sub-optimal policy that performs worse than expected. There are two main reasons for this:

- The half-width depends on the number of replications. To reduce the computational cost, users are often tempted to choose a small number of replications. This increases the half-width and makes it more likely that the algorithm will stop early. But if we choose a large number of replications, then the computational cost is high, and we would have been better off to run a fixed number of iterations and use that computational time to run extra training iterations.
- The confidence interval assumes that the simulated values are normally distributed. In infinite horizon models, this is almost never the case. The distribution is usually closer to exponential or log-normal.

There is a third, more technical reason which relates to the conditional dependence of constructing multiple confidence intervals.

The default value of `z_score = 1.96` corresponds to a 95% confidence interval. You should interpret the interval as "if we re-run this simulation 100 times, then the true mean will lie in the confidence interval 95 times out of 100." But if the bound is within the confidence interval, then we know the true mean cannot be better than the bound. Therefore, there is a more than 95% chance that the mean is within the interval.

A separate problem arises if we simulate, find that the bound is outside the confidence interval, keep training, and then re-simulate to compute a new confidence interval. Because we will terminate when the bound enters the confidence interval, the repeated construction of a confidence interval means that the unconditional probability that we terminate with a false positive is larger than 5% (there are now more chances that the sample mean is optimistic and that the confidence interval includes the bound but not the true mean). One fix is to simulate with a sequentially increasing number of replicates, so that the unconditional probability stays at 95%, but this runs into the problem of computational cost. For more information on sequential sampling, see, for example, Güzin Bayraksan, David P. Morton, (2011) A Sequential Sampling Procedure for Stochastic Programming. Operations Research 59(4):898-913.

`SDDP.BoundStalling`

— Type `BoundStalling(num_previous_iterations::Int, tolerance::Float64)`

Terminate the algorithm once the deterministic bound (lower if minimizing, upper if maximizing) fails to improve by more than `tolerance` in absolute terms for more than `num_previous_iterations` consecutive iterations, provided it has improved relative to the bound after the first iteration.

Checking for an improvement relative to the first iteration avoids early termination in a situation where the bound fails to improve for the first `N` iterations. This frequently happens in models with a large number of stages, where it takes time for the cuts to propagate backward enough to modify the bound of the root node.
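For example, to stop once the bound has improved by less than `1e-4` over ten consecutive iterations (a sketch):

`SDDP.train(model; stopping_rules = [SDDP.BoundStalling(10, 1e-4)])`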

`SDDP.StoppingChain`

— Type `StoppingChain(rules::AbstractStoppingRule...)`

Terminate once all of the `rules` are satisfied.

This stopping rule short-circuits, so subsequent rules are only tested if the previous rules pass.

**Examples**

A stopping rule that runs 100 iterations, then checks for the bound stalling:

`StoppingChain(IterationLimit(100), BoundStalling(5, 0.1))`

`SDDP.SimulationStoppingRule`

— Type```
SimulationStoppingRule(;
    sampling_scheme::AbstractSamplingScheme = SDDP.InSampleMonteCarlo(),
    replications::Int = -1,
    period::Int = -1,
    distance_tol::Float64 = 1e-2,
    bound_tol::Float64 = 1e-4,
)
```

Terminate the algorithm using a mix of heuristics. Unless you know otherwise, this is typically a good default.

**Termination criteria**

First, we check that the deterministic bound has stabilized. That is, over the last five iterations, the deterministic bound has changed by less than an absolute or relative tolerance of `bound_tol`.

Then, if we have not done one in the last `period` iterations, we perform a primal simulation of the policy using `replications` out-of-sample realizations from `sampling_scheme`. The realizations are stored and re-used in each simulation. From each simulation, we record the value of the stage objective. We terminate the policy if each of the trajectories in two consecutive simulations differ by less than `distance_tol`.

By default, `replications` and `period` are `-1`, and SDDP.jl will guess good values for these. Override the default behavior by setting an appropriate value.

**Example**

`SDDP.train(model; stopping_rules = [SimulationStoppingRule()])`

`SDDP.FirstStageStoppingRule`

— Type `FirstStageStoppingRule(; atol::Float64 = 1e-3, iterations::Int = 50)`

Terminate the algorithm when the outgoing values of the first-stage state variables have not changed by more than `atol` for `iterations` consecutive iterations.

**Example**

`SDDP.train(model; stopping_rules = [FirstStageStoppingRule()])`

### Sampling schemes

`SDDP.AbstractSamplingScheme`

— Type `AbstractSamplingScheme`

The abstract type for the sampling-scheme interface.

You need to define the following methods:

`SDDP.sample_scenario`

— Function `sample_scenario(graph::PolicyGraph{T}, ::AbstractSamplingScheme) where {T}`

Sample a scenario from the policy graph `graph` based on the sampling scheme.

Returns `::Tuple{Vector{Tuple{T, <:Any}}, Bool}`, where the first element is the scenario, and the second element is a Boolean flag indicating if the scenario was terminated due to the detection of a cycle.

The scenario is a list of tuples (type `Vector{Tuple{T, <:Any}}`) where the first component of each tuple is the index of the node, and the second component is the stagewise-independent noise term observed in that node.

`SDDP.InSampleMonteCarlo`

— Type```
InSampleMonteCarlo(;
    max_depth::Int = 0,
    terminate_on_cycle::Function = false,
    terminate_on_dummy_leaf::Function = true,
    rollout_limit::Function = (i::Int) -> typemax(Int),
    initial_node::Any = nothing,
)
```

A Monte Carlo sampling scheme using the in-sample data from the policy graph definition.

If `terminate_on_cycle`, terminate the forward pass once a cycle is detected. If `max_depth > 0`, return once `max_depth` nodes have been sampled. If `terminate_on_dummy_leaf`, terminate the forward pass with 1 - probability of sampling a child node.

Note that if `terminate_on_cycle = false` and `terminate_on_dummy_leaf = false` then `max_depth` must be set > 0.

Control which node the trajectories start from using `initial_node`. If it is left as `nothing`, the root node is used as the starting node.

You can use `rollout_limit` to set iteration-specific depth limits. For example:

`InSampleMonteCarlo(rollout_limit = i -> 2 * i)`

`SDDP.OutOfSampleMonteCarlo`

— Type```
OutOfSampleMonteCarlo(
    f::Function,
    graph::PolicyGraph;
    use_insample_transition::Bool = false,
    max_depth::Int = 0,
    terminate_on_cycle::Bool = false,
    terminate_on_dummy_leaf::Bool = true,
    rollout_limit::Function = i -> typemax(Int),
    initial_node = nothing,
)
```

Create a Monte Carlo sampler using out-of-sample probabilities and/or supports for the stagewise-independent noise terms, and out-of-sample probabilities for the node-transition matrix.

`f` is a function that takes the name of a node and returns a tuple containing a vector of new `SDDP.Noise` terms for the children of that node, and a vector of new `SDDP.Noise` terms for the stagewise-independent noise.

If `f` is called with the name of the root node (e.g., `0` in a linear policy graph, `(0, 1)` in a Markovian policy graph), then return a vector of `SDDP.Noise` for the children of the root node.

If `use_insample_transition`, the in-sample transition probabilities will be used. Therefore, `f` should only return a vector of the stagewise-independent noise terms, and `f` will not be called for the root node.

If `terminate_on_cycle`, terminate the forward pass once a cycle is detected. If `max_depth > 0`, return once `max_depth` nodes have been sampled. If `terminate_on_dummy_leaf`, terminate the forward pass with 1 - probability of sampling a child node.

Note that if `terminate_on_cycle = false` and `terminate_on_dummy_leaf = false` then `max_depth` must be set > 0.

Control which node the trajectories start from using `initial_node`. If it is left as `nothing`, the root node is used as the starting node.

If a node is deterministic, pass `[SDDP.Noise(nothing, 1.0)]` as the vector of noise terms.

You can use `rollout_limit` to set iteration-specific depth limits. For example:

`OutOfSampleMonteCarlo(rollout_limit = i -> 2 * i)`

**Examples**

Given a linear policy graph `graph` with `T` stages:

```
sampler = OutOfSampleMonteCarlo(graph) do node
    if node == 0
        return [SDDP.Noise(1, 1.0)]
    else
        noise_terms = [SDDP.Noise(node, 0.3), SDDP.Noise(node + 1, 0.7)]
        children = node < T ? [SDDP.Noise(node + 1, 0.9)] : SDDP.Noise{Int}[]
        return children, noise_terms
    end
end
```

Given a linear policy graph `graph` with `T` stages:

```
sampler = OutOfSampleMonteCarlo(graph; use_insample_transition = true) do node
    return [SDDP.Noise(node, 0.3), SDDP.Noise(node + 1, 0.7)]
end
```

`SDDP.Historical`

— Type```
Historical(
    scenarios::Vector{Vector{Tuple{T,S}}},
    probability::Vector{Float64};
    terminate_on_cycle::Bool = false,
) where {T,S}
```

A sampling scheme that samples a scenario from the vector of scenarios `scenarios` according to `probability`.

**Examples**

```
Historical(
    [
        [(1, 0.5), (2, 1.0), (3, 0.5)],
        [(1, 0.5), (2, 0.0), (3, 1.0)],
        [(1, 1.0), (2, 0.0), (3, 0.0)],
    ],
    [0.2, 0.5, 0.3],
)
```

```
Historical(
    scenarios::Vector{Vector{Tuple{T,S}}};
    terminate_on_cycle::Bool = false,
) where {T,S}
```

A deterministic sampling scheme that iterates through the vector of provided `scenarios`.

**Examples**

```
Historical([
    [(1, 0.5), (2, 1.0), (3, 0.5)],
    [(1, 0.5), (2, 0.0), (3, 1.0)],
    [(1, 1.0), (2, 0.0), (3, 0.0)],
])
```

```
Historical(
    scenario::Vector{Tuple{T,S}};
    terminate_on_cycle::Bool = false,
) where {T,S}
```

A deterministic sampling scheme that always samples `scenario`.

**Examples**

`Historical([(1, 0.5), (2, 1.5), (3, 0.75)])`

`SDDP.PSRSamplingScheme`

— Type `PSRSamplingScheme(N::Int; sampling_scheme = InSampleMonteCarlo())`

A sampling scheme with `N` scenarios, similar to how PSR does it.
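A sketch of its use:

`SDDP.train(model; sampling_scheme = SDDP.PSRSamplingScheme(10))`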

`SDDP.SimulatorSamplingScheme`

— Type `SimulatorSamplingScheme(simulator::Function)`

Create a sampling scheme based on a univariate scenario generator `simulator`, which returns a `Vector{Float64}` when called with no arguments like `simulator()`.

This sampling scheme must be used with a Markovian graph constructed from the same `simulator`.

The sample space for `SDDP.parameterize` must be a tuple with 1 or 2 values: the first value is the Markov state and the second value is the random variable for the current node. If the node is deterministic, use `Ω = [(markov_state,)]`.

This sampling scheme generates a new scenario by calling `simulator()`, and then picking the sequence of nodes in the Markovian graph that is closest to the new trajectory.

**Example**

```
julia> using SDDP

julia> import HiGHS

julia> simulator() = cumsum(rand(10))
simulator (generic function with 1 method)

julia> model = SDDP.PolicyGraph(
           SDDP.MarkovianGraph(simulator; budget = 20, scenarios = 100);
           sense = :Max,
           upper_bound = 12,
           optimizer = HiGHS.Optimizer,
       ) do sp, node
           t, markov_state = node
           @variable(sp, x >= 0, SDDP.State, initial_value = 1)
           @variable(sp, u >= 0)
           @constraint(sp, x.out == x.in - u)
           # Elements of Ω MUST be a tuple in which `markov_state` is the first
           # element.
           Ω = [(markov_state, (u = u_max,)) for u_max in (0.0, 0.5)]
           SDDP.parameterize(sp, Ω) do (markov_state, ω)
               set_upper_bound(u, ω.u)
               @stageobjective(sp, markov_state * u)
           end
       end;

julia> SDDP.train(
           model;
           print_level = 0,
           iteration_limit = 10,
           sampling_scheme = SDDP.SimulatorSamplingScheme(simulator),
       )
```

### Parallel schemes

`SDDP.AbstractParallelScheme`

— Type `AbstractParallelScheme`

Abstract type for different parallelism schemes.

`SDDP.Serial`

— Type `Serial()`

Run SDDP in serial mode.

`SDDP.Threaded`

— Type `Threaded()`

Run SDDP in multi-threaded mode.

Use `julia --threads N` to start Julia with `N` threads. In most cases, you should pick `N` to be the number of physical cores on your machine.

This plug-in is experimental, and parts of SDDP.jl may not be threadsafe. If you encounter any problems or crashes, please open a GitHub issue.

**Example**

```
SDDP.train(model; parallel_scheme = SDDP.Threaded())
SDDP.simulate(model; parallel_scheme = SDDP.Threaded())
```

`SDDP.Asynchronous`

— Type```
Asynchronous(
    [init_callback::Function,]
    slave_pids::Vector{Int} = workers();
    use_master::Bool = true,
)
```

Run SDDP in asynchronous mode on the workers with pids `slave_pids`.

After initializing the models on each worker, call `init_callback(model)`. Note that `init_callback` is run *locally on the worker* and *not* on the master thread.

If `use_master` is `true`, iterations are also conducted on the master process.

```
Asynchronous(
    solver::Any,
    slave_pids::Vector{Int} = workers();
    use_master::Bool = true,
)
```

Run SDDP in asynchronous mode on the workers with pids `slave_pids`.

Set the optimizer on each worker by calling `JuMP.set_optimizer(model, solver)`.
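A sketch of a typical multi-process setup, assuming the packages and model builder are available on every worker:

```
using Distributed
Distributed.addprocs(4)
@everywhere using SDDP
# Build `model` as usual, then train on the workers:
SDDP.train(model; parallel_scheme = SDDP.Asynchronous())
```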

### Forward passes

`SDDP.AbstractForwardPass`

— Type `AbstractForwardPass`

Abstract type for different forward passes.

`SDDP.DefaultForwardPass`

— Type `DefaultForwardPass(; include_last_node::Bool = true)`

The default forward pass.

If `include_last_node = false` and the sample terminated due to a cycle, then the last node (which forms the cycle) is omitted. This can be a useful option to set when training, but it comes at the cost of not knowing which node formed the cycle (if there are multiple possibilities).

`SDDP.RevisitingForwardPass`

— Type```
RevisitingForwardPass(
    period::Int = 500;
    sub_pass::AbstractForwardPass = DefaultForwardPass(),
)
```

A forward pass scheme that generates `period` new forward passes (using `sub_pass`), then revisits all previously explored forward passes. This can be useful to encourage convergence at a diversity of points in the state-space.

Set `period = typemax(Int)` to disable.

For example, if `period = 2`, then the forward passes will be revisited as follows: `1, 2, 1, 2, 3, 4, 1, 2, 3, 4, 5, 6, 1, 2, ...`.
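A sketch of its use:

`SDDP.train(model; forward_pass = SDDP.RevisitingForwardPass(100))`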

`SDDP.RiskAdjustedForwardPass`

— Type```
RiskAdjustedForwardPass(;
    forward_pass::AbstractForwardPass,
    risk_measure::AbstractRiskMeasure,
    resampling_probability::Float64,
    rejection_count::Int = 5,
)
```

A forward pass that resamples a previous forward pass with probability `resampling_probability`, and otherwise samples a new forward pass using `forward_pass`.

The forward pass to revisit is chosen based on the risk-adjusted (using `risk_measure`) probability of the cumulative stage objectives.

Note that this objective corresponds to the *first* time we visited the trajectory. Subsequent visits may have improved things, but we don't have the mechanisms in place to update it. Therefore, we remove the forward pass from resampling consideration after `rejection_count` revisits.

`SDDP.AlternativeForwardPass`

— Type```
AlternativeForwardPass(
    forward_model::SDDP.PolicyGraph{T};
    forward_pass::AbstractForwardPass = DefaultForwardPass(),
)
```

A forward pass that simulates using `forward_model`, which may be different to the model used in the backward pass.

When using this forward pass, you should almost always pass `SDDP.AlternativePostIterationCallback` to the `post_iteration_callback` argument of `SDDP.train`.

This forward pass is most useful when the `forward_model` is non-convex and we use a convex approximation of the model in the backward pass.

For example, in optimal power flow models, we can use an AC-OPF formulation as the `forward_model` and a DC-OPF formulation as the backward model.

For more details see the paper:

Rosemberg, A., and Street, A., and Garcia, J.D., and Valladão, D.M., and Silva, T., and Dowson, O. (2021). Assessing the cost of network simplifications in long-term hydrothermal dispatch planning models. IEEE Transactions on Sustainable Energy. 13(1), 196-206.

`SDDP.AlternativePostIterationCallback`

— Type `AlternativePostIterationCallback(forward_model::PolicyGraph)`

A post-iteration callback that should be used whenever `SDDP.AlternativeForwardPass` is used.

`SDDP.RegularizedForwardPass`

— Type```
RegularizedForwardPass(;
    rho::Float64 = 0.05,
    forward_pass::AbstractForwardPass = DefaultForwardPass(),
)
```

A forward pass that regularizes the outgoing first-stage state variables with an L-infty trust-region constraint about the previous iteration's solution. Specifically, the bounds of the outgoing state variable `x` are updated from `(l, u)` to `max(l, x^k - rho * (u - l)) <= x <= min(u, x^k + rho * (u - l))`, where `x^k` is the optimal solution of `x` in the previous iteration. On the first iteration, the value of the state at the root node is used.

By default, `rho` is set to 5%, which seems to work well empirically.

Pass a different `forward_pass` to control the forward pass within the regularized forward pass.

This forward pass is largely intended to be used for investment problems in which the first stage makes a series of capacity decisions that then influence the rest of the graph. An error is thrown if the first stage problem is not deterministic, and states are silently skipped if they do not have finite bounds.
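A sketch of its use:

`SDDP.train(model; forward_pass = SDDP.RegularizedForwardPass(; rho = 0.1))`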

### Risk Measures

`SDDP.AbstractRiskMeasure`

— Type `AbstractRiskMeasure`

The abstract type for the risk measure interface.

You need to define the following methods:

`SDDP.adjust_probability`

— Function```
adjust_probability(
    measure::Expectation,
    risk_adjusted_probability::Vector{Float64},
    original_probability::Vector{Float64},
    noise_support::Vector{Noise{T}},
    objective_realizations::Vector{Float64},
    is_minimization::Bool,
) where {T}
```

### Duality handlers

`SDDP.AbstractDualityHandler`

— Type `AbstractDualityHandler`

The abstract type for the duality handler interface.

`SDDP.ContinuousConicDuality`

— Type `ContinuousConicDuality()`

Compute dual variables in the backward pass using conic duality, relaxing any binary or integer restrictions as necessary.

**Theory**

Given the problem

```
min Cᵢ(x̄, u, w) + θᵢ
 st (x̄, x′, u) in Xᵢ(w) ∩ S
    x̄ - x == 0          [λ]
```

where `S ⊆ ℝ×ℤ`, we relax integrality and use conic duality to solve for `λ` in the problem:

```
min Cᵢ(x̄, u, w) + θᵢ
 st (x̄, x′, u) in Xᵢ(w)
    x̄ - x == 0          [λ]
```
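Duality handlers are passed to `SDDP.train`; for example (a sketch):

`SDDP.train(model; duality_handler = SDDP.ContinuousConicDuality())`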

`SDDP.LagrangianDuality`

— Type```
LagrangianDuality(;
    method::LocalImprovementSearch.AbstractSearchMethod =
        LocalImprovementSearch.BFGS(100),
)
```

Obtain dual variables in the backward pass using Lagrangian duality.

**Arguments**

- `method`: the `LocalImprovementSearch` method for maximizing the Lagrangian dual problem.

**Theory**

Given the problem

```
min Cᵢ(x̄, u, w) + θᵢ
 st (x̄, x′, u) in Xᵢ(w) ∩ S
    x̄ - x == 0          [λ]
```

where `S ⊆ ℝ×ℤ`, we solve the problem `max L(λ)`, where:

```
L(λ) = min Cᵢ(x̄, u, w) + θᵢ - λ' h(x̄)
        st (x̄, x′, u) in Xᵢ(w) ∩ S
```

and where `h(x̄) = x̄ - x`.

`SDDP.StrengthenedConicDuality`

— Type `StrengthenedConicDuality()`

Obtain dual variables in the backward pass using strengthened conic duality.

**Theory**

Given the problem

```
min Cᵢ(x̄, u, w) + θᵢ
 st (x̄, x′, u) in Xᵢ(w) ∩ S
    x̄ - x == 0          [λ]
```

we first obtain an estimate for `λ` using `ContinuousConicDuality`.

Then, we evaluate the Lagrangian function:

```
L(λ) = min Cᵢ(x̄, u, w) + θᵢ - λ' (x̄ - x)
        st (x̄, x′, u) in Xᵢ(w) ∩ S
```

to obtain a better estimate of the intercept.

`SDDP.BanditDuality`

— Type `BanditDuality()`

Formulates the problem of choosing a duality handler as a multi-armed bandit problem. The arms to choose between are:

- `ContinuousConicDuality`
- `StrengthenedConicDuality`
- `LagrangianDuality`

Our problem isn't a typical multi-armed bandit for two reasons:

- The reward distribution is non-stationary (each arm converges to 0 as it keeps getting pulled).
- The distribution of rewards is dependent on the history of the arms that were chosen.

We choose a very simple heuristic: pick the arm with the best mean + 1 standard deviation. That should ensure we consistently pick the arm with the best likelihood of improving the value function.

In the future, we should consider discounting the rewards of earlier iterations, and focusing more on recent rewards.

## Simulating the policy

`SDDP.simulate`

— Function```
simulate(
    model::PolicyGraph,
    number_replications::Int = 1,
    variables::Vector{Symbol} = Symbol[];
    sampling_scheme::AbstractSamplingScheme = InSampleMonteCarlo(),
    custom_recorders = Dict{Symbol, Function}(),
    duality_handler::Union{Nothing,AbstractDualityHandler} = nothing,
    skip_undefined_variables::Bool = false,
    parallel_scheme::AbstractParallelScheme = Serial(),
    incoming_state::Dict{String,Float64} = _initial_state(model),
)::Vector{Vector{Dict{Symbol,Any}}}
```

Perform a simulation of the policy model with `number_replications` replications.

**Return data structure**

Returns a vector with one element for each replication. Each element is a vector with one element for each node in the scenario that was sampled. Each element in that vector is a dictionary containing information about the subproblem that was solved.

In that dictionary there are four special keys:

- `:node_index`, which records the index of the sampled node in the policy model.
- `:noise_term`, which records the noise observed at the node.
- `:stage_objective`, which records the stage-objective of the subproblem.
- `:bellman_term`, which records the cost/value-to-go of the node.

The sum of `:stage_objective + :bellman_term` will equal the objective value of the solved subproblem.

In addition to the special keys, the dictionary will contain the result of `key => JuMP.value(subproblem[key])` for each `key` in `variables`. This is useful to obtain the primal value of the state and control variables.

**Positional arguments**

- `model`: the model to simulate.
- `number_replications::Int = 1`: the number of simulation replications to conduct, that is, the length of the simulation vector that is returned by this function. If omitted, this defaults to `1`.
- `variables::Vector{Symbol} = Symbol[]`: a list of the variable names to record the value of in each stage.

**Keyword arguments**

- `sampling_scheme`: the sampling scheme used when simulating.
- `custom_recorders`: see the **Custom recorders** section below.
- `duality_handler`: the `SDDP.AbstractDualityHandler` used to compute dual variables. If you do not require dual variables (or if they are not available), pass `duality_handler = nothing`.
- `skip_undefined_variables`: if you attempt to simulate the value of a variable that is only defined in some of the stage problems, an error will be thrown. To override this (and return a `NaN` instead), pass `skip_undefined_variables = true`.
- `parallel_scheme`: use `parallel_scheme::AbstractParallelScheme` to specify a scheme for simulating in parallel. Defaults to `Serial`.
- `incoming_state`: use `incoming_state` to pass an initial value of the state variable, if it differs from that at the root node. Each key should be the string name of the state variable.

**Custom recorders**

For more complicated data, the `custom_recorders` keyword argument can be used.

For example, to record the dual of a constraint named `my_constraint`, pass the following:

```
simulation_results = SDDP.simulate(model, 2;
    custom_recorders = Dict{Symbol, Function}(
        :constraint_dual => sp -> JuMP.dual(sp[:my_constraint]),
    ),
)
```

The value of the dual in the first stage of the second replication can be accessed as:

`simulation_results[2][1][:constraint_dual]`

`SDDP.calculate_bound`

— Function```
SDDP.calculate_bound(
    model::PolicyGraph,
    state::Dict{Symbol,Float64} = model.initial_root_state;
    risk_measure::AbstractRiskMeasure = Expectation(),
)
```

Calculate the lower bound (if minimizing, otherwise the upper bound) of the problem `model` at the point `state`, assuming the risk measure at the root node is `risk_measure`.
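For example (a sketch, assuming `model` has been trained):

`bound = SDDP.calculate_bound(model)`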

`SDDP.add_all_cuts`

— Function `add_all_cuts(model::PolicyGraph)`

Add all cuts that may have been deleted back into the model.

**Explanation**

During the solve, SDDP.jl may decide to remove cuts for a variety of reasons.

These can include cuts that define the optimal value function, particularly around the extremes of the state-space (e.g., reservoirs empty).

This function ensures that all cuts discovered are added back into the model.

## Decision rules

`SDDP.DecisionRule`

— Type `DecisionRule(model::PolicyGraph{T}; node::T)`

Create a decision rule for node `node` in `model`.

**Example**

`rule = SDDP.DecisionRule(model; node = 1)`

`SDDP.evaluate`

— Function```
evaluate(
    rule::DecisionRule;
    incoming_state::Dict{Symbol,Float64},
    noise = nothing,
    controls_to_record = Symbol[],
)
```

Evaluate the decision rule `rule` at the point described by the `incoming_state` and `noise`.

If the node is deterministic, omit the `noise` argument.

Pass a list of symbols to `controls_to_record` to save the optimal primal solution corresponding to the names registered in the model.
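A sketch, assuming a deterministic node with a state named `x` and a control named `u`:

```
rule = SDDP.DecisionRule(model; node = 1)
solution = SDDP.evaluate(
    rule;
    incoming_state = Dict(:x => 1.0),
    controls_to_record = [:u],
)
```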

```
evaluate(
    V::ValueFunction,
    point::Dict{Union{Symbol,String},<:Real};
    objective_state = nothing,
    belief_state = nothing,
)
```

Evaluate the value function `V` at `point` in the state-space.

Returns a tuple containing the height of the function, and the subgradient w.r.t. the convex state-variables.

**Examples**

`evaluate(V, Dict(:volume => 1.0))`

If the state variable is constructed like `@variable(sp, volume[1:4] >= 0, SDDP.State, initial_value = 0.0)`, use `[i]` to index the state variable:

`evaluate(V, Dict(Symbol("volume[1]") => 1.0))`

You can also use strings or symbols for the keys.

`evaluate(V, Dict("volume[1]" => 1))`

`evaluate(V::ValueFunction{Nothing, Nothing}; kwargs...)`

Evaluate the value function `V` at the point in the state-space specified by `kwargs`.

**Examples**

`evaluate(V; volume = 1)`

```
evaluate(
    model::PolicyGraph{T},
    validation_scenarios::ValidationScenarios{T,S},
) where {T,S}
```

Evaluate the performance of the policy contained in `model` after a call to `train` on the scenarios specified by `validation_scenarios`.

**Examples**

```
model, validation_scenarios = read_from_file("my_model.sof.json")
train(model; iteration_limit = 100)
simulations = evaluate(model, validation_scenarios)
```

## Visualizing the policy

`SDDP.SpaghettiPlot`

— Type `SDDP.SpaghettiPlot(; stages, scenarios)`

Initialize a new `SpaghettiPlot` with `stages` stages and `scenarios` replications.

`SDDP.add_spaghetti`

— Function `SDDP.add_spaghetti(data_function::Function, plt::SpaghettiPlot; kwargs...)`

**Description**

Add a new figure to the SpaghettiPlot `plt`, where the y-value of the `scenario`th line when x = `stage` is given by `data_function(plt.simulations[scenario][stage])`.

**Keyword arguments**

- `xlabel`: set the x-axis label.
- `ylabel`: set the y-axis label.
- `title`: set the title of the plot.
- `ymin`: set the minimum y value.
- `ymax`: set the maximum y value.
- `cumulative`: plot the additive accumulation of the value across the stages.
- `interpolate`: interpolation method for lines between stages. Defaults to `"linear"`; see the d3 docs for all options.

**Examples**

```
simulations = simulate(model, 10)
plt = SDDP.spaghetti_plot(simulations)
SDDP.add_spaghetti(plt; title = "Stage objective") do data
    return data[:stage_objective]
end
```

`SDDP.publication_plot`

— Function```
SDDP.publication_plot(
    data_function, simulations;
    quantile = [0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0],
    kwargs...,
)
```

Create a `Plots.jl` recipe plot of the simulations.

See `Plots.jl` for the list of keyword arguments.

**Examples**

```
SDDP.publication_plot(simulations; title = "My title") do data
    return data[:stage_objective]
end
```

`SDDP.ValueFunction`

— Type `ValueFunction`

A representation of the value function. SDDP.jl uses the following unique representation of the value function that is undocumented in the literature.

It supports three types of state variables:

- `x`: convex "resource" states
- `b`: concave "belief" states
- `y`: concave "objective" states

In addition, we have three types of cuts:

- Single-cuts (also called "average" cuts in the literature), which involve the risk-adjusted expectation of the cost-to-go.
- Multi-cuts, which use a different cost-to-go term for each realization `w`.
- Risk-cuts, which correspond to the facets of the dual interpretation of a coherent risk measure.

Therefore, `ValueFunction` returns a JuMP model of the following form:

```
V(x, b, y) = min: μᵀb + νᵀy + θ
             s.t. # "Single" / "Average" cuts
                  μᵀb(j) + νᵀy(j) + θ >= α(j) + xᵀβ(j), ∀ j ∈ J
                  # "Multi" cuts
                  μᵀb(k) + νᵀy(k) + φ(w) >= α(k, w) + xᵀβ(k, w), ∀w ∈ Ω, k ∈ K
                  # "Risk-set" cuts
                  θ ≥ Σ{p(k, w) * φ(w)}_w - μᵀb(k) - νᵀy(k), ∀ k ∈ K
```
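A value function can be constructed from a trained model and then evaluated; a sketch, assuming a state named `volume`:

```
V = SDDP.ValueFunction(model; node = 1)
cost, subgradient = SDDP.evaluate(V, Dict(:volume => 1.0))
```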

`SDDP.evaluate`

— Method```
evaluate(
    V::ValueFunction,
    point::Dict{Union{Symbol,String},<:Real};
    objective_state = nothing,
    belief_state = nothing,
)
```

Evaluate the value function `V` at `point` in the state-space.

Returns a tuple containing the height of the function, and the subgradient w.r.t. the convex state-variables.

**Examples**

`evaluate(V, Dict(:volume => 1.0))`

If the state variable is constructed like `@variable(sp, volume[1:4] >= 0, SDDP.State, initial_value = 0.0)`, use `[i]` to index the state variable:

`evaluate(V, Dict(Symbol("volume[1]") => 1.0))`

You can also use strings or symbols for the keys.

`evaluate(V, Dict("volume[1]" => 1))`

`SDDP.plot`

— Function `plot(plt::SpaghettiPlot[, filename::String]; open::Bool = true)`

Write the SpaghettiPlot `plt` to `filename`. If `filename` is not given, it will be saved to a temporary directory. If `open = true`, then a browser window will be opened to display the resulting HTML file.

## Debugging the model

`SDDP.write_subproblem_to_file`

— Function```
write_subproblem_to_file(
    node::Node,
    filename::String;
    throw_error::Bool = false,
)
```

Write the subproblem contained in `node` to the file `filename`.

The `throw_error` argument is used internally by SDDP.jl. If set, an error will be thrown.

**Example**

`SDDP.write_subproblem_to_file(model[1], "subproblem_1.lp")`

`SDDP.deterministic_equivalent`

— Function```
deterministic_equivalent(
    pg::PolicyGraph{T},
    optimizer = nothing;
    time_limit::Union{Real,Nothing} = 60.0,
)
```

Form a JuMP model that represents the deterministic equivalent of the problem.

**Examples**

`deterministic_equivalent(model)`

`deterministic_equivalent(model, HiGHS.Optimizer)`

## StochOptFormat

`SDDP.write_to_file`

— Function```
write_to_file(
    model::PolicyGraph,
    filename::String;
    compression::MOI.FileFormats.AbstractCompressionScheme =
        MOI.FileFormats.AutomaticCompression(),
    kwargs...
)
```

Write `model` to `filename` in the StochOptFormat file format.

Pass an argument to `compression` to override the default of automatically detecting the file compression to use based on the extension of `filename`.

See `Base.write(::IO, ::PolicyGraph)` for information on the keyword arguments that can be provided.

This function is experimental. See the full warning in `Base.write(::IO, ::PolicyGraph)`.

**Examples**

`write_to_file(model, "my_model.sof.json"; validation_scenarios = 10)`

`SDDP.read_from_file`

— Function```
read_from_file(
    filename::String;
    compression::MOI.FileFormats.AbstractCompressionScheme =
        MOI.FileFormats.AutomaticCompression(),
    kwargs...
)::Tuple{PolicyGraph, ValidationScenarios}
```

Return a tuple containing a `PolicyGraph` object and a `ValidationScenarios` read from `filename` in the StochOptFormat file format.

Pass an argument to `compression` to override the default of automatically detecting the file compression to use based on the extension of `filename`.

See `Base.read(::IO, ::Type{PolicyGraph})` for information on the keyword arguments that can be provided.

This function is experimental. See the full warning in `Base.read(::IO, ::Type{PolicyGraph})`.

**Examples**

`model, validation_scenarios = read_from_file("my_model.sof.json")`

`Base.write`

— Method```
Base.write(
    io::IO,
    model::PolicyGraph;
    validation_scenarios::Union{Nothing,Int,ValidationScenarios} = nothing,
    sampling_scheme::AbstractSamplingScheme = InSampleMonteCarlo(),
    kwargs...
)
```

Write `model` to `io` in the StochOptFormat file format.

Pass an `Int` to `validation_scenarios` (default `nothing`) to specify the number of test scenarios to generate using the `sampling_scheme` sampling scheme. Alternatively, pass a `ValidationScenarios` object to manually specify the test scenarios to use.

Any additional `kwargs` passed to `write` will be stored in the top-level of the resulting StochOptFormat file. Valid arguments include `name`, `author`, `date`, and `description`.

**Compatibility**

THIS FUNCTION IS EXPERIMENTAL. THINGS MAY CHANGE BETWEEN COMMITS. YOU SHOULD NOT RELY ON THIS FUNCTIONALITY AS A LONG-TERM FILE FORMAT (YET).

In addition to potential changes to the underlying format, only a subset of possible modifications are supported. These include:

- `JuMP.fix`
- `JuMP.set_lower_bound`
- `JuMP.set_upper_bound`
- `JuMP.set_normalized_rhs`
- Changes to the constant or affine terms in a stage objective.

If your model uses something other than this, this function will silently write an incorrect formulation of the problem.

**Examples**

```
open("my_model.sof.json", "w") do io
    write(
        io,
        model;
        validation_scenarios = 10,
        name = "MyModel",
        author = "@odow",
        date = "2020-07-20",
        description = "Example problem for the SDDP.jl documentation",
    )
end
```

`Base.read`

— Method```
Base.read(
    io::IO,
    ::Type{PolicyGraph};
    bound::Float64 = 1e6,
)::Tuple{PolicyGraph,ValidationScenarios}
```

Return a tuple containing a `PolicyGraph` object and a `ValidationScenarios` read from `io` in the StochOptFormat file format.

See also: `evaluate`.

**Compatibility**

This function is experimental. Things may change between commits. You should not rely on this functionality as a long-term file format (yet).

In addition to potential changes to the underlying format, only a subset of possible modifications are supported. These include:

- Additive random variables in the constraints or in the objective
- Multiplicative random variables in the objective

If your model uses something other than this, this function may throw an error or silently build a non-convex model.

**Examples**

```
open("my_model.sof.json", "r") do io
    model, validation_scenarios = read(io, PolicyGraph)
end
```

`SDDP.evaluate`

— Method```
evaluate(
    model::PolicyGraph{T},
    validation_scenarios::ValidationScenarios{T,S},
) where {T,S}
```

Evaluate the performance of the policy contained in `model` after a call to `train` on the scenarios specified by `validation_scenarios`.

**Examples**

```
model, validation_scenarios = read_from_file("my_model.sof.json")
train(model; iteration_limit = 100)
simulations = evaluate(model, validation_scenarios)
```

`SDDP.ValidationScenarios`

— Type `ValidationScenarios{T,S}(scenarios::Vector{ValidationScenario{T,S}})`

An `AbstractSamplingScheme` based on a vector of scenarios.

Each scenario is a vector of `Tuple{T, S}` where the first element is the node to visit and the second element is the realization of the stagewise-independent noise term. Pass `nothing` if the node is deterministic.

`SDDP.ValidationScenario`

— Type `ValidationScenario{T,S}(scenario::Vector{Tuple{T,S}})`

A single scenario for testing.

See also: `ValidationScenarios`.