API Reference

This page lists the public API of SDDP.jl. Any functions in SDDP that are not listed here are considered part of the private API and may change in any future release.

Info

This page is a semi-structured list of the SDDP.jl API. For a more structured overview, read the How-to guides or Tutorial parts of this documentation.

Load SDDP using:

using SDDP

SDDP exports only @stageobjective. Therefore, all other calls must be prefixed with SDDP..

Graph

SDDP.GraphType
Graph(root_node::T) where T

Create an empty graph structure with the root node root_node.

Example

julia> graph = SDDP.Graph(0)
Root
 0
Nodes
 {}
Arcs
 {}

julia> graph = SDDP.Graph(:root)
Root
 root
Nodes
 {}
Arcs
 {}

julia> graph = SDDP.Graph((0, 0))
Root
 (0, 0)
Nodes
 {}
Arcs
 {}
source

add_node

SDDP.add_nodeFunction
add_node(graph::Graph{T}, node::T) where {T}

Add a node to the graph graph.

Examples

julia> graph = SDDP.Graph(:root);

julia> SDDP.add_node(graph, :A)

julia> graph
Root
 root
Nodes
 A
Arcs
 {}
julia> graph = SDDP.Graph(0);

julia> SDDP.add_node(graph, 2)

julia> graph
Root
 0
Nodes
 2
Arcs
 {}
source

add_edge

SDDP.add_edgeFunction
add_edge(graph::Graph{T}, edge::Pair{T, T}, probability::Float64) where {T}

Add an edge to the graph graph.

Examples

julia> graph = SDDP.Graph(0);

julia> SDDP.add_node(graph, 1)

julia> SDDP.add_edge(graph, 0 => 1, 0.9)

julia> graph
Root
 0
Nodes
 1
Arcs
 0 => 1 w.p. 0.9
julia> graph = SDDP.Graph(:root);

julia> SDDP.add_node(graph, :A)

julia> SDDP.add_edge(graph, :root => :A, 1.0)

julia> graph
Root
 root
Nodes
 A
Arcs
 root => A w.p. 1.0
source

add_ambiguity_set

SDDP.add_ambiguity_setFunction
add_ambiguity_set(
    graph::Graph{T},
    set::Vector{T},
    lipschitz::Vector{Float64},
) where {T}

Add set to the belief partition of graph.

lipschitz is a vector of Lipschitz constants, with one element for each node in set. The Lipschitz constant is the maximum slope of the cost-to-go function with respect to the belief state associated with each node at any point in the state-space.

Examples

julia> graph = SDDP.LinearGraph(3)
Root
 0
Nodes
 1
 2
 3
Arcs
 0 => 1 w.p. 1.0
 1 => 2 w.p. 1.0
 2 => 3 w.p. 1.0

julia> SDDP.add_ambiguity_set(graph, [1, 2], [1e3, 1e2])

julia> SDDP.add_ambiguity_set(graph, [3], [1e5])

julia> graph
Root
 0
Nodes
 1
 2
 3
Arcs
 0 => 1 w.p. 1.0
 1 => 2 w.p. 1.0
 2 => 3 w.p. 1.0
Partitions
 {1, 2}
 {3}
source
add_ambiguity_set(graph::Graph{T}, set::Vector{T}, lipschitz::Float64)

Add set to the belief partition of graph.

lipschitz is a Lipschitz constant for each node in set. The Lipschitz constant is the maximum slope of the cost-to-go function with respect to the belief state associated with each node at any point in the state-space.

Examples

julia> graph = SDDP.LinearGraph(3);

julia> SDDP.add_ambiguity_set(graph, [1, 2], 1e3)

julia> SDDP.add_ambiguity_set(graph, [3], 1e5)

julia> graph
Root
 0
Nodes
 1
 2
 3
Arcs
 0 => 1 w.p. 1.0
 1 => 2 w.p. 1.0
 2 => 3 w.p. 1.0
Partitions
 {1, 2}
 {3}
source

LinearGraph

SDDP.LinearGraphFunction
LinearGraph(stages::Int)

Create a linear graph with stages number of nodes.

Examples

julia> graph = SDDP.LinearGraph(3)
Root
 0
Nodes
 1
 2
 3
Arcs
 0 => 1 w.p. 1.0
 1 => 2 w.p. 1.0
 2 => 3 w.p. 1.0
source

MarkovianGraph

SDDP.MarkovianGraphFunction
MarkovianGraph(transition_matrices::Vector{Matrix{Float64}})

Construct a Markovian graph from the vector of transition matrices.

transition_matrices[t][i, j] gives the probability of transitioning from Markov state i in stage t - 1 to Markov state j in stage t.

The dimension of the first transition matrix should be (1, N), and transition_matrices[1][1, i] is the probability of transitioning from the root node to the Markov state i.

Examples

julia> graph = SDDP.MarkovianGraph([ones(1, 1), [0.5 0.5], [0.8 0.2; 0.2 0.8]])
Root
 (0, 1)
Nodes
 (1, 1)
 (2, 1)
 (2, 2)
 (3, 1)
 (3, 2)
Arcs
 (0, 1) => (1, 1) w.p. 1.0
 (1, 1) => (2, 1) w.p. 0.5
 (1, 1) => (2, 2) w.p. 0.5
 (2, 1) => (3, 1) w.p. 0.8
 (2, 1) => (3, 2) w.p. 0.2
 (2, 2) => (3, 1) w.p. 0.2
 (2, 2) => (3, 2) w.p. 0.8
source
MarkovianGraph(;
    stages::Int,
    transition_matrix::Matrix{Float64},
    root_node_transition::Vector{Float64},
)

Construct a Markovian graph object with stages number of stages and time-independent Markov transition probabilities.

transition_matrix must be a square matrix, and the probability of transitioning from Markov state i in stage t to Markov state j in stage t + 1 is given by transition_matrix[i, j].

root_node_transition[i] is the probability of transitioning from the root node to Markov state i in the first stage.

Examples

julia> graph = SDDP.MarkovianGraph(;
           stages = 3,
           transition_matrix = [0.8 0.2; 0.2 0.8],
           root_node_transition = [0.5, 0.5],
       )
Root
 (0, 1)
Nodes
 (1, 1)
 (1, 2)
 (2, 1)
 (2, 2)
 (3, 1)
 (3, 2)
Arcs
 (0, 1) => (1, 1) w.p. 0.5
 (0, 1) => (1, 2) w.p. 0.5
 (1, 1) => (2, 1) w.p. 0.8
 (1, 1) => (2, 2) w.p. 0.2
 (1, 2) => (2, 1) w.p. 0.2
 (1, 2) => (2, 2) w.p. 0.8
 (2, 1) => (3, 1) w.p. 0.8
 (2, 1) => (3, 2) w.p. 0.2
 (2, 2) => (3, 1) w.p. 0.2
 (2, 2) => (3, 2) w.p. 0.8
source
MarkovianGraph(
    simulator::Function;
    budget::Union{Int,Vector{Int}},
    scenarios::Int = 1000,
)

Construct a Markovian graph by fitting a Markov chain to scenarios generated by simulator().

budget is the total number of nodes in the resulting Markov chain. This can either be specified as a single Int, in which case we will attempt to intelligently distribute the nodes between stages. Alternatively, budget can be a Vector{Int}, which details the number of Markov states to have in each stage.

source

UnicyclicGraph

SDDP.UnicyclicGraphFunction
UnicyclicGraph(discount_factor::Float64; num_nodes::Int = 1)

Construct a graph composed of num_nodes nodes that form a single cycle, with a probability of discount_factor of continuing the cycle.

Examples

julia> graph = SDDP.UnicyclicGraph(0.9; num_nodes = 2)
Root
 0
Nodes
 1
 2
Arcs
 0 => 1 w.p. 1.0
 1 => 2 w.p. 1.0
 2 => 1 w.p. 0.9
source

LinearPolicyGraph

SDDP.LinearPolicyGraphFunction
LinearPolicyGraph(builder::Function; stages::Int, kwargs...)

Create a linear policy graph with stages number of stages.

Keyword arguments

  • stages: the number of stages in the graph

  • kwargs: other keyword arguments are passed to SDDP.PolicyGraph.

Examples

julia> SDDP.LinearPolicyGraph(; stages = 2, lower_bound = 0.0) do sp, t
    # ... build model ...
end
A policy graph with 2 nodes.
Node indices: 1, 2

is equivalent to

julia> graph = SDDP.LinearGraph(2);

julia> SDDP.PolicyGraph(graph; lower_bound = 0.0) do sp, t
    # ... build model ...
end
A policy graph with 2 nodes.
Node indices: 1, 2
source

MarkovianPolicyGraph

SDDP.MarkovianPolicyGraphFunction
MarkovianPolicyGraph(
    builder::Function;
    transition_matrices::Vector{Array{Float64,2}},
    kwargs...
)

Create a Markovian policy graph based on the transition matrices given in transition_matrices.

Keyword arguments

  • transition_matrices[t][i, j] gives the probability of transitioning from Markov state i in stage t - 1 to Markov state j in stage t. The dimension of the first transition matrix should be (1, N), and transition_matrices[1][1, i] is the probability of transitioning from the root node to the Markov state i.

  • kwargs: other keyword arguments are passed to SDDP.PolicyGraph.

See also

See SDDP.MarkovianGraph for other ways of specifying a Markovian policy graph.

See SDDP.PolicyGraph for the other keyword arguments.

Examples

julia> SDDP.MarkovianPolicyGraph(;
           transition_matrices = [ones(1, 1), [0.5 0.5], [0.8 0.2; 0.2 0.8]],
           lower_bound = 0.0,
       ) do sp, node
           # ... build model ...
       end
A policy graph with 5 nodes.
 Node indices: (1, 1), (2, 1), (2, 2), (3, 1), (3, 2)

is equivalent to

julia> graph = SDDP.MarkovianGraph([ones(1, 1), [0.5 0.5], [0.8 0.2; 0.2 0.8]]);

julia> SDDP.PolicyGraph(graph; lower_bound = 0.0) do sp, t
    # ... build model ...
end
A policy graph with 5 nodes.
 Node indices: (1, 1), (2, 1), (2, 2), (3, 1), (3, 2)
source

PolicyGraph

SDDP.PolicyGraphType
PolicyGraph(
    builder::Function,
    graph::Graph{T};
    sense::Symbol = :Min,
    lower_bound = -Inf,
    upper_bound = Inf,
    optimizer = nothing,
) where {T}

Construct a policy graph based on the graph structure of graph. (See SDDP.Graph for details.)

Keyword arguments

  • sense: whether we are minimizing (:Min) or maximizing (:Max).

  • lower_bound: if minimizing, a valid lower bound for the cost to go in all subproblems.

  • upper_bound: if maximizing, a valid upper bound for the value to go in all subproblems.

  • optimizer: the optimizer to use for each of the subproblems

Examples

function builder(subproblem::JuMP.Model, index)
    # ... subproblem definition ...
end

model = PolicyGraph(
    builder,
    graph;
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
)

Or, using the Julia do ... end syntax:

model = PolicyGraph(
    graph;
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do subproblem, index
    # ... subproblem definitions ...
end
source

@stageobjective

SDDP.@stageobjectiveMacro
@stageobjective(subproblem, expr)

Set the stage-objective of subproblem to expr.

Examples

@stageobjective(subproblem, 2x + y)
source

parameterize

SDDP.parameterizeFunction
parameterize(
    modify::Function,
    subproblem::JuMP.Model,
    realizations::Vector{T},
    probability::Vector{Float64} = fill(1.0 / length(realizations), length(realizations))
) where {T}

Add a parameterization function modify to subproblem. The modify function takes one argument and modifies subproblem based on the realization of the noise sampled from realizations with corresponding probabilities probability.

In order to conduct an out-of-sample simulation, modify should accept arguments that are not in realizations (but still of type T).

Examples

SDDP.parameterize(subproblem, [1, 2, 3], [0.4, 0.3, 0.3]) do ω
    JuMP.set_upper_bound(x, ω)
end
source
parameterize(node::Node, noise)

Parameterize node node with the noise noise.

source

add_objective_state

SDDP.add_objective_stateFunction
add_objective_state(update::Function, subproblem::JuMP.Model; kwargs...)

Add an objective state variable to subproblem.

Required kwargs are:

  • initial_value: The initial value of the objective state variable at the root node.
  • lipschitz: The Lipschitz constant of the objective state variable.

Setting a tight value for the Lipschitz constant can significantly improve the speed of convergence.

Optional kwargs are:

  • lower_bound: A valid lower bound for the objective state variable. Can be -Inf.
  • upper_bound: A valid upper bound for the objective state variable. Can be +Inf.

Setting tight values for these optional keyword arguments can significantly improve the speed of convergence.

If the objective state is N-dimensional, each keyword argument must be an NTuple{N,Float64}. For example, initial_value = (0.0, 1.0).
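
Example

A minimal sketch of how an objective state is typically registered inside a node and queried via SDDP.objective_state when setting the stage objective. The names fuel_cost and thermal_generation, the ω.fuel field, and the noise vector Ω are illustrative assumptions, not part of the API:

SDDP.add_objective_state(
    subproblem;
    initial_value = 50.0,
    lipschitz = 10_000.0,
    lower_bound = 50.0,
    upper_bound = 150.0,
) do fuel_cost, ω
    # Return the updated objective state, given the incoming value and the noise ω.
    return ω.fuel * fuel_cost
end

SDDP.parameterize(subproblem, Ω) do ω
    # Query the current value of the objective state.
    fuel_cost = SDDP.objective_state(subproblem)
    @stageobjective(subproblem, fuel_cost * thermal_generation)
end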

source

objective_state

Noise

SDDP.NoiseType
Noise(support, probability)

An atom of a discrete random variable at the point of support support and associated probability probability.

source

numerical_stability_report

SDDP.numerical_stability_reportFunction
numerical_stability_report(
    [io::IO = stdout,]
    model::PolicyGraph;
    by_node::Bool = false,
    print::Bool = true,
    warn::Bool = true,
)

Print a report identifying possible numeric stability issues.

Keyword arguments

  • If by_node, print a report for each node in the graph.

  • If print, print to io.

  • If warn, warn if the coefficients may cause numerical issues.
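
Example

For example, to print the default report for a model, and to write a per-node report to a file (the filename is illustrative):

SDDP.numerical_stability_report(model)

open("report.txt", "w") do io
    SDDP.numerical_stability_report(io, model; by_node = true)
end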

source

train

SDDP.trainFunction
SDDP.train(model::PolicyGraph; kwargs...)

Train the policy for model.

Keyword arguments

  • iteration_limit::Int: number of iterations to conduct before termination.

  • time_limit::Float64: number of seconds to train before termination.

  • stopping_rules: a vector of SDDP.AbstractStoppingRules. Defaults to SimulationStoppingRule.

  • print_level::Int: control the level of printing to the screen. Defaults to 1. Set to 0 to disable all printing.

  • log_file::String: filepath at which to write a log of the training progress. Defaults to SDDP.log.

  • log_frequency::Int: control the frequency with which the logging is outputted (iterations/log). It must be at least 1. Defaults to 1.

  • log_every_seconds::Float64: control the frequency with which the logging is outputted (seconds/log). Defaults to 0.0.

  • log_every_iteration::Bool: overrides log_frequency and log_every_seconds to force every iteration to be printed. Defaults to false.

  • run_numerical_stability_report::Bool: generate (and print) a numerical stability report prior to solve. Defaults to true.

  • refine_at_similar_nodes::Bool: if SDDP can detect that two nodes have the same children, it can cheaply add a cut discovered at one to the other. In almost all cases this should be set to true.

  • cut_deletion_minimum::Int: the minimum number of cuts to cache before deleting cuts from the subproblem. The impact on performance is solver specific; however, smaller values result in smaller subproblems (and therefore quicker solves), at the expense of more time spent performing cut selection.

  • risk_measure: the risk measure to use at each node. Defaults to Expectation.

  • root_node_risk_measure::AbstractRiskMeasure: the risk measure to use at the root node when computing the Bound column. Note that the choice of this option does not change the primal policy, and it applies only if the transition from the root node to the first stage is stochastic. Defaults to Expectation.

  • sampling_scheme: a sampling scheme to use on the forward pass of the algorithm. Defaults to InSampleMonteCarlo.

  • backward_sampling_scheme: a backward pass sampling scheme to use on the backward pass of the algorithm. Defaults to CompleteSampler.

  • cut_type: choose between SDDP.SINGLE_CUT and SDDP.MULTI_CUT versions of SDDP.

  • dashboard::Bool: open a visualization of the training over time. Defaults to false.

  • parallel_scheme::AbstractParallelScheme: specify a scheme for solving in parallel. Defaults to Threaded().

  • forward_pass::AbstractForwardPass: specify a scheme to use for the forward passes.

  • forward_pass_resampling_probability::Union{Nothing,Float64}: set to a value in (0, 1) to enable RiskAdjustedForwardPass. Defaults to nothing (disabled).

  • add_to_existing_cuts::Bool: set to true to allow training a model that was previously trained. Defaults to false.

  • duality_handler::AbstractDualityHandler: specify a duality handler to use when creating cuts.

  • post_iteration_callback::Function: a callback with the signature post_iteration_callback(::IterationResult) that is evaluated after each iteration of the algorithm.

There is also a special option for infinite horizon problems

  • cycle_discretization_delta: the maximum distance between states allowed on the forward pass. This is for advanced users only and needs to be used in conjunction with a different sampling_scheme.
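
Example

A minimal sketch, assuming model is an SDDP.PolicyGraph; the limits below are illustrative:

SDDP.train(
    model;
    iteration_limit = 100,
    time_limit = 60.0,
    log_every_seconds = 5.0,
)
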
source

termination_status

write_cuts_to_file

SDDP.write_cuts_to_fileFunction
write_cuts_to_file(
    model::PolicyGraph{T},
    filename::String;
    kwargs...,
) where {T}

Write the cuts that form the policy in model to filename in JSON format.

Keyword arguments

  • node_name_parser is a function which converts the name of each node into a string representation. It has the signature: node_name_parser(::T)::String.

  • write_only_selected_cuts: write only the selected cuts to the JSON file. Defaults to false.

See also SDDP.read_cuts_from_file.
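
Examples

A minimal round-trip sketch, assuming model has already been trained; the filename is illustrative:

SDDP.write_cuts_to_file(model, "cuts.json")

To load the cuts into a model with the same graph structure, for example in a new Julia session, use SDDP.read_cuts_from_file:

SDDP.read_cuts_from_file(model, "cuts.json")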

source

read_cuts_from_file

SDDP.read_cuts_from_fileFunction
read_cuts_from_file(
    model::PolicyGraph{T},
    filename::String;
    kwargs...,
) where {T}

Read cuts (saved using SDDP.write_cuts_to_file) from filename into model.

Since T can be an arbitrary Julia type, the conversion to JSON is lossy. When reading, read_cuts_from_file only supports T=Int, T=NTuple{N, Int}, and T=Symbol. If you have manually created a policy graph with a different node type T, provide a function node_name_parser as described below.

Keyword arguments

  • node_name_parser: a function node_name_parser(T, name::String)::T that returns the node corresponding to the string name. If node_name_parser returns nothing, those cuts are skipped.

  • cut_selection::Bool: whether to run the cut selection algorithm when adding the cuts to the model.

See also SDDP.write_cuts_to_file.

source

write_log_to_csv

SDDP.write_log_to_csvFunction
write_log_to_csv(model::PolicyGraph, filename::String)

Write the log of the most recent training to a csv for post-analysis.

Assumes that the model has been trained via SDDP.train.
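
Example

For example, after training, write the log to a file (the iteration limit and filename are illustrative):

SDDP.train(model; iteration_limit = 10)

SDDP.write_log_to_csv(model, "log.csv")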

source

set_numerical_difficulty_callback

SDDP.set_numerical_difficulty_callbackFunction
set_numerical_difficulty_callback(
    model::PolicyGraph,
    callback::Function,
)

Set a callback function callback(::PolicyGraph, ::Node; require_dual::Bool) that is run when the optimizer terminates without finding a primal solution (and dual solution if require_dual is true).

Default callback

The default callback is a small variation of:

function callback(::PolicyGraph, node::Node; require_dual::Bool)
    MOI.Utilities.reset_optimizer(node.subproblem)
    optimize!(node.subproblem)
    return
end

This callback is the default because a common issue is solvers declaring the problem infeasible because of numerical issues related to the large number of cutting planes. Resetting the subproblem, and therefore starting from a fresh problem instead of warm-starting from the previous solution, is often enough to fix the problem and allow more iterations.

Other callbacks

In cases where the problem is truly infeasible (not because of numerical issues), it may be helpful to write out the irreducible infeasible subsystem (IIS) for debugging. For this use-case, use a callback as follows:

function callback(::PolicyGraph, node::Node; require_dual::Bool)
    JuMP.compute_conflict!(node.subproblem)
    status = JuMP.get_attribute(node.subproblem, MOI.ConflictStatus())
    if status == MOI.CONFLICT_FOUND
        iis_model, _ = JuMP.copy_conflict(node.subproblem)
        print(iis_model)
    end
    return
end
SDDP.set_numerical_difficulty_callback(model, callback)
source

AbstractStoppingRule

stopping_rule_status

convergence_test

SDDP.convergence_testFunction
convergence_test(
    model::PolicyGraph,
    log::Vector{Log},
    ::AbstractStoppingRule,
)::Bool

Return a Bool indicating if the algorithm should terminate the training.
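
Example

A minimal sketch of a custom stopping rule built on this interface together with SDDP.stopping_rule_status. The bound field of the log entries is an assumption made for illustration; adapt it to the information you need:

struct MyBoundRule <: SDDP.AbstractStoppingRule
    target_bound::Float64
end

SDDP.stopping_rule_status(::MyBoundRule) = :my_bound_rule

function SDDP.convergence_test(
    ::SDDP.PolicyGraph,
    log::Vector{SDDP.Log},
    rule::MyBoundRule,
)
    # Terminate once the deterministic bound reaches the target.
    return log[end].bound >= rule.target_bound
end

SDDP.train(model; stopping_rules = [MyBoundRule(100.0)])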

source

IterationLimit

TimeLimit

SDDP.TimeLimitType
TimeLimit(limit::Float64)

Terminate the algorithm after limit seconds of computation.

source

Statistical

SDDP.StatisticalType
Statistical(;
    num_replications::Int,
    iteration_period::Int = 1,
    z_score::Float64 = 1.96,
    verbose::Bool = true,
    disable_warning::Bool = false,
)

Perform an in-sample Monte Carlo simulation of the policy with num_replications replications every iteration_period iterations and terminate if the deterministic bound (lower if minimizing) falls into the confidence interval for the mean of the simulated cost.

If verbose = true, print the confidence interval.

If disable_warning = true, disable the warning telling you not to use this stopping rule (see below).

Why this stopping rule is not good

This stopping rule is one of the most common stopping rules seen in the literature. Don't follow the crowd. It is a poor choice for your model, and should be rarely used. Instead, you should use the default stopping rule, or use a fixed limit like a time or iteration limit.

To understand why this stopping rule is a bad idea, assume we have conducted num_replications simulations and the objectives are in a vector objectives::Vector{Float64}.

Our mean is μ = mean(objectives) and the half-width of the confidence interval is w = z_score * std(objectives) / sqrt(num_replications).

Many papers suggest terminating the algorithm once the deterministic bound (lower if minimizing, upper if maximizing) is contained within the confidence interval. That is, if μ - w <= bound <= μ + w. Even worse, some papers define an optimization gap of (μ + w) / bound (if minimizing) or (μ - w) / bound (if maximizing), and they terminate once the gap is less than a value like 1%.

Both of these approaches are misleading, and more often than not, they will result in terminating with a sub-optimal policy that performs worse than expected. There are two main reasons for this:

  1. The half-width depends on the number of replications. To reduce the computational cost, users are often tempted to choose a small number of replications. This increases the half-width and makes it more likely that the algorithm will stop early. But if we choose a large number of replications, then the computational cost is high, and we would have been better off to run a fixed number of iterations and use that computational time to run extra training iterations.
  2. The confidence interval assumes that the simulated values are normally distributed. In infinite horizon models, this is almost never the case. The distribution is usually closer to exponential or log-normal.

There is a third, more technical reason which relates to the conditional dependence of constructing multiple confidence intervals.

The default value of z_score = 1.96 corresponds to a 95% confidence interval. You should interpret the interval as "if we re-run this simulation 100 times, then the true mean will lie in the confidence interval 95 times out of 100." But if the bound is within the confidence interval, then we know the true mean cannot be better than the bound. Therefore, there is a more than 95% chance that the mean is within the interval.

A separate problem arises if we simulate, find that the bound is outside the confidence interval, keep training, and then re-simulate to compute a new confidence interval. Because we will terminate when the bound enters the confidence interval, the repeated construction of a confidence interval means that the unconditional probability that we terminate with a false positive is larger than 5% (there are now more chances that the sample mean is optimistic and that the confidence interval includes the bound but not the true mean). One fix is to simulate with a sequentially increasing number of replicates, so that the unconditional probability stays at 95%, but this runs into the problem of computational cost. For more information on sequential sampling, see, for example, Güzin Bayraksan, David P. Morton, (2011) A Sequential Sampling Procedure for Stochastic Programming. Operations Research 59(4):898-913.

source

BoundStalling

SDDP.BoundStallingType
BoundStalling(num_previous_iterations::Int, tolerance::Float64)

Terminate the algorithm once the deterministic bound (lower if minimizing, upper if maximizing) fails to improve by more than tolerance in absolute terms for more than num_previous_iterations consecutive iterations, provided it has improved relative to the bound after the first iteration.

Checking for an improvement relative to the first iteration avoids early termination in a situation where the bound fails to improve for the first N iterations. This frequently happens in models with a large number of stages, where it takes time for the cuts to propagate backward enough to modify the bound of the root node.
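
Example

For example, to terminate once the bound has failed to improve by more than 1e-4 for 10 consecutive iterations:

SDDP.train(model; stopping_rules = [SDDP.BoundStalling(10, 1e-4)])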

source

StoppingChain

SDDP.StoppingChainType
StoppingChain(rules::AbstractStoppingRule...)

Terminate once all of the rules are satisfied.

This stopping rule short-circuits, so subsequent rules are only tested if the previous rules pass.

Examples

A stopping rule that runs 100 iterations, then checks for the bound stalling:

StoppingChain(IterationLimit(100), BoundStalling(5, 0.1))
source

SimulationStoppingRule

SDDP.SimulationStoppingRuleType
SimulationStoppingRule(;
    sampling_scheme::AbstractSamplingScheme = SDDP.InSampleMonteCarlo(),
    replications::Int = -1,
    period::Int = -1,
    distance_tol::Float64 = 1e-2,
    bound_tol::Float64 = 1e-4,
)

Terminate the algorithm using a mix of heuristics. Unless you know otherwise, this is typically a good default.

Termination criteria

First, we check that the deterministic bound has stabilized. That is, over the last five iterations, the deterministic bound has changed by less than an absolute or relative tolerance of bound_tol.

Then, if we have not done one in the last period iterations, we perform a primal simulation of the policy using replications out-of-sample realizations from sampling_scheme. The realizations are stored and re-used in each simulation. From each simulation, we record the value of the stage objective. We terminate the training if each of the trajectories in two consecutive simulations differs by less than distance_tol.

By default, replications and period are -1, and SDDP.jl will guess good values for these. Over-ride the default behavior by setting an appropriate value.

Example

SDDP.train(model; stopping_rules = [SimulationStoppingRule()])
source

FirstStageStoppingRule

SDDP.FirstStageStoppingRuleType
FirstStageStoppingRule(; atol::Float64 = 1e-3, iterations::Int = 50)

Terminate the algorithm when the outgoing values of the first-stage state variables have not changed by more than atol for iterations number of consecutive iterations.

Example

SDDP.train(model; stopping_rules = [FirstStageStoppingRule()])
source

AbstractSamplingScheme

sample_scenario

SDDP.sample_scenarioFunction
sample_scenario(graph::PolicyGraph{T}, ::AbstractSamplingScheme) where {T}

Sample a scenario from the policy graph graph based on the sampling scheme.

Returns ::Tuple{Vector{Tuple{T, <:Any}}, Bool}, where the first element is the scenario, and the second element is a Boolean flag indicating if the scenario was terminated due to the detection of a cycle.

The scenario is a list of tuples (type Vector{Tuple{T, <:Any}}) where the first component of each tuple is the index of the node, and the second component is the stagewise-independent noise term observed in that node.

source

InSampleMonteCarlo

SDDP.InSampleMonteCarloType
InSampleMonteCarlo(;
    max_depth::Int = 0,
    terminate_on_cycle::Bool = false,
    terminate_on_dummy_leaf::Bool = true,
    rollout_limit::Function = (i::Int) -> typemax(Int),
    initial_node::Any = nothing,
)

A Monte Carlo sampling scheme using the in-sample data from the policy graph definition.

If terminate_on_cycle, terminate the forward pass once a cycle is detected. If max_depth > 0, return once max_depth nodes have been sampled. If terminate_on_dummy_leaf, terminate the forward pass with probability equal to 1 minus the probability of sampling a child node.

Note that if terminate_on_cycle = false and terminate_on_dummy_leaf = false then max_depth must be set > 0.

Control which node the trajectories start from using initial_node. If it is left as nothing, the root node is used as the starting node.

You can use rollout_limit to set iteration specific depth limits. For example:

InSampleMonteCarlo(rollout_limit = i -> 2 * i)
source

OutOfSampleMonteCarlo

SDDP.OutOfSampleMonteCarloType
OutOfSampleMonteCarlo(
    f::Function,
    graph::PolicyGraph;
    use_insample_transition::Bool = false,
    max_depth::Int = 0,
    terminate_on_cycle::Bool = false,
    terminate_on_dummy_leaf::Bool = true,
    rollout_limit::Function = i -> typemax(Int),
    initial_node = nothing,
)

Create a Monte Carlo sampler using out-of-sample probabilities and/or supports for the stagewise-independent noise terms, and out-of-sample probabilities for the node-transition matrix.

f is a function that takes the name of a node and returns a tuple containing a vector of new SDDP.Noise terms for the children of that node, and a vector of new SDDP.Noise terms for the stagewise-independent noise.

If f is called with the name of the root node (e.g., 0 in a linear policy graph, (0, 1) in a Markovian Policy Graph), then return a vector of SDDP.Noise for the children of the root node.

If use_insample_transition, the in-sample transition probabilities will be used. Therefore, f should only return a vector of the stagewise-independent noise terms, and f will not be called for the root node.

If terminate_on_cycle, terminate the forward pass once a cycle is detected. If max_depth > 0, return once max_depth nodes have been sampled. If terminate_on_dummy_leaf, terminate the forward pass with probability equal to 1 minus the probability of sampling a child node.

Note that if terminate_on_cycle = false and terminate_on_dummy_leaf = false then max_depth must be set > 0.

Control which node the trajectories start from using initial_node. If it is left as nothing, the root node is used as the starting node.

If a node is deterministic, pass [SDDP.Noise(nothing, 1.0)] as the vector of noise terms.

You can use rollout_limit to set iteration specific depth limits. For example:

OutOfSampleMonteCarlo(rollout_limit = i -> 2 * i)

Examples

Given a linear policy graph graph with T stages:

sampler = OutOfSampleMonteCarlo(graph) do node
    if node == 0
        return [SDDP.Noise(1, 1.0)]
    else
        noise_terms = [SDDP.Noise(node, 0.3), SDDP.Noise(node + 1, 0.7)]
        children = node < T ? [SDDP.Noise(node + 1, 0.9)] : SDDP.Noise{Int}[]
        return children, noise_terms
    end
end

Given a linear policy graph graph with T stages:

sampler = OutOfSampleMonteCarlo(graph, use_insample_transition=true) do node
    return [SDDP.Noise(node, 0.3), SDDP.Noise(node + 1, 0.7)]
end
source

Historical

SDDP.HistoricalType
Historical(
    scenarios::Vector{Vector{Tuple{T,S}}},
    probability::Vector{Float64};
    terminate_on_cycle::Bool = false,
) where {T,S}

A sampling scheme that samples a scenario from the vector of scenarios scenarios according to probability.

Examples

Historical(
    [
        [(1, 0.5), (2, 1.0), (3, 0.5)],
        [(1, 0.5), (2, 0.0), (3, 1.0)],
        [(1, 1.0), (2, 0.0), (3, 0.0)]
    ],
    [0.2, 0.5, 0.3],
)
source
Historical(
    scenarios::Vector{Vector{Tuple{T,S}}};
    terminate_on_cycle::Bool = false,
) where {T,S}

A deterministic sampling scheme that iterates through the vector of provided scenarios.

Examples

Historical([
    [(1, 0.5), (2, 1.0), (3, 0.5)],
    [(1, 0.5), (2, 0.0), (3, 1.0)],
    [(1, 1.0), (2, 0.0), (3, 0.0)],
])
source
Historical(
    scenario::Vector{Tuple{T,S}};
    terminate_on_cycle::Bool = false,
) where {T,S}

A deterministic sampling scheme that always samples scenario.

Examples

Historical([(1, 0.5), (2, 1.5), (3, 0.75)])
source

PSRSamplingScheme

SDDP.PSRSamplingSchemeType
PSRSamplingScheme(N::Int; sampling_scheme = InSampleMonteCarlo())

A sampling scheme with N scenarios, similar to how PSR does it.

source

SimulatorSamplingScheme

SDDP.SimulatorSamplingSchemeType
SimulatorSamplingScheme(simulator::Function)

Create a sampling scheme based on a univariate scenario generator simulator, which returns a Vector{Float64} when called with no arguments like simulator().

This sampling scheme must be used with a Markovian graph constructed from the same simulator.

The sample space for SDDP.parameterize must be a tuple with one or two values, in which the first value is the Markov state and the second value is the random variable for the current node. If the node is deterministic, use Ω = [(markov_state,)].

This sampling scheme generates a new scenario by calling simulator(), and then picking the sequence of nodes in the Markovian graph that is closest to the new trajectory.

Example

julia> using SDDP

julia> import HiGHS

julia> simulator() = cumsum(rand(10))
simulator (generic function with 1 method)

julia> model = SDDP.PolicyGraph(
           SDDP.MarkovianGraph(simulator; budget = 20, scenarios = 100);
           sense = :Max,
           upper_bound = 12,
           optimizer = HiGHS.Optimizer,
       ) do sp, node
           t, markov_state = node
           @variable(sp, x >= 0, SDDP.State, initial_value = 1)
           @variable(sp, u >= 0)
           @constraint(sp, x.out == x.in - u)
           # Elements of Ω MUST be a tuple in which `markov_state` is the first
           # element.
           Ω = [(markov_state, (u = u_max,)) for u_max in (0.0, 0.5)]
           SDDP.parameterize(sp, Ω) do (markov_state, ω)
               set_upper_bound(u, ω.u)
               @stageobjective(sp, markov_state * u)
           end
       end;

julia> SDDP.train(
           model;
           print_level = 0,
           iteration_limit = 10,
           sampling_scheme = SDDP.SimulatorSamplingScheme(simulator),
       )
source

AbstractParallelScheme

Serial

Threaded

SDDP.ThreadedType
Threaded()

Run SDDP in multi-threaded mode.

Use julia --threads N to start Julia with N threads. In most cases, you should pick N to be the number of physical cores on your machine.

Danger

This plug-in is experimental, and parts of SDDP.jl may not be threadsafe. If you encounter any problems or crashes, please open a GitHub issue.

Example

SDDP.train(model; parallel_scheme = SDDP.Threaded())
SDDP.simulate(model; parallel_scheme = SDDP.Threaded())
source

Asynchronous

SDDP.AsynchronousType
Asynchronous(
    [init_callback::Function,]
    slave_pids::Vector{Int} = workers();
    use_master::Bool = true,
)

Run SDDP in asynchronous mode on the workers with pids slave_pids.

After initializing the models on each worker, call init_callback(model). Note that init_callback is run locally on the worker and not on the master thread.

If use_master is true, iterations are also conducted on the master process.
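
Example

A minimal sketch, assuming four worker processes and that model was built with an optimizer such as HiGHS.Optimizer; adapt both to your setup:

using Distributed

Distributed.addprocs(4)

@everywhere using SDDP, HiGHS

SDDP.train(model; parallel_scheme = SDDP.Asynchronous())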

source
Asynchronous(
    solver::Any,
    slave_pids::Vector{Int} = workers();
    use_master::Bool = true,
)

Run SDDP in asynchronous mode on the workers with pids slave_pids.

Set the optimizer on each worker by calling JuMP.set_optimizer(model, solver).

source

AbstractForwardPass

DefaultForwardPass

SDDP.DefaultForwardPassType
DefaultForwardPass(; include_last_node::Bool = true)

The default forward pass.

If include_last_node = false and the sample terminated due to a cycle, then the last node (which forms the cycle) is omitted. This can be a useful option to set when training, but it comes at the cost of not knowing which node formed the cycle (if there are multiple possibilities).

source

RevisitingForwardPass

SDDP.RevisitingForwardPassType
RevisitingForwardPass(
    period::Int = 500;
    sub_pass::AbstractForwardPass = DefaultForwardPass(),
)

A forward pass scheme that generates period new forward passes (using sub_pass), then revisits all previously explored forward passes. This can be useful to encourage convergence at a diversity of points in the state-space.

Set period = typemax(Int) to disable.

For example, if period = 2, then the forward passes will be revisited as follows: 1, 2, 1, 2, 3, 4, 1, 2, 3, 4, 5, 6, 1, 2, ....

source

RiskAdjustedForwardPass

SDDP.RiskAdjustedForwardPassType
RiskAdjustedForwardPass(;
    forward_pass::AbstractForwardPass,
    risk_measure::AbstractRiskMeasure,
    resampling_probability::Float64,
    rejection_count::Int = 5,
)

A forward pass that resamples a previous forward pass with resampling_probability probability, and otherwise samples a new forward pass using forward_pass.

The forward pass to revisit is chosen based on the risk-adjusted (using risk_measure) probability of the cumulative stage objectives.

Note that this objective corresponds to the first time we visited the trajectory. Subsequent visits may have improved things, but we don't have the mechanisms in-place to update it. Therefore, remove the forward pass from resampling consideration after rejection_count revisits.

source

AlternativeForwardPass

SDDP.AlternativeForwardPassType
AlternativeForwardPass(
    forward_model::SDDP.PolicyGraph{T};
    forward_pass::AbstractForwardPass = DefaultForwardPass(),
)

A forward pass that simulates using forward_model, which may be different to the model used in the backward pass.

When using this forward pass, you should almost always pass SDDP.AlternativePostIterationCallback to the post_iteration_callback argument of SDDP.train.

This forward pass is most useful when the forward_model is non-convex and we use a convex approximation of the model in the backward pass.

For example, in optimal power flow models, we can use an AC-OPF formulation as the forward_model and a DC-OPF formulation as the backward model.

For more details see the paper:

Rosemberg, A., and Street, A., and Garcia, J.D., and Valladão, D.M., and Silva, T., and Dowson, O. (2021). Assessing the cost of network simplifications in long-term hydrothermal dispatch planning models. IEEE Transactions on Sustainable Energy. 13(1), 196-206.

source

AlternativePostIterationCallback

RegularizedForwardPass

SDDP.RegularizedForwardPassType
RegularizedForwardPass(;
    rho::Float64 = 0.05,
    forward_pass::AbstractForwardPass = DefaultForwardPass(),
)

A forward pass that regularizes the outgoing first-stage state variables with an L-infty trust-region constraint about the previous iteration's solution. Specifically, the bounds of the outgoing state variable x are updated from (l, u) to max(l, x^k - rho * (u - l)) <= x <= min(u, x^k + rho * (u - l)), where x^k is the optimal solution of x in the previous iteration. On the first iteration, the value of the state at the root node is used.

By default, rho is set to 5%, which seems to work well empirically.

Pass a different forward_pass to control the forward pass within the regularized forward pass.

This forward pass is largely intended to be used for investment problems in which the first stage makes a series of capacity decisions that then influence the rest of the graph. An error is thrown if the first stage problem is not deterministic, and states are silently skipped if they do not have finite bounds.
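
Example

For example, to widen the trust region to 10% of each state variable's bound range (the value is illustrative):

SDDP.train(model; forward_pass = SDDP.RegularizedForwardPass(; rho = 0.1))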

source

AbstractRiskMeasure

adjust_probability

SDDP.adjust_probabilityFunction
adjust_probability(
    measure::Expectation,
    risk_adjusted_probability::Vector{Float64},
    original_probability::Vector{Float64},
    noise_support::Vector{Noise{T}},
    objective_realizations::Vector{Float64},
    is_minimization::Bool,
) where {T}
source

Expectation

SDDP.ExpectationType
Expectation()

The Expectation risk measure.

This risk measure is identical to taking the expectation with respect to the nominal distribution.

Example

julia> risk_adjusted_probability = zeros(4);

julia> SDDP.adjust_probability(
           SDDP.Expectation(),
           risk_adjusted_probability,
           [0.1, 0.2, 0.3, 0.4],  # nominal_probability,
           SDDP.Noise.([1, 2, 3, 4], [0.1, 0.2, 0.3, 0.4]),  # noise_supports,
           [5.0, 4.0, 6.0, 2.0],  # cost_realizations,
           true,                  # is_minimization
       )
0.0

julia> risk_adjusted_probability
4-element Vector{Float64}:
 0.1
 0.2
 0.3
 0.4
source

WorstCase

SDDP.WorstCaseType
WorstCase()

The worst-case risk measure.

This risk measure places all of the probability weight on the worst outcome.

Example

julia> risk_adjusted_probability = zeros(4);

julia> SDDP.adjust_probability(
           SDDP.WorstCase(),
           risk_adjusted_probability,
           [0.1, 0.2, 0.3, 0.4],  # nominal_probability,
           SDDP.Noise.([1, 2, 3, 4], [0.1, 0.2, 0.3, 0.4]),  # noise_supports,
           [5.0, 4.0, 6.0, 2.0],  # cost_realizations,
           true,                  # is_minimization
       )
0.0

julia> risk_adjusted_probability
4-element Vector{Float64}:
 0.0
 0.0
 1.0
 0.0
source

AVaR

SDDP.AVaRType
AVaR(β)

The average value at risk (AV@R) risk measure.

This risk measure computes the expectation of the β fraction of worst outcomes. β must be in [0, 1].

When β=1, this is equivalent to the Expectation risk measure. When β=0, this is equivalent to the WorstCase risk measure.

AV@R is also known as the conditional value at risk (CV@R) or expected shortfall.

Example

julia> risk_adjusted_probability = zeros(4);

julia> SDDP.adjust_probability(
           SDDP.AVaR(0.5),
           risk_adjusted_probability,
           [0.1, 0.2, 0.3, 0.4],  # nominal_probability,
           SDDP.Noise.([1, 2, 3, 4], [0.1, 0.2, 0.3, 0.4]),  # noise_supports,
           [5.0, 4.0, 6.0, 2.0],  # cost_realizations,
           true,                  # is_minimization
       )
0.0

julia> risk_adjusted_probability
4-element Vector{Float64}:
 0.2
 0.19999999999999996
 0.6
 0.0

julia> SDDP.adjust_probability(
           SDDP.AVaR(1.0),
           risk_adjusted_probability,
           [0.1, 0.2, 0.3, 0.4],  # nominal_probability,
           SDDP.Noise.([1, 2, 3, 4], [0.1, 0.2, 0.3, 0.4]),  # noise_supports,
           [5.0, 4.0, 6.0, 2.0],  # cost_realizations,
           true,                  # is_minimization
       )
0.0

julia> risk_adjusted_probability
4-element Vector{Float64}:
 0.1
 0.2
 0.3
 0.4

julia> SDDP.adjust_probability(
           SDDP.AVaR(0.0),
           risk_adjusted_probability,
           [0.1, 0.2, 0.3, 0.4],  # nominal_probability,
           SDDP.Noise.([1, 2, 3, 4], [0.1, 0.2, 0.3, 0.4]),  # noise_supports,
           [5.0, 4.0, 6.0, 2.0],  # cost_realizations,
           true,                  # is_minimization
       )
0.0

julia> risk_adjusted_probability
4-element Vector{Float64}:
 0.0
 0.0
 1.0
 0.0
source

CVaR

SDDP.CVaRType
CVaR(γ)

The conditional value at risk (CV@R) risk measure.

This risk measure computes the expectation of the γ fraction of worst outcomes. γ must be in [0, 1].

When γ=1, this is equivalent to the Expectation risk measure. When γ=0, this is equivalent to the WorstCase risk measure.

CV@R is also known as the average value at risk (AV@R) or expected shortfall.

Example

julia> risk_adjusted_probability = zeros(4);

julia> SDDP.adjust_probability(
           SDDP.CVaR(0.5),
           risk_adjusted_probability,
           [0.1, 0.2, 0.3, 0.4],  # nominal_probability,
           SDDP.Noise.([1, 2, 3, 4], [0.1, 0.2, 0.3, 0.4]),  # noise_supports,
           [5.0, 4.0, 6.0, 2.0],  # cost_realizations,
           true,                  # is_minimization
       )
0.0

julia> risk_adjusted_probability
4-element Vector{Float64}:
 0.2
 0.19999999999999996
 0.6
 0.0

julia> SDDP.adjust_probability(
           SDDP.CVaR(1.0),
           risk_adjusted_probability,
           [0.1, 0.2, 0.3, 0.4],  # nominal_probability,
           SDDP.Noise.([1, 2, 3, 4], [0.1, 0.2, 0.3, 0.4]),  # noise_supports,
           [5.0, 4.0, 6.0, 2.0],  # cost_realizations,
           true,                  # is_minimization
       )
0.0

julia> risk_adjusted_probability
4-element Vector{Float64}:
 0.1
 0.2
 0.3
 0.4

julia> SDDP.adjust_probability(
           SDDP.CVaR(0.0),
           risk_adjusted_probability,
           [0.1, 0.2, 0.3, 0.4],  # nominal_probability,
           SDDP.Noise.([1, 2, 3, 4], [0.1, 0.2, 0.3, 0.4]),  # noise_supports,
           [5.0, 4.0, 6.0, 2.0],  # cost_realizations,
           true,                  # is_minimization
       )
0.0

julia> risk_adjusted_probability
4-element Vector{Float64}:
 0.0
 0.0
 1.0
 0.0
source

ConvexCombination

SDDP.ConvexCombinationType
ConvexCombination((weight::Float64, measure::AbstractRiskMeasure)...)

Create a weighted combination of risk measures.

Examples

julia> SDDP.ConvexCombination(
           (0.5, SDDP.Expectation()),
           (0.5, SDDP.AVaR(0.25))
       )
A convex combination of 0.5 * SDDP.Expectation() + 0.5 * SDDP.AVaR(0.25)

Convex combinations can also be constructed by adding weighted risk measures together as follows:

julia> 0.5 * SDDP.Expectation() + 0.5 * SDDP.AVaR(0.5)
A convex combination of 0.5 * SDDP.Expectation() + 0.5 * SDDP.AVaR(0.5)
source

EAVaR

SDDP.EAVaRFunction
EAVaR(;lambda=1.0, beta=1.0)

A risk measure that is a convex combination of Expectation and Average Value @ Risk (also called Conditional Value @ Risk).

    λ * E[x] + (1 - λ) * AV@R(β)[x]

Keyword Arguments

  • lambda: Convex weight on the expectation ((1 - lambda) weight is put on the AV@R component). Increasing values of lambda are less risk averse (more weight on expectation).

  • beta: The quantile at which to calculate the Average Value @ Risk. Increasing values of beta are less risk averse. If beta=0, then the AV@R component is the worst case risk measure.

Example

julia> SDDP.EAVaR(; lambda = 1.0, beta = 1.0)
A convex combination of 1.0 * SDDP.Expectation() + 0.0 * SDDP.AVaR(1.0)

julia> SDDP.EAVaR(; lambda = 0.0, beta = 1.0)
A convex combination of 0.0 * SDDP.Expectation() + 1.0 * SDDP.AVaR(1.0)

julia> SDDP.EAVaR(; lambda = 0.5, beta = 0.5)
A convex combination of 0.5 * SDDP.Expectation() + 0.5 * SDDP.AVaR(0.5)
source

ModifiedChiSquared

SDDP.ModifiedChiSquaredType
ModifiedChiSquared(radius::Float64; minimum_std=1e-5)

The distributionally robust SDDP risk measure of Philpott, A., de Matos, V., Kapelevich, L. Distributionally robust SDDP. Computational Management Science (2018) 165:431-454.

Explanation

In a Distributionally Robust Optimization (DRO) approach, we modify the probabilities we associate with all future scenarios so that the resulting probability distribution is the "worst case" probability distribution, in some sense.

In each backward pass we will compute a worst case probability distribution vector p. We compute p so that:

p ∈ argmax p'z
      s.t. [r; p - a] in SecondOrderCone()
           sum(p) == 1
           p >= 0

where

  1. z is a vector of future costs. We assume that our aim is to minimize future cost p'z. If we maximize reward, we would have p ∈ argmin{p'z}.
  2. a is the uniform distribution
  3. r is a user specified radius - the larger the radius, the more conservative the policy.

Notes

The largest radius that will work with S scenarios is sqrt((S-1)/S).

If the uncorrected standard deviation of the objecive realizations is less than minimum_std, then the risk-measure will default to Expectation().

This code was contributed by Lea Kapelevich.

Example

julia> risk_adjusted_probability = zeros(4);

julia> SDDP.adjust_probability(
           SDDP.ModifiedChiSquared(0.5),
           risk_adjusted_probability,
           [0.1, 0.2, 0.3, 0.4],  # nominal_probability,
           SDDP.Noise.([1, 2, 3, 4], [0.1, 0.2, 0.3, 0.4]),  # noise_supports,
           [5.0, 4.0, 6.0, 2.0],  # cost_realizations,
           true,                  # is_minimization
       )
0.0

julia> risk_adjusted_probability
4-element Vector{Float64}:
 0.2267731382092775
 0.1577422872635742
 0.5958039891549808
 0.019680585372167547

julia> SDDP.adjust_probability(
           SDDP.ModifiedChiSquared(0.5),
           risk_adjusted_probability,
           [0.25, 0.25, 0.25, 0.25],  # nominal_probability,
           SDDP.Noise.([1, 2, 3, 4], [0.25, 0.25, 0.25, 0.25]),  # noise_supports,
           [5.0, 4.0, 6.0, 2.0],  # cost_realizations,
           true,                  # is_minimization
       )
0.0

julia> risk_adjusted_probability
4-element Vector{Float64}:
 0.3333333333333333
 0.044658198738520394
 0.6220084679281462
 0.0
source

Entropic

SDDP.EntropicType
Entropic(γ::Float64)

The entropic risk measure as described by:

Dowson, O., Morton, D.P. & Pagnoncelli, B.K. Incorporating convex risk
measures into multistage stochastic programming algorithms. Annals of
Operations Research (2022). [doi](https://doi.org/10.1007/s10479-022-04977-w).

As γ increases, the measure becomes more risk-averse.

Example

julia> risk_adjusted_probability = zeros(4);

julia> SDDP.adjust_probability(
           SDDP.Entropic(0.1),
           risk_adjusted_probability,
           [0.1, 0.2, 0.3, 0.4],  # nominal_probability,
           SDDP.Noise.([1, 2, 3, 4], [0.1, 0.2, 0.3, 0.4]),  # noise_supports,
           [5.0, 4.0, 6.0, 2.0],  # cost_realizations,
           true,                  # is_minimization
       )
-0.14333892665462006

julia> risk_adjusted_probability
4-element Vector{Float64}:
 0.1100296362588547
 0.19911786395979578
 0.3648046623591841
 0.3260478374221655

julia> SDDP.adjust_probability(
           SDDP.Entropic(1.0),
           risk_adjusted_probability,
           [0.1, 0.2, 0.3, 0.4],  # nominal_probability,
           SDDP.Noise.([1, 2, 3, 4], [0.1, 0.2, 0.3, 0.4]),  # noise_supports,
           [5.0, 4.0, 6.0, 2.0],  # cost_realizations,
           true,                  # is_minimization
       )
-0.12038063114659443

julia> risk_adjusted_probability
4-element Vector{Float64}:
 0.09911045746726178
 0.07292139941460454
 0.8082304666305623
 0.019737676487571337

julia> SDDP.adjust_probability(
           SDDP.Entropic(10.0),
           risk_adjusted_probability,
           [0.1, 0.2, 0.3, 0.4],  # nominal_probability,
           SDDP.Noise.([1, 2, 3, 4], [0.1, 0.2, 0.3, 0.4]),  # noise_supports,
           [5.0, 4.0, 6.0, 2.0],  # cost_realizations,
           true,                  # is_minimization
       )
-0.12038063114659443

julia> risk_adjusted_probability
4-element Vector{Float64}:
 1.5133080886430772e-5
 1.374081618667918e-9
 0.999984865545032
 5.664386611687232e-18
source

Wasserstein

SDDP.WassersteinType
Wasserstein(norm::Function, solver_factory; alpha::Float64)

A distributionally-robust risk measure based on the Wasserstein distance.

As alpha increases, the measure becomes more risk-averse. When alpha=0, the measure is equivalent to the expectation operator. As alpha increases, the measure approaches the Worst-case risk measure.

norm

The norm argument is a function that computes the distance between two supports of your distribution. It must have the signature:

wasserstein_norm(x::SDDP.Noise, y::SDDP.Noise)::Float64

The input arguments are of type Noise. The .term values will depend on what supports you passed to parameterize.

Example

julia> import HiGHS

julia> risk_adjusted_probability = zeros(4);

julia> wasserstein_norm(x::SDDP.Noise, y::SDDP.Noise) = abs(x.term - y.term);

julia> SDDP.adjust_probability(
           SDDP.Wasserstein(wasserstein_norm, HiGHS.Optimizer; alpha = 0.5),
           risk_adjusted_probability,
           [0.1, 0.2, 0.3, 0.4],  # nominal_probability,
           SDDP.Noise.([1.0, 2.0, 3.0, 4.0], [0.1, 0.2, 0.3, 0.4]),  # noise_supports,
           [5.0, 4.0, 6.0, 2.0],  # cost_realizations,
           true,                  # is_minimization
       )
0.0

julia> risk_adjusted_probability
4-element Vector{Float64}:
  0.1
  0.10000000000000003
  0.7999999999999999
 -0.0
source

AbstractDualityHandler

ContinuousConicDuality

SDDP.ContinuousConicDualityType
ContinuousConicDuality()

Compute dual variables in the backward pass using conic duality, relaxing any binary or integer restrictions as necessary.

Theory

Given the problem

min Cᵢ(x̄, u, w) + θᵢ
 st (x̄, x′, u) in Xᵢ(w) ∩ S
    x̄ - x == 0          [λ]

where S ⊆ ℝ×ℤ, we relax integrality and use conic duality to solve for λ in the problem:

min Cᵢ(x̄, u, w) + θᵢ
 st (x̄, x′, u) in Xᵢ(w)
    x̄ - x == 0          [λ]
source

LagrangianDuality

SDDP.LagrangianDualityType
LagrangianDuality(;
    method::LocalImprovementSearch.AbstractSearchMethod =
        LocalImprovementSearch.BFGS(100),
)

Obtain dual variables in the backward pass using Lagrangian duality.

Arguments

  • method: the LocalImprovementSearch method for maximizing the Lagrangian dual problem.

Theory

Given the problem

min Cᵢ(x̄, u, w) + θᵢ
 st (x̄, x′, u) in Xᵢ(w) ∩ S
    x̄ - x == 0          [λ]

where S ⊆ ℝ×ℤ, we solve the problem max L(λ), where:

L(λ) = min Cᵢ(x̄, u, w) + θᵢ - λ' h(x̄)
        st (x̄, x′, u) in Xᵢ(w) ∩ S

and where h(x̄) = x̄ - x.
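
Example

For example, to use Lagrangian duality when computing cuts for a model with integer variables:

SDDP.train(model; duality_handler = SDDP.LagrangianDuality())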

source

StrengthenedConicDuality

SDDP.StrengthenedConicDualityType
StrengthenedConicDuality()

Obtain dual variables in the backward pass using strengthened conic duality.

Theory

Given the problem

min Cᵢ(x̄, u, w) + θᵢ
 st (x̄, x′, u) in Xᵢ(w) ∩ S
    x̄ - x == 0          [λ]

we first obtain an estimate for λ using ContinuousConicDuality.

Then, we evaluate the Lagrangian function:

L(λ) = min Cᵢ(x̄, u, w) + θᵢ - λ' (x̄ - x)
        st (x̄, x′, u) in Xᵢ(w) ∩ S

to obtain a better estimate of the intercept.

source

BanditDuality

SDDP.BanditDualityType
BanditDuality()

Formulates the problem of choosing a duality handler as a multi-armed bandit problem. The arms to choose between are ContinuousConicDuality, StrengthenedConicDuality, and LagrangianDuality.

Our problem isn't a typical multi-armed bandit for two reasons:

  1. The reward distribution is non-stationary (each arm converges to 0 as it keeps getting pulled).
  2. The distribution of rewards is dependent on the history of the arms that were chosen.

We choose a very simple heuristic: pick the arm with the best mean + 1 standard deviation. That should ensure we consistently pick the arm with the best likelihood of improving the value function.

In future, we should consider discounting the rewards of earlier iterations, and focus more on the more-recent rewards.

source

simulate

SDDP.simulateFunction
simulate(
    model::PolicyGraph,
    number_replications::Int = 1,
    variables::Vector{Symbol} = Symbol[];
    sampling_scheme::AbstractSamplingScheme =
        InSampleMonteCarlo(),
    custom_recorders = Dict{Symbol, Function}(),
    duality_handler::Union{Nothing,AbstractDualityHandler} = nothing,
    skip_undefined_variables::Bool = false,
    parallel_scheme::AbstractParallelScheme = Serial(),
    incoming_state::Dict{String,Float64} = _initial_state(model),
 )::Vector{Vector{Dict{Symbol,Any}}}

Perform a simulation of the policy model with number_replications replications.

Return data structure

Returns a vector with one element for each replication. Each element is a vector with one element for each node in the scenario that was sampled. Each element in that vector is a dictionary containing information about the subproblem that was solved.

In that dictionary there are four special keys:

  • :node_index, which records the index of the sampled node in the policy model

  • :noise_term, which records the noise observed at the node

  • :stage_objective, which records the stage-objective of the subproblem

  • :bellman_term, which records the cost/value-to-go of the node.

The sum of :stage_objective + :bellman_term will equal the objective value of the solved subproblem.

In addition to the special keys, the dictionary will contain the result of key => JuMP.value(subproblem[key]) for each key in variables. This is useful to obtain the primal value of the state and control variables.
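
For example, the following sketch simulates 100 replications of a model that is assumed to have a state variable named x, and computes the average cumulative objective:

simulations = SDDP.simulate(model, 100, [:x])

objectives = map(simulations) do replication
    return sum(stage[:stage_objective] for stage in replication)
end

average_objective = sum(objectives) / length(objectives)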

Positional arguments

  • model: the model to simulate

  • number_replications::Int = 1: the number of simulation replications to conduct, that is, the length of the simulation vector that is returned by this function. If omitted, this defaults to 1.

  • variables::Vector{Symbol} = Symbol[]: a list of the variable names to record the value of in each stage.

Keyword arguments

  • sampling_scheme: the sampling scheme used when simulating.

  • custom_recorders: see Custom recorders section below.

  • duality_handler: the SDDP.AbstractDualityHandler used to compute dual variables. If you do not require dual variables (or if they are not available), pass duality_handler = nothing.

  • skip_undefined_variables: If you attempt to simulate the value of a variable that is only defined in some of the stage problems, an error will be thrown. To over-ride this (and return a NaN instead), pass skip_undefined_variables = true.

  • parallel_scheme: Use parallel_scheme::AbstractParallelScheme to specify a scheme for simulating in parallel. Defaults to Serial.

  • incoming_state: the initial value of the state variables, if it differs from that at the root node. Each key should be the string name of the state variable.

Custom recorders

For more complicated data, the custom_recorders keyword argument can be used.

For example, to record the dual of a constraint named my_constraint, pass the following:

simulation_results = SDDP.simulate(model, 2;
    custom_recorders = Dict{Symbol, Function}(
        :constraint_dual => sp -> JuMP.dual(sp[:my_constraint])
    )
)

The value of the dual in the first stage of the second replication can be accessed as:

simulation_results[2][1][:constraint_dual]
source

calculate_bound

SDDP.calculate_boundFunction
SDDP.calculate_bound(
    model::PolicyGraph,
    state::Dict{Symbol,Float64} = model.initial_root_state;
    risk_measure::AbstractRiskMeasure = Expectation(),
)

Calculate the lower bound (if minimizing, otherwise upper bound) of the problem model at the point state, assuming the risk measure at the root node is risk_measure.
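
Example

For example, assuming model has been trained:

SDDP.calculate_bound(model)

To evaluate the bound using a different risk measure at the root node:

SDDP.calculate_bound(model; risk_measure = SDDP.WorstCase())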

source

add_all_cuts

SDDP.add_all_cutsFunction
add_all_cuts(model::PolicyGraph)

Add all cuts that may have been deleted back into the model.

Explanation

During the solve, SDDP.jl may decide to remove cuts for a variety of reasons.

These can include cuts that define the optimal value function, particularly around the extremes of the state-space (e.g., reservoirs empty).

This function ensures that all cuts discovered are added back into the model.

You should call this after train and before simulate.
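
Example

For example (the iteration and replication counts are illustrative):

SDDP.train(model; iteration_limit = 10)

SDDP.add_all_cuts(model)

simulations = SDDP.simulate(model, 100)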

source

Decision rules

DecisionRule

SDDP.DecisionRuleType
DecisionRule(model::PolicyGraph{T}; node::T)

Create a decision rule for node node in model.

Example

rule = SDDP.DecisionRule(model; node = 1)
source

evaluate

SDDP.evaluateFunction
evaluate(
    rule::DecisionRule;
    incoming_state::Dict{Symbol,Float64},
    noise = nothing,
    controls_to_record = Symbol[],
)

Evaluate the decision rule rule at the point described by the incoming_state and noise.

If the node is deterministic, omit the noise argument.

Pass a list of symbols to controls_to_record to save the optimal primal solution corresponding to the names registered in the model.
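
For illustration, a hedged sketch; the state name volume, control name generation, and noise value are assumptions for this example:

rule = SDDP.DecisionRule(model; node = 1)
solution = SDDP.evaluate(
    rule;
    incoming_state = Dict(:volume => 100.0),
    noise = 50.0,
    controls_to_record = [:generation],
)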

source
evaluate(
    V::ValueFunction,
    point::Dict{Union{Symbol,String},<:Real}
    objective_state = nothing,
    belief_state = nothing
)

Evaluate the value function V at point in the state-space.

Returns a tuple containing the height of the function, and the subgradient w.r.t. the convex state-variables.

Examples

evaluate(V, Dict(:volume => 1.0))

If the state variable is constructed like @variable(sp, volume[1:4] >= 0, SDDP.State, initial_value = 0.0), use [i] to index the state variable:

evaluate(V, Dict(Symbol("volume[1]") => 1.0))

You can also use strings or symbols for the keys.

evaluate(V, Dict("volume[1]" => 1))
source
evaluate(V::ValueFunction{Nothing, Nothing}; kwargs...)

Evaluate the value function V at the point in the state-space specified by kwargs.

Examples

evaluate(V; volume = 1)
source
evaluate(
    model::PolicyGraph{T},
    validation_scenarios::ValidationScenarios{T,S},
) where {T,S}

Evaluate the performance of the policy contained in model after a call to train on the scenarios specified by validation_scenarios.

Examples

model, validation_scenarios = read_from_file("my_model.sof.json")
train(model; iteration_limit = 100)
simulations = evaluate(model, validation_scenarios)
source

SpaghettiPlot

SDDP.SpaghettiPlotType
SDDP.SpaghettiPlot(; stages, scenarios)

Initialize a new SpaghettiPlot with stages stages and scenarios number of replications.

source

add_spaghetti

SDDP.add_spaghettiFunction
SDDP.add_spaghetti(data_function::Function, plt::SpaghettiPlot; kwargs...)

Description

Add a new figure to the SpaghettiPlot plt, where the y-value of the scenario-th line when x = stage is given by data_function(plt.simulations[scenario][stage]).

Keyword arguments

  • xlabel: set the xaxis label
  • ylabel: set the yaxis label
  • title: set the title of the plot
  • ymin: set the minimum y value
  • ymax: set the maximum y value
  • cumulative: plot the additive accumulation of the value across the stages
  • interpolate: the interpolation method for lines between stages. Defaults to "linear"; see the d3 docs for all options.

Examples

simulations = simulate(model, 10)
plt = SDDP.spaghetti_plot(simulations)
SDDP.add_spaghetti(plt; title = "Stage objective") do data
    return data[:stage_objective]
end
source

publication_plot

SDDP.publication_plotFunction
SDDP.publication_plot(
    data_function, simulations;
    quantile = [0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0],
    kwargs...)

Create a Plots.jl recipe plot of the simulations.

See Plots.jl for the list of keyword arguments.

Examples

SDDP.publication_plot(simulations; title = "My title") do data
    return data[:stage_objective]
end
source

ValueFunction

SDDP.ValueFunctionType
ValueFunction

A representation of the value function. SDDP.jl uses the following unique representation of the value function that is undocumented in the literature.

It supports three types of state variables:

  1. x - convex "resource" states
  2. b - concave "belief" states
  3. y - concave "objective" states

In addition, we have three types of cuts:

  1. Single-cuts (also called "average" cuts in the literature), which involve the risk-adjusted expectation of the cost-to-go.
  2. Multi-cuts, which use a different cost-to-go term for each realization w.
  3. Risk-cuts, which correspond to the facets of the dual interpretation of a coherent risk measure.

Therefore, ValueFunction returns a JuMP model of the following form:

V(x, b, y) = min: μᵀb + νᵀy + θ
             s.t. # "Single" / "Average" cuts
                  μᵀb(j) + νᵀy(j) + θ >= α(j) + xᵀβ(j), ∀ j ∈ J
                  # "Multi" cuts
                  μᵀb(k) + νᵀy(k) + φ(w) >= α(k, w) + xᵀβ(k, w), ∀w ∈ Ω, k ∈ K
                  # "Risk-set" cuts
                  θ ≥ Σ{p(k, w) * φ(w)}_w - μᵀb(k) - νᵀy(k), ∀ k ∈ K
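
For illustration, a hedged sketch that assumes the value function of a node can be constructed as SDDP.ValueFunction(model; node = 1), mirroring the DecisionRule constructor above; the state name volume is also an assumption:

V = SDDP.ValueFunction(model; node = 1)
# Returns the height of the function and the subgradient w.r.t. the convex states:
height, subgradient = SDDP.evaluate(V, Dict(:volume => 1.0))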
source

evaluate

SDDP.evaluateMethod
evaluate(
    V::ValueFunction,
    point::Dict{Union{Symbol,String},<:Real}
    objective_state = nothing,
    belief_state = nothing
)

Evaluate the value function V at point in the state-space.

Returns a tuple containing the height of the function, and the subgradient w.r.t. the convex state-variables.

Examples

evaluate(V, Dict(:volume => 1.0))

If the state variable is constructed like @variable(sp, volume[1:4] >= 0, SDDP.State, initial_value = 0.0), use [i] to index the state variable:

evaluate(V, Dict(Symbol("volume[1]") => 1.0))

You can also use strings or symbols for the keys.

evaluate(V, Dict("volume[1]" => 1))
source

plot

SDDP.plotFunction
plot(plt::SpaghettiPlot[, filename::String]; open::Bool = true)

Save the SpaghettiPlot plt to filename. If filename is not given, it will be saved to a temporary directory. If open = true, then a browser window will be opened to display the resulting HTML file.
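
For illustration, a sketch that extends the add_spaghetti example above; the filename is an assumption:

simulations = SDDP.simulate(model, 10)
plt = SDDP.spaghetti_plot(simulations)
SDDP.add_spaghetti(plt; title = "Stage objective") do data
    return data[:stage_objective]
end
# Save to a named HTML file without opening a browser window:
SDDP.plot(plt, "spaghetti.html"; open = false)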

source

write_subproblem_to_file

SDDP.write_subproblem_to_fileFunction
write_subproblem_to_file(
    node::Node,
    filename::String;
    throw_error::Bool = false,
)

Write the subproblem contained in node to the file filename.

The throw_error argument is used internally by SDDP.jl. If set, an error will be thrown.

Example

SDDP.write_subproblem_to_file(model[1], "subproblem_1.lp")
source

deterministic_equivalent

SDDP.deterministic_equivalentFunction
deterministic_equivalent(
    pg::PolicyGraph{T},
    optimizer = nothing;
    time_limit::Union{Real,Nothing} = 60.0,
)

Form a JuMP model that represents the deterministic equivalent of the problem.

Examples

deterministic_equivalent(model)
deterministic_equivalent(model, HiGHS.Optimizer)
source

write_to_file

SDDP.write_to_fileFunction
write_to_file(
    model::PolicyGraph,
    filename::String;
    compression::MOI.FileFormats.AbstractCompressionScheme =
        MOI.FileFormats.AutomaticCompression(),
    kwargs...
)

Write model to filename in the StochOptFormat file format.

Pass an argument to compression to override the default of automatically detecting the file compression to use based on the extension of filename.

See Base.write(::IO, ::PolicyGraph) for information on the keyword arguments that can be provided.

Warning

This function is experimental. See the full warning in Base.write(::IO, ::PolicyGraph).

Examples

write_to_file(model, "my_model.sof.json"; validation_scenarios = 10)
source

read_from_file

SDDP.read_from_fileFunction
read_from_file(
    filename::String;
    compression::MOI.FileFormats.AbstractCompressionScheme =
        MOI.FileFormats.AutomaticCompression(),
    kwargs...
)::Tuple{PolicyGraph, ValidationScenarios}

Return a tuple containing a PolicyGraph object and a ValidationScenarios read from filename in the StochOptFormat file format.

Pass an argument to compression to override the default of automatically detecting the file compression to use based on the extension of filename.

See Base.read(::IO, ::Type{PolicyGraph}) for information on the keyword arguments that can be provided.

Warning

This function is experimental. See the full warning in Base.read(::IO, ::Type{PolicyGraph}).

Examples

model, validation_scenarios = read_from_file("my_model.sof.json")
source

write

Base.writeMethod
Base.write(
    io::IO,
    model::PolicyGraph;
    validation_scenarios::Union{Nothing,Int,ValidationScenarios} = nothing,
    sampling_scheme::AbstractSamplingScheme = InSampleMonteCarlo(),
    kwargs...
)

Write model to io in the StochOptFormat file format.

Pass an Int to validation_scenarios (default nothing) to specify the number of test scenarios to generate using the sampling_scheme sampling scheme. Alternatively, pass a ValidationScenarios object to manually specify the test scenarios to use.

Any additional kwargs passed to write will be stored in the top-level of the resulting StochOptFormat file. Valid arguments include name, author, date, and description.

Compatibility

Warning

This function is experimental. Things may change between commits. You should not rely on this functionality as a long-term file format (yet).

In addition to potential changes to the underlying format, only a subset of possible modifications are supported. These include:

  • JuMP.fix
  • JuMP.set_lower_bound
  • JuMP.set_upper_bound
  • JuMP.set_normalized_rhs
  • Changes to the constant or affine terms in a stage objective.

If your model uses features other than these, this function will silently write an incorrect formulation of the problem.

Examples

open("my_model.sof.json", "w") do io
    write(
        io,
        model;
        validation_scenarios = 10,
        name = "MyModel",
        author = "@odow",
        date = "2020-07-20",
        description = "Example problem for the SDDP.jl documentation",
    )
end
source

read

Base.readMethod
Base.read(
    io::IO,
    ::Type{PolicyGraph};
    bound::Float64 = 1e6,
)::Tuple{PolicyGraph,ValidationScenarios}

Return a tuple containing a PolicyGraph object and a ValidationScenarios read from io in the StochOptFormat file format.

See also: evaluate.

Compatibility

Warning

This function is experimental. Things may change between commits. You should not rely on this functionality as a long-term file format (yet).

In addition to potential changes to the underlying format, only a subset of possible modifications are supported. These include:

  • Additive random variables in the constraints or in the objective
  • Multiplicative random variables in the objective

If your model uses features other than these, this function may throw an error or silently build a non-convex model.

Examples

open("my_model.sof.json", "r") do io
    model, validation_scenarios = read(io, PolicyGraph)
end
source

evaluate

SDDP.evaluateMethod
evaluate(
    model::PolicyGraph{T},
    validation_scenarios::ValidationScenarios{T,S},
) where {T,S}

Evaluate the performance of the policy contained in model after a call to train on the scenarios specified by validation_scenarios.

Examples

model, validation_scenarios = read_from_file("my_model.sof.json")
train(model; iteration_limit = 100)
simulations = evaluate(model, validation_scenarios)
source

ValidationScenarios

SDDP.ValidationScenariosType
ValidationScenarios{T,S}(scenarios::Vector{ValidationScenario{T,S}})

An AbstractSamplingScheme based on a vector of scenarios.

Each scenario is a vector of Tuple{T, S} where the first element is the node to visit and the second element is the realization of the stagewise-independent noise term. Pass nothing if the node is deterministic.
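
For illustration, a hedged sketch for a three-stage linear graph; it assumes the ValidationScenario constructor below wraps a vector of (node, noise) tuples, and the noise values are illustrative:

scenario_1 = SDDP.ValidationScenario([(1, 0.1), (2, 0.3), (3, 0.5)])
scenario_2 = SDDP.ValidationScenario([(1, 0.2), (2, 0.4), (3, 0.6)])
validation_scenarios = SDDP.ValidationScenarios([scenario_1, scenario_2])
simulations = SDDP.evaluate(model, validation_scenarios)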

source

ValidationScenario