# Uncertainty in the objective function
In the previous tutorial, *An introduction to SDDP.jl*, we created a stochastic hydro-thermal scheduling model. In this tutorial, we extend the problem by adding uncertainty to the fuel costs.
Previously, we assumed that the fuel cost was deterministic: $50/MWh in the first stage, $100/MWh in the second stage, and $150/MWh in the third stage. For this tutorial, we assume that in addition to these base costs, the actual fuel cost is correlated with the inflows.
Our new model for the uncertainty is given by the following table:
| ω | 1 | 2 | 3 |
|---|---|---|---|
| P(ω) | 1/3 | 1/3 | 1/3 |
| inflow | 0 | 50 | 100 |
| fuel_multiplier | 1.5 | 1.0 | 0.75 |
In stage `t`, the objective is now to minimize:

```
fuel_multiplier * fuel_cost[t] * thermal_generation
```
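Before building the model, it can help to check the numbers by hand. The following sketch (the variable names are illustrative, not part of the SDDP.jl API) computes the expected fuel multiplier and the resulting expected first-stage cost per MWh of thermal generation:

```julia
# Expected fuel multiplier under the three equally likely realizations.
P = [1 / 3, 1 / 3, 1 / 3]
fuel_multiplier = [1.5, 1.0, 0.75]
expected_multiplier = sum(P .* fuel_multiplier)  # (1.5 + 1.0 + 0.75) / 3 ≈ 1.083

# Expected first-stage cost per MWh of thermal generation.
fuel_cost = [50.0, 100.0, 150.0]
expected_stage1_cost = expected_multiplier * fuel_cost[1]  # ≈ 54.17
```

Note that the expected multiplier exceeds 1, so ignoring the uncertainty and using the base cost of \$50/MWh would understate the expected first-stage fuel cost.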
## Creating a model
To add an uncertain objective, call `@stageobjective` from inside the `SDDP.parameterize` function.
```julia
using SDDP, HiGHS

model = SDDP.LinearPolicyGraph(
    stages = 3,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do subproblem, t
    # Define the state variable.
    @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)
    # Define the control variables.
    @variables(subproblem, begin
        thermal_generation >= 0
        hydro_generation >= 0
        hydro_spill >= 0
        inflow
    end)
    # Define the constraints.
    @constraints(
        subproblem,
        begin
            volume.out == volume.in + inflow - hydro_generation - hydro_spill
            thermal_generation + hydro_generation == 150.0
        end
    )
    fuel_cost = [50.0, 100.0, 150.0]
    # Parameterize the subproblem.
    Ω = [
        (inflow = 0.0, fuel_multiplier = 1.5),
        (inflow = 50.0, fuel_multiplier = 1.0),
        (inflow = 100.0, fuel_multiplier = 0.75),
    ]
    SDDP.parameterize(subproblem, Ω, [1 / 3, 1 / 3, 1 / 3]) do ω
        JuMP.fix(inflow, ω.inflow)
        @stageobjective(
            subproblem,
            ω.fuel_multiplier * fuel_cost[t] * thermal_generation
        )
    end
end
```

```
A policy graph with 3 nodes.
 Node indices: 1, 2, 3
```
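A quick check on problem size (assuming stagewise-independent noise, as in this model): with 3 stages and 3 noise realizations per stage, the number of in-sample scenarios is 3³ = 27, which matches the `Scenarios` line reported in the training log.

```julia
# Scenario count for stagewise-independent noise: one realization is
# chosen per stage, so scenarios multiply across stages.
n_stages = 3
n_realizations = 3
n_scenarios = n_realizations^n_stages  # 27
```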
## Training and simulating the policy
As in the previous two tutorials, we train and simulate the policy:
```julia
SDDP.train(model; iteration_limit = 10)
simulations = SDDP.simulate(model, 500)

objective_values =
    [sum(stage[:stage_objective] for stage in sim) for sim in simulations]

using Statistics

μ = round(mean(objective_values), digits = 2)
ci = round(1.96 * std(objective_values) / sqrt(500), digits = 2)

println("Confidence interval: ", μ, " ± ", ci)
println("Lower bound: ", round(SDDP.calculate_bound(model), digits = 2))
```

```
------------------------------------------------------------------------------
         SDDP.jl (c) Oscar Dowson and SDDP.jl contributors, 2017-23

Problem
  Nodes           : 3
  State variables : 1
  Scenarios       : 2.70000e+01
  Existing cuts   : false
  Subproblem structure                      : (min, max)
    Variables                               : (7, 7)
    VariableRef in MOI.LessThan{Float64}    : (1, 2)
    VariableRef in MOI.GreaterThan{Float64} : (5, 5)
    AffExpr in MOI.EqualTo{Float64}         : (2, 2)
Options
  Solver          : serial mode
  Risk measure    : SDDP.Expectation()
  Sampling scheme : SDDP.InSampleMonteCarlo

Numerical stability report
  Non-zero Matrix range     [1e+00, 1e+00]
  Non-zero Objective range  [1e+00, 2e+02]
  Non-zero Bounds range     [2e+02, 2e+02]
  Non-zero RHS range        [2e+02, 2e+02]
No problems detected

 Iteration    Simulation       Bound         Time (s)    Proc. ID   # Solves
        1    3.750000e+04   3.958333e+03   3.077984e-03          1         12
        2    8.365385e+03   1.032738e+04   4.402876e-03          1         24
        3    1.125000e+04   1.050595e+04   5.432844e-03          1         36
        4    1.875000e+03   1.062500e+04   6.860018e-03          1         48
        5    5.000000e+03   1.062500e+04   8.282900e-03          1         60
        6    1.875000e+03   1.062500e+04   9.693861e-03          1         72
        7    5.000000e+03   1.062500e+04   1.122689e-02          1         84
        8    1.875000e+03   1.062500e+04   1.265097e-02          1         96
        9    1.125000e+04   1.062500e+04   1.399398e-02          1        108
       10    1.875000e+03   1.062500e+04   1.536202e-02          1        120

Terminating training
  Status         : iteration_limit
  Total time (s) : 1.536202e-02
  Total solves   : 120
  Best bound     : 1.062500e+04
  Simulation CI  : 8.586538e+03 ± 6.714190e+03
------------------------------------------------------------------------------
```
```
Confidence interval: 10310.54 ± 645.89
Lower bound: 10625.0
```

*This page was generated using Literate.jl.*