Air conditioning
This tutorial was generated using Literate.jl.
Taken from Anthony Papavasiliou's notes on SDDP.

Consider the following problem:

- Produce air conditioners over a 3-month horizon
- Up to 200 units/month can be produced at $100/unit
- Additional units can be produced in overtime at $300/unit
- Demand in period 1 is known to be 100 units
- Demand in periods 2 and 3 is equally likely to be 100 or 300 units
- Units can be stored between months at $50/unit
- All demand must be met

The known optimal solution has an expected cost of $62,500.
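As a sketch of the stage subproblem implied by the description above (the symbols $p_t$, $o_t$, $x$, $x'$, and $d_t$ are notation introduced here; they correspond to `production`, `overtime`, `stored_production.in`, `stored_production.out`, and `demand` in the code below):

```math
\begin{aligned}
\min\;& 100\,p_t + 300\,o_t + 50\,x' + \text{(expected future cost)} \\
\text{s.t.}\;& x' = x + p_t + o_t - d_t \\
& 0 \le p_t \le 200,\quad o_t \ge 0,\quad 0 \le x' \le 100,\quad p_t,\, o_t,\, x' \in \mathbb{Z},
\end{aligned}
```

where the expected future cost term is the quantity that SDDP.jl approximates with cuts.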
using SDDP, HiGHS, Test

function air_conditioning_model(duality_handler)
    model = SDDP.LinearPolicyGraph(;
        stages = 3,
        lower_bound = 0.0,
        optimizer = HiGHS.Optimizer,
    ) do sp, stage
        # Units carried between months; at most 100 can be stored.
        @variable(
            sp,
            0 <= stored_production <= 100,
            Int,
            SDDP.State,
            initial_value = 0
        )
        @variable(sp, 0 <= production <= 200, Int)
        @variable(sp, overtime >= 0, Int)
        @variable(sp, demand)
        # Stage 1 demand is known; stages 2 and 3 are equally likely 100 or 300.
        DEMAND = [[100.0], [100.0, 300.0], [100.0, 300.0]]
        SDDP.parameterize(ω -> JuMP.fix(demand, ω), sp, DEMAND[stage])
        @constraint(
            sp,
            stored_production.out ==
            stored_production.in + production + overtime - demand
        )
        @stageobjective(
            sp,
            100 * production + 300 * overtime + 50 * stored_production.out
        )
    end
    SDDP.train(model; duality_handler = duality_handler)
    @test isapprox(SDDP.calculate_bound(model), 62_500.0, atol = 0.1)
    return
end

for duality_handler in [SDDP.LagrangianDuality(), SDDP.ContinuousConicDuality()]
    air_conditioning_model(duality_handler)
end
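Running the loop above trains the policy once for each duality handler and prints the training logs shown below. If you also want to inspect the decisions of a trained policy, one option, not part of the original tutorial, is to change the last line of `air_conditioning_model` to `return model` and then simulate the policy with `SDDP.simulate`. A minimal sketch under that assumption (the choice of duality handler and the 100 replications are arbitrary):

```julia
# Sketch only: assumes air_conditioning_model has been edited to `return model`.
model = air_conditioning_model(SDDP.ContinuousConicDuality())

# Sample 100 demand scenarios and record the decision variables of interest.
simulations = SDDP.simulate(
    model,
    100,
    [:production, :overtime, :stored_production],
)

# Average sampled cost; it should be close to the 62,500 bound.
costs = [sum(stage[:stage_objective] for stage in sim) for sim in simulations]
println(sum(costs) / length(costs))
```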
Training output for the first duality handler, SDDP.LagrangianDuality():
-------------------------------------------------------------------
SDDP.jl (c) Oscar Dowson and contributors, 2017-24
-------------------------------------------------------------------
problem
nodes : 3
state variables : 1
scenarios : 4.00000e+00
existing cuts : false
options
solver : serial mode
risk measure : SDDP.Expectation()
sampling scheme : SDDP.InSampleMonteCarlo
subproblem structure
VariableRef : [6, 6]
AffExpr in MOI.EqualTo{Float64} : [1, 1]
VariableRef in MOI.GreaterThan{Float64} : [4, 4]
VariableRef in MOI.Integer : [3, 3]
VariableRef in MOI.LessThan{Float64} : [2, 3]
numerical stability report
matrix range [1e+00, 1e+00]
objective range [1e+00, 3e+02]
bounds range [1e+02, 2e+02]
rhs range [0e+00, 0e+00]
-------------------------------------------------------------------
iteration simulation bound time (s) solves pid
-------------------------------------------------------------------
1L 7.000000e+04 6.166667e+04 5.877728e-01 8 1
40L 5.500000e+04 6.250000e+04 8.389909e-01 344 1
-------------------------------------------------------------------
status : simulation_stopping
total time (s) : 8.389909e-01
total solves : 344
best bound : 6.250000e+04
simulation ci : 6.091250e+04 ± 6.325667e+03
numeric issues : 0
-------------------------------------------------------------------
Training output for the second duality handler, SDDP.ContinuousConicDuality():
-------------------------------------------------------------------
SDDP.jl (c) Oscar Dowson and contributors, 2017-24
-------------------------------------------------------------------
problem
nodes : 3
state variables : 1
scenarios : 4.00000e+00
existing cuts : false
options
solver : serial mode
risk measure : SDDP.Expectation()
sampling scheme : SDDP.InSampleMonteCarlo
subproblem structure
VariableRef : [6, 6]
AffExpr in MOI.EqualTo{Float64} : [1, 1]
VariableRef in MOI.GreaterThan{Float64} : [4, 4]
VariableRef in MOI.Integer : [3, 3]
VariableRef in MOI.LessThan{Float64} : [2, 3]
numerical stability report
matrix range [1e+00, 1e+00]
objective range [1e+00, 3e+02]
bounds range [1e+02, 2e+02]
rhs range [0e+00, 0e+00]
-------------------------------------------------------------------
iteration simulation bound time (s) solves pid
-------------------------------------------------------------------
1 3.000000e+04 6.250000e+04 4.208088e-03 8 1
20 4.000000e+04 6.250000e+04 5.128121e-02 172 1
-------------------------------------------------------------------
status : simulation_stopping
total time (s) : 5.128121e-02
total solves : 172
best bound : 6.250000e+04
simulation ci : 5.650000e+04 ± 6.785916e+03
numeric issues : 0
-------------------------------------------------------------------
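Both runs converge to the best bound of 6.250000e+04, matching the known optimal cost. As an independent sanity check, not part of the tutorial, the $62,500 figure can also be reproduced by brute-force backward induction over the integer storage state, since the only state variable is the inventory level between 0 and 100 units:

```julia
# Sketch only: backward induction on the storage state to confirm the
# optimal expected cost of 62,500. Storage cost is charged on end-of-month
# inventory, matching the stage objective above.
function verify_optimum()
    demands = [[100], [100, 300], [100, 300]]  # per-stage demand outcomes
    states = 0:100                              # feasible storage levels
    V = zeros(length(states))                   # cost-to-go after stage 3
    for t in 3:-1:1
        V_new = fill(Inf, length(states))
        for (i, s_in) in enumerate(states)
            expected = 0.0
            for d in demands[t]
                best = Inf
                for s_out in states, production in 0:200
                    # Overtime is implied by the inventory balance equation.
                    overtime = s_out + d - s_in - production
                    overtime < 0 && continue
                    cost =
                        100 * production +
                        300 * overtime +
                        50 * s_out +
                        V[s_out+1]
                    best = min(best, cost)
                end
                expected += best / length(demands[t])
            end
            V_new[i] = expected
        end
        V = V_new
    end
    return V[1]  # expected cost starting from zero storage
end

verify_optimum()  # ≈ 62_500.0
```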