Infinite horizon trivial

This example trains a policy for a trivial infinite-horizon problem: a single node that loops back onto itself with probability 0.9 and incurs a cost of 2.0 on every visit, so the optimal discounted cost is 2.0 / (1 - 0.9) = 20.

using SDDP, HiGHS, Test

function infinite_trivial()
    # A single node `:week` that transitions back to itself with
    # probability 0.9, forming an infinite-horizon (cyclic) policy graph.
    graph = SDDP.Graph(
        :root_node,
        [:week],
        [(:root_node => :week, 1.0), (:week => :week, 0.9)],
    )
    model = SDDP.PolicyGraph(
        graph,
        lower_bound = 0.0,
        optimizer = HiGHS.Optimizer,
    ) do subproblem, node
        # A dummy state variable that is unchanged between stages.
        @variable(subproblem, state, SDDP.State, initial_value = 0)
        @constraint(subproblem, state.in == state.out)
        # A constant cost of 2.0 is incurred every time the node is visited.
        @stageobjective(subproblem, 2.0)
    end
    SDDP.train(model; iteration_limit = 100, log_frequency = 10)
    # The optimal policy costs 2.0 / (1 - 0.9) = 20 in expectation.
    @test SDDP.calculate_bound(model) ≈ 2.0 / (1 - 0.9) atol = 1e-3
    return
end

infinite_trivial()
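The bound asserted in the `@test` is the sum of the geometric series ``2.0 \sum_{t \ge 0} 0.9^t``. As a quick sanity check, independent of SDDP.jl, one can sum the truncated series and watch it converge to 20 (the helper `truncated_cost` below is our own, not part of the package):

```julia
# Sum of discounted stage costs 2.0 * 0.9^t for t = 0, ..., T - 1.
# As T grows this converges to 2.0 / (1 - 0.9) = 20, the bound tested above.
truncated_cost(T; cost = 2.0, discount = 0.9) =
    sum(cost * discount^t for t in 0:(T-1))

truncated_cost(10)   # ≈ 13.03
truncated_cost(100)  # ≈ 19.999
```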
------------------------------------------------------------------------------
          SDDP.jl (c) Oscar Dowson and SDDP.jl contributors, 2017-23

Problem
  Nodes           : 1
  State variables : 1
  Scenarios       : Inf
  Existing cuts   : false
  Subproblem structure                      : (min, max)
    Variables                               : (3, 3)
    VariableRef in MOI.GreaterThan{Float64} : (1, 1)
    AffExpr in MOI.EqualTo{Float64}         : (1, 1)
Options
  Solver          : serial mode
  Risk measure    : SDDP.Expectation()
  Sampling scheme : SDDP.InSampleMonteCarlo

Numerical stability report
  Non-zero Matrix range     [1e+00, 1e+00]
  Non-zero Objective range  [1e+00, 1e+00]
  Non-zero Bounds range     [0e+00, 0e+00]
  Non-zero RHS range        [0e+00, 0e+00]
No problems detected

 Iteration    Simulation       Bound         Time (s)    Proc. ID   # Solves
       10    1.400000e+01   1.999401e+01   1.494789e-02          1        162
       20    3.800000e+01   2.000000e+01   3.907990e-02          1        420
       30    2.800000e+01   2.000000e+01   6.074405e-02          1        628
       40    2.000000e+00   2.000000e+01   8.367896e-02          1        778
       50    2.000000e+00   2.000000e+01   2.428410e-01          1       1052
       60    2.000000e+00   2.000000e+01   4.763510e-01          1       1208
       70    1.000000e+01   2.000000e+01   7.111919e-01          1       1336
       80    6.000000e+00   2.000000e+01   1.280383e+00          1       1558
       90    4.000000e+01   2.000000e+01   2.053485e+00          1       1774
      100    1.200000e+01   2.000000e+01   3.098388e+00          1       1986

Terminating training
  Status         : iteration_limit
  Total time (s) : 3.098388e+00
  Total solves   : 1986
  Best bound     :  2.000000e+01
  Simulation CI  :  1.886000e+01 ± 3.436515e+00
------------------------------------------------------------------------------
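One way to read the `# Solves` column: each forward pass revisits `:week` with probability 0.9, so the number of nodes visited per pass is geometrically distributed with mean 1 / (1 - 0.9) = 10. The Monte Carlo sketch below (our own code, not part of the example) estimates that mean, which is consistent with the roughly 20 solves per iteration in the log (a forward and a backward pass each touch about 10 nodes):

```julia
using Random

# Estimate the expected number of nodes visited in one forward pass,
# where each visit continues to the next with probability `p_stay`.
function mean_pass_length(p_stay; samples = 100_000, rng = MersenneTwister(1234))
    total = 0
    for _ in 1:samples
        n = 1                       # the first node is always visited
        while rand(rng) < p_stay    # loop back with probability p_stay
            n += 1
        end
        total += n
    end
    return total / samples
end

mean_pass_length(0.9)  # ≈ 10
```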

This page was generated using Literate.jl.