Example: the milk producer

This tutorial was generated using Literate.jl.

The purpose of this tutorial is to demonstrate how to fit a Markovian policy graph to a univariate stochastic process.

This tutorial uses the following packages:

using SDDP
import HiGHS
import Plots

Background

A company produces milk for sale on a spot market each month. The quantity of milk they produce is uncertain, and so too is the price on the spot market. The company can store unsold milk in a stockpile of dried milk powder.

The spot price is determined by an auction system, and so varies from month to month, but exhibits serial correlation. In each auction, there is sufficient demand that the milk producer finds a buyer for all their milk, regardless of the quantity they supply. Furthermore, the spot price is independent of the milk producer (they are a small player in the market).

The spot price is highly volatile, and is the result of a process that is out of the control of the company. To counteract their price risk, the company engages in a forward contracting programme.

The forward contracting programme is a deal for physical milk four months in the future.

The futures price is the current spot price, plus some forward contango (the buyers gain certainty that they will receive the milk in the future).

In general, the milk company should forward contract, since doing so reduces their price risk; however, they also face production risk. They may therefore forward contract a fixed amount, only to find that they do not produce enough milk to meet that commitment, in which case they are forced to buy additional milk on the spot market.

The goal of the milk company is to choose the extent to which they forward contract in order to maximise (risk-adjusted) revenues, whilst managing their production risk.

A stochastic process for price

It is outside the scope of this tutorial, but assume that we have gone away and analysed historical data to fit a stochastic process to the sequence of monthly auction spot prices.

One plausible model is a multiplicative auto-regressive model of order one, where the white noise term is modeled by a finite distribution of empirical residuals. We can simulate this stochastic process as follows:

function simulator()
    # Empirical residuals of the log-price process, symmetrized around zero
    residuals = [0.0987, 0.199, 0.303, 0.412, 0.530, 0.661, 0.814, 1.010, 1.290]
    residuals = 0.1 * vcat(-residuals, 0.0, residuals)
    scenario = zeros(12)
    # Initial price, long-run mean, and mean-reversion parameter
    y, μ, α = 4.5, 6.0, 0.05
    for t in 1:12
        # Multiplicative AR(1): revert towards μ in log-space, plus a random residual
        y = exp((1 - α) * log(y) + α * log(μ) + rand(residuals))
        # Keep the price within a plausible band
        scenario[t] = clamp(y, 3.0, 9.0)
    end
    return scenario
end
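In symbols, and using the same names as the code, the function implements the mean-reverting AR(1) process

log(y_t) = (1 - α) * log(y_{t-1}) + α * log(μ) + ε_t,

where ε_t is drawn uniformly from the finite set of empirical residuals, and the resulting price is clamped to the interval [3, 9]. Calling the function once produces a single 12-month price scenario: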

simulator()
12-element Vector{Float64}:
 4.877152280220573
 5.135213391719843
 4.966440082948111
 4.964368557724152
 5.284403670584146
 5.043547800736911
 5.18978488562701
 5.332635650594311
 5.364168094666155
 5.7629080138596676
 5.476455437697704
 5.071431594471001

It may be helpful to visualize a number of simulations of the price process:

plot = Plots.plot(
    [simulator() for _ in 1:500];
    color = "gray",
    opacity = 0.2,
    legend = false,
    xlabel = "Month",
    ylabel = "Price [\$/kg]",
    xlims = (1, 12),
    ylims = (3, 9),
)
[Figure: 500 simulated price paths, Month vs Price ($/kg)]

The prices gradually revert to the mean of $6/kg, and there is high volatility.

We can't incorporate this price process directly into SDDP.jl, but we can fit an SDDP.MarkovianGraph directly from the simulator:

graph = SDDP.MarkovianGraph(simulator; budget = 30, scenarios = 10_000);

Here budget is the number of nodes in the policy graph, and scenarios is the number of simulations to use when estimating the transition probabilities.
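Before plotting, here is a quick sketch of how you might inspect the fitted graph. It assumes that graph.nodes is the dictionary of node => outgoing edges that the plotting loop below iterates over, and note that the count may also include the graph's root node:

# Count the nodes in the fitted graph (the root node may be included)
length(graph.nodes)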

The graph contains too many nodes to be shown here, but we can plot it:

for ((t, price), edges) in graph.nodes
    for ((t′, price′), probability) in edges
        Plots.plot!(
            plot,
            [t, t′],
            [price, price′];
            color = "red",
            width = 3 * probability,
        )
    end
end

plot
[Figure: simulated price paths overlaid with the edges of the fitted Markovian graph]

That looks okay. Try changing budget and scenarios to see how different Markovian policy graphs can be created.
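For example, a coarser graph with fewer nodes and fewer training scenarios can be built in exactly the same way (the specific values here are only illustrative):

# A smaller, more coarsely discretized graph
coarse_graph = SDDP.MarkovianGraph(simulator; budget = 10, scenarios = 1_000);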

Model

Now that we have a Markovian graph, we can build the model. See if you can work out how we arrived at this formulation by reading the background description. Do all the variables and constraints make sense?

model = SDDP.PolicyGraph(
    graph;
    sense = :Max,
    upper_bound = 1e2,
    optimizer = HiGHS.Optimizer,
) do sp, node
    # Decompose the node into the month (::Int) and spot price (::Float64)
    t, price = node::Tuple{Int,Float64}
    # Transactions on the futures market cost 0.01
    c_transaction = 0.01
    # It costs the company +50% to buy milk on the spot market and deliver to
    # their customers
    c_buy_premium = 1.5
    # Buyer is willing to pay +5% for certainty
    c_contango = 1.05
    # Distribution of production
    Ω_production = range(0.1, 0.2; length = 5)
    c_max_production = 12 * maximum(Ω_production)
    # x_stock: quantity of milk in stock pile
    @variable(sp, 0 <= x_stock, SDDP.State, initial_value = 0)
    # x_forward[i]: quantity of milk for delivery in i months
    @variable(sp, 0 <= x_forward[1:4], SDDP.State, initial_value = 0)
    # u_spot_sell: quantity of milk to sell on spot market
    @variable(sp, 0 <= u_spot_sell <= c_max_production)
    # u_spot_buy: quantity of milk to buy on spot market
    @variable(sp, 0 <= u_spot_buy <= c_max_production)
    # u_forward_sell: quantity of milk to sell on the futures market
    c_max_futures = t <= 8 ? c_max_production : 0.0
    @variable(sp, 0 <= u_forward_sell <= c_max_futures)
    # ω_production: production random variable
    @variable(sp, ω_production)
    # Forward contracting constraints:
    @constraint(sp, [i in 1:3], x_forward[i].out == x_forward[i+1].in)
    @constraint(sp, x_forward[4].out == u_forward_sell)
    # Stockpile balance constraint
    @constraint(
        sp,
        x_stock.out ==
        x_stock.in + ω_production + u_spot_buy - x_forward[1].in - u_spot_sell
    )
    # The random variables. `price` comes from the Markov node
    #
    # !!! warning
    #     The elements in Ω MUST be a tuple with 1 or 2 values, where the first
    #     value is `price` and the second value is the random variable for the
    #     current node. If the node is deterministic, use Ω = [(price,)].
    Ω = [(price, p) for p in Ω_production]
    SDDP.parameterize(sp, Ω) do ω
        # Fix the ω_production variable
        fix(ω_production, ω[2])
        @stageobjective(
            sp,
            # Sales on spot market
            ω[1] * (u_spot_sell - c_buy_premium * u_spot_buy) +
            # Sales on the futures market
            (ω[1] * c_contango - c_transaction) * u_forward_sell
        )
        return
    end
    return
end
A policy graph with 30 nodes.
 Node indices: (1, 4.592565976671366), ..., (12, 7.739773572877688)

Training a policy

Now that we have a model, we can train a policy. The SDDP.SimulatorSamplingScheme is used in the forward pass. It generates an out-of-sample sequence of prices using simulator and traverses the closest sequence of nodes in the policy graph. When calling SDDP.parameterize for each subproblem, it uses the new out-of-sample price instead of the price associated with the Markov node.

SDDP.train(
    model;
    time_limit = 20,
    risk_measure = 0.5 * SDDP.Expectation() + 0.5 * SDDP.AVaR(0.25),
    sampling_scheme = SDDP.SimulatorSamplingScheme(simulator),
)
-------------------------------------------------------------------
         SDDP.jl (c) Oscar Dowson and contributors, 2017-24
-------------------------------------------------------------------
problem
  nodes           : 30
  state variables : 5
  scenarios       : 8.33984e+11
  existing cuts   : false
options
  solver          : serial mode
  risk measure    : A convex combination of 0.5 * SDDP.Expectation() + 0.5 * SDDP.AVaR(0.25)
  sampling scheme : SDDP.SimulatorSamplingScheme{typeof(Main.simulator)}
subproblem structure
  VariableRef                             : [15, 15]
  AffExpr in MOI.EqualTo{Float64}         : [5, 5]
  VariableRef in MOI.GreaterThan{Float64} : [8, 9]
  VariableRef in MOI.LessThan{Float64}    : [4, 4]
numerical stability report
  matrix range     [1e+00, 1e+00]
  objective range  [1e+00, 1e+01]
  bounds range     [2e+00, 1e+02]
  rhs range        [0e+00, 0e+00]
-------------------------------------------------------------------
 iteration    simulation      bound        time (s)     solves  pid
-------------------------------------------------------------------
         1  -3.831077e+01  5.741167e+01  1.239181e+00       162   1
        61   9.212885e+00  7.875075e+00  2.256369e+00      9882   1
       112   9.432957e+00  7.864819e+00  3.263987e+00     18144   1
       158   9.107643e+00  7.864079e+00  4.330685e+00     25596   1
       200   8.343826e+00  7.863859e+00  5.333460e+00     32400   1
       240   1.028875e+01  7.863791e+00  6.349887e+00     38880   1
       276   9.947658e+00  7.863562e+00  7.365502e+00     44712   1
       309   1.040980e+01  7.863492e+00  8.392151e+00     50058   1
       341   8.007600e+00  7.863477e+00  9.397925e+00     55242   1
       481   1.106888e+01  7.862129e+00  1.443398e+01     77922   1
       597   9.376199e+00  7.862066e+00  1.944670e+01     96714   1
       610   8.061839e+00  7.862066e+00  2.003486e+01     98820   1
-------------------------------------------------------------------
status         : time_limit
total time (s) : 2.003486e+01
total solves   : 98820
best bound     :  7.862066e+00
simulation ci  :  8.917492e+00 ± 2.722373e-01
numeric issues : 0
-------------------------------------------------------------------
Warning

We're intentionally terminating the training early so that the documentation doesn't take too long to build. If you run this example locally, increase the time limit.
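For example, when running locally you might re-train with a larger time limit; the value below is only illustrative, and the other arguments mirror the call above:

SDDP.train(
    model;
    time_limit = 300,
    risk_measure = 0.5 * SDDP.Expectation() + 0.5 * SDDP.AVaR(0.25),
    sampling_scheme = SDDP.SimulatorSamplingScheme(simulator),
)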

Simulating the policy

When simulating the policy, we can also use the SDDP.SimulatorSamplingScheme.

simulations = SDDP.simulate(
    model,
    200,
    Symbol[:x_stock, :u_forward_sell, :u_spot_sell, :u_spot_buy];
    sampling_scheme = SDDP.SimulatorSamplingScheme(simulator),
);

To show how the sampling scheme uses the new out-of-sample price instead of the price associated with the Markov node, compare the index of the Markov state visited in stage 12 of the first simulation:

simulations[1][12][:node_index]
(12, 4.242774096464512)

to the realization of the noise (price, ω) passed to SDDP.parameterize:

simulations[1][12][:noise_term]
(4.467055645405566, 0.1)
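We can also summarise the simulations numerically. The sketch below assumes that each stage of a simulation records the default :stage_objective key; it computes the total revenue of every simulated scenario and reports the range:

# Total objective (revenue) of each simulated scenario
objectives = [sum(stage[:stage_objective] for stage in sim) for sim in simulations]
# Smallest and largest total revenue across the 200 simulations
extrema(objectives)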

Visualizing the policy

Finally, we can plot the policy to gain insight (although note that we terminated the training early, so we should re-train the policy for more iterations before drawing too many conclusions).

plot = Plots.plot(
    SDDP.publication_plot(simulations; title = "x_stock.out") do data
        return data[:x_stock].out
    end,
    SDDP.publication_plot(simulations; title = "u_forward_sell") do data
        return data[:u_forward_sell]
    end,
    SDDP.publication_plot(simulations; title = "u_spot_buy") do data
        return data[:u_spot_buy]
    end,
    SDDP.publication_plot(simulations; title = "u_spot_sell") do data
        return data[:u_spot_sell]
    end;
    layout = (2, 2),
)
[Figure: 2x2 grid of publication plots: x_stock.out, u_forward_sell, u_spot_buy, u_spot_sell]

Next steps

  • Train the policy for longer. What do you observe?
  • Try creating different Markovian graphs. What happens if you add more nodes?
  • Try different risk measures. A sketch of one option follows this list.
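As a sketch of the last suggestion, you could re-train the policy with a different convex combination of the expectation and average value-at-risk; the weights and the AVaR level below are only illustrative:

SDDP.train(
    model;
    time_limit = 20,
    risk_measure = 0.2 * SDDP.Expectation() + 0.8 * SDDP.AVaR(0.1),
    sampling_scheme = SDDP.SimulatorSamplingScheme(simulator),
)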