From 90319bb08fd4f89991b5ba7fa14e3d88ec2b6661 Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Thu, 28 Nov 2024 01:37:10 +0000 Subject: [PATCH] build based on fd01516 --- previews/PR810/.documenter-siteinfo.json | 2 +- previews/PR810/apireference/index.html | 108 +-- previews/PR810/changelog/index.html | 2 +- .../examples/FAST_hydro_thermal/index.html | 8 +- .../FAST_production_management/index.html | 2 +- .../PR810/examples/FAST_quickstart/index.html | 2 +- .../PR810/examples/Hydro_thermal/index.html | 16 +- previews/PR810/examples/SDDP.log | 808 +++++++++--------- previews/PR810/examples/SDDP_0.0.log | 6 +- previews/PR810/examples/SDDP_0.0625.log | 6 +- previews/PR810/examples/SDDP_0.125.log | 6 +- previews/PR810/examples/SDDP_0.25.log | 6 +- previews/PR810/examples/SDDP_0.375.log | 6 +- previews/PR810/examples/SDDP_0.5.log | 6 +- previews/PR810/examples/SDDP_0.625.log | 6 +- previews/PR810/examples/SDDP_0.75.log | 6 +- previews/PR810/examples/SDDP_0.875.log | 6 +- previews/PR810/examples/SDDP_1.0.log | 6 +- .../index.html | 24 +- .../index.html | 18 +- .../index.html | 14 +- .../index.html | 14 +- .../agriculture_mccardle_farm/index.html | 2 +- .../examples/air_conditioning/index.html | 14 +- .../air_conditioning_forward/index.html | 2 +- previews/PR810/examples/all_blacks/index.html | 8 +- .../asset_management_simple/index.html | 20 +- .../asset_management_stagewise/index.html | 24 +- previews/PR810/examples/belief/index.html | 24 +- .../examples/biobjective_hydro/index.html | 62 +- .../examples/booking_management/index.html | 2 +- .../examples/generation_expansion/index.html | 30 +- .../PR810/examples/hydro_valley/index.html | 2 +- .../infinite_horizon_hydro_thermal/index.html | 20 +- .../infinite_horizon_trivial/index.html | 12 +- .../examples/no_strong_duality/index.html | 8 +- .../objective_state_newsvendor/index.html | 301 ++++--- .../examples/sldp_example_one/index.html | 25 +- .../examples/sldp_example_two/index.html | 38 +- .../examples/stochastic_all_blacks/index.html | 10 +- .../examples/the_farmers_problem/index.html | 10 +- .../examples/vehicle_location/index.html | 2 +- previews/PR810/explanation/risk/index.html | 14 +- .../PR810/explanation/theory_intro/index.html | 542 ++++++------ .../access_previous_variables/index.html | 2 +- .../index.html | 2 +- .../guides/add_a_risk_measure/index.html | 16 +- .../PR810/guides/add_integrality/index.html | 2 +- .../add_multidimensional_noise/index.html | 2 +- .../index.html | 2 +- .../guides/choose_a_stopping_rule/index.html | 2 +- .../guides/create_a_belief_state/index.html | 2 +- .../create_a_general_policy_graph/index.html | 2 +- .../PR810/guides/debug_a_model/index.html | 2 +- .../index.html | 2 +- .../index.html | 2 +- previews/PR810/index.html | 2 +- previews/PR810/release_notes/index.html | 2 +- previews/PR810/tutorial/SDDP.log | 394 ++++----- previews/PR810/tutorial/arma/index.html | 58 +- previews/PR810/tutorial/convex.cuts.json | 2 +- .../PR810/tutorial/decision_hazard/index.html | 2 +- .../example_milk_producer/1f5e3ff8.svg | 544 ------------ .../example_milk_producer/904a9542.svg | 544 ++++++++++++ .../{7850db0e.svg => db517f47.svg} | 232 ++--- .../example_milk_producer/ea7e99eb.svg | 625 -------------- .../example_milk_producer/ff3c9001.svg | 625 ++++++++++++++ .../tutorial/example_milk_producer/index.html | 68 +- .../tutorial/example_newsvendor/038b0b3e.svg | 37 - .../tutorial/example_newsvendor/41e9a93f.svg | 37 + .../tutorial/example_newsvendor/826cb904.svg | 88 ++ .../tutorial/example_newsvendor/955999b3.svg | 96 
--- .../tutorial/example_newsvendor/index.html | 188 ++-- .../{3e8ca7db.svg => 228c2edf.svg} | 64 +- .../{2f8469c5.svg => 2ce02528.svg} | 64 +- .../{ec3b2cff.svg => 679b6d0a.svg} | 268 +++--- .../{024c3bed.svg => 70fda81e.svg} | 76 +- .../{e7d306a2.svg => 7cf51f63.svg} | 172 ++-- .../tutorial/example_reservoir/8aa4be7d.svg | 86 ++ .../{8d414c17.svg => 9d19c527.svg} | 76 +- .../tutorial/example_reservoir/ab1d655a.svg | 86 -- .../tutorial/example_reservoir/index.html | 95 +- .../PR810/tutorial/first_steps/index.html | 34 +- .../inventory/{64851468.svg => 60ce68dd.svg} | 86 +- .../inventory/{e6aeb0c9.svg => e23b5f9f.svg} | 72 +- previews/PR810/tutorial/inventory/index.html | 70 +- .../tutorial/markov_uncertainty/index.html | 10 +- previews/PR810/tutorial/mdps/index.html | 18 +- .../tutorial/objective_states/index.html | 40 +- .../tutorial/objective_uncertainty/index.html | 12 +- previews/PR810/tutorial/pglib_opf/index.html | 39 +- .../plotting/{0a447071.svg => 429b743e.svg} | 130 +-- previews/PR810/tutorial/plotting/index.html | 10 +- previews/PR810/tutorial/spaghetti_plot.html | 2 +- previews/PR810/tutorial/warnings/index.html | 14 +- 95 files changed, 3702 insertions(+), 3654 deletions(-) delete mode 100644 previews/PR810/tutorial/example_milk_producer/1f5e3ff8.svg create mode 100644 previews/PR810/tutorial/example_milk_producer/904a9542.svg rename previews/PR810/tutorial/example_milk_producer/{7850db0e.svg => db517f47.svg} (60%) delete mode 100644 previews/PR810/tutorial/example_milk_producer/ea7e99eb.svg create mode 100644 previews/PR810/tutorial/example_milk_producer/ff3c9001.svg delete mode 100644 previews/PR810/tutorial/example_newsvendor/038b0b3e.svg create mode 100644 previews/PR810/tutorial/example_newsvendor/41e9a93f.svg create mode 100644 previews/PR810/tutorial/example_newsvendor/826cb904.svg delete mode 100644 previews/PR810/tutorial/example_newsvendor/955999b3.svg rename previews/PR810/tutorial/example_reservoir/{3e8ca7db.svg => 228c2edf.svg} (85%) rename previews/PR810/tutorial/example_reservoir/{2f8469c5.svg => 2ce02528.svg} (85%) rename previews/PR810/tutorial/example_reservoir/{ec3b2cff.svg => 679b6d0a.svg} (76%) rename previews/PR810/tutorial/example_reservoir/{024c3bed.svg => 70fda81e.svg} (85%) rename previews/PR810/tutorial/example_reservoir/{e7d306a2.svg => 7cf51f63.svg} (84%) create mode 100644 previews/PR810/tutorial/example_reservoir/8aa4be7d.svg rename previews/PR810/tutorial/example_reservoir/{8d414c17.svg => 9d19c527.svg} (85%) delete mode 100644 previews/PR810/tutorial/example_reservoir/ab1d655a.svg rename previews/PR810/tutorial/inventory/{64851468.svg => 60ce68dd.svg} (84%) rename previews/PR810/tutorial/inventory/{e6aeb0c9.svg => e23b5f9f.svg} (84%) rename previews/PR810/tutorial/plotting/{0a447071.svg => 429b743e.svg} (84%) diff --git a/previews/PR810/.documenter-siteinfo.json b/previews/PR810/.documenter-siteinfo.json index 77be54e19..9d3ac3bf9 100644 --- a/previews/PR810/.documenter-siteinfo.json +++ b/previews/PR810/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.11.1","generation_timestamp":"2024-11-28T01:14:04","documenter_version":"1.8.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.11.1","generation_timestamp":"2024-11-28T01:36:57","documenter_version":"1.8.0"}} \ No newline at end of file diff --git a/previews/PR810/apireference/index.html b/previews/PR810/apireference/index.html index d8b50e886..9724fe7b5 100644 --- a/previews/PR810/apireference/index.html +++ b/previews/PR810/apireference/index.html @@ 
-25,7 +25,7 @@ Nodes {} Arcs - {}source
SDDP.add_nodeFunction
add_node(graph::Graph{T}, node::T) where {T}

Add a node to the graph graph.

Examples

julia> graph = SDDP.Graph(:root);
+ {}
source
SDDP.add_nodeFunction
add_node(graph::Graph{T}, node::T) where {T}

Add a node to the graph graph.

Examples

julia> graph = SDDP.Graph(:root);
 
 julia> SDDP.add_node(graph, :A)
 
@@ -45,7 +45,7 @@
 Nodes
  2
 Arcs
- {}
source
SDDP.add_edgeFunction
add_edge(graph::Graph{T}, edge::Pair{T, T}, probability::Float64) where {T}

Add an edge to the graph graph.

Examples

julia> graph = SDDP.Graph(0);
+ {}
source
SDDP.add_edgeFunction
add_edge(graph::Graph{T}, edge::Pair{T, T}, probability::Float64) where {T}

Add an edge to the graph graph.

Examples

julia> graph = SDDP.Graph(0);
 
 julia> SDDP.add_node(graph, 1)
 
@@ -69,7 +69,7 @@
 Nodes
  A
 Arcs
- root => A w.p. 1.0
source
SDDP.add_ambiguity_setFunction
add_ambiguity_set(
+ root => A w.p. 1.0
source
SDDP.add_ambiguity_setFunction
add_ambiguity_set(
     graph::Graph{T},
     set::Vector{T},
     lipschitz::Vector{Float64},
@@ -102,7 +102,7 @@
  2 => 3 w.p. 1.0
 Partitions
  {1, 2}
- {3}
source
add_ambiguity_set(graph::Graph{T}, set::Vector{T}, lipschitz::Float64)

Add set to the belief partition of graph.

lipschitz is a Lipschitz constant for each node in set. The Lipschitz constant is the maximum slope of the cost-to-go function with respect to the belief state associated with each node at any point in the state-space.

Examples

julia> graph = SDDP.LinearGraph(3);
+ {3}
source
add_ambiguity_set(graph::Graph{T}, set::Vector{T}, lipschitz::Float64)

Add set to the belief partition of graph.

lipschitz is a Lipschitz constant for each node in set. The Lipschitz constant is the maximum slope of the cost-to-go function with respect to the belief state associated with each node at any point in the state-space.

Examples

julia> graph = SDDP.LinearGraph(3);
 
 julia> SDDP.add_ambiguity_set(graph, [1, 2], 1e3)
 
@@ -121,7 +121,7 @@
  2 => 3 w.p. 1.0
 Partitions
  {1, 2}
- {3}
source
SDDP.LinearGraphFunction
LinearGraph(stages::Int)

Create a linear graph with stages number of nodes.

Examples

julia> graph = SDDP.LinearGraph(3)
+ {3}
source
SDDP.LinearGraphFunction
LinearGraph(stages::Int)

Create a linear graph with stages number of nodes.

Examples

julia> graph = SDDP.LinearGraph(3)
 Root
  0
 Nodes
@@ -131,7 +131,7 @@
 Arcs
  0 => 1 w.p. 1.0
  1 => 2 w.p. 1.0
- 2 => 3 w.p. 1.0
source
SDDP.MarkovianGraphFunction
MarkovianGraph(transition_matrices::Vector{Matrix{Float64}})

Construct a Markovian graph from the vector of transition matrices.

transition_matrices[t][i, j] gives the probability of transitioning from Markov state i in stage t - 1 to Markov state j in stage t.

The dimension of the first transition matrix should be (1, N), and transition_matrices[1][1, i] is the probability of transitioning from the root node to the Markov state i.

Examples

julia> graph = SDDP.MarkovianGraph([ones(1, 1), [0.5 0.5], [0.8 0.2; 0.2 0.8]])
+ 2 => 3 w.p. 1.0
source
SDDP.MarkovianGraphFunction
MarkovianGraph(transition_matrices::Vector{Matrix{Float64}})

Construct a Markovian graph from the vector of transition matrices.

transition_matrices[t][i, j] gives the probability of transitioning from Markov state i in stage t - 1 to Markov state j in stage t.

The dimension of the first transition matrix should be (1, N), and transition_matrices[1][1, i] is the probability of transitioning from the root node to the Markov state i.

Examples

julia> graph = SDDP.MarkovianGraph([ones(1, 1), [0.5 0.5], [0.8 0.2; 0.2 0.8]])
 Root
  (0, 1)
 Nodes
@@ -147,7 +147,7 @@
  (2, 1) => (3, 1) w.p. 0.8
  (2, 1) => (3, 2) w.p. 0.2
  (2, 2) => (3, 1) w.p. 0.2
- (2, 2) => (3, 2) w.p. 0.8
source
MarkovianGraph(;
+ (2, 2) => (3, 2) w.p. 0.8
source
MarkovianGraph(;
     stages::Int,
     transition_matrix::Matrix{Float64},
     root_node_transition::Vector{Float64},
@@ -175,11 +175,11 @@
  (2, 1) => (3, 1) w.p. 0.8
  (2, 1) => (3, 2) w.p. 0.2
  (2, 2) => (3, 1) w.p. 0.2
- (2, 2) => (3, 2) w.p. 0.8
source
MarkovianGraph(
+ (2, 2) => (3, 2) w.p. 0.8
source
MarkovianGraph(
     simulator::Function;
     budget::Union{Int,Vector{Int}},
     scenarios::Int = 1000,
-)

Construct a Markovian graph by fitting a Markov chain to scenarios generated by simulator().

budget is the total number of nodes in the resulting Markov chain. This can either be specified as a single Int, in which case we will attempt to intelligently distribute the nodes between stages. Alternatively, budget can be a Vector{Int}, which details the number of Markov states to have in each stage.

source
SDDP.UnicyclicGraphFunction
UnicyclicGraph(discount_factor::Float64; num_nodes::Int = 1)

Construct a graph composed of num_nodes nodes that form a single cycle, with a probability of discount_factor of continuing the cycle.

Examples

julia> graph = SDDP.UnicyclicGraph(0.9; num_nodes = 2)
+)

Construct a Markovian graph by fitting a Markov chain to scenarios generated by simulator().

budget is the total number of nodes in the resulting Markov chain. This can either be specified as a single Int, in which case we will attempt to intelligently distribute the nodes between stages. Alternatively, budget can be a Vector{Int}, which details the number of Markov states to have in each stage.

source
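As a usage sketch (the simulator below is a stand-in for a user-supplied scenario generator, not part of the API):

simulator() = cumsum(0.1 .* randn(12) .+ 1.0)  # one 12-stage trajectory as a Vector{Float64}

graph = SDDP.MarkovianGraph(simulator; budget = 30, scenarios = 100)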
SDDP.UnicyclicGraphFunction
UnicyclicGraph(discount_factor::Float64; num_nodes::Int = 1)

Construct a graph composed of num_nodes nodes that form a single cycle, with a probability of discount_factor of continuing the cycle.

Examples

julia> graph = SDDP.UnicyclicGraph(0.9; num_nodes = 2)
 Root
  0
 Nodes
@@ -188,7 +188,7 @@
 Arcs
  0 => 1 w.p. 1.0
  1 => 2 w.p. 1.0
- 2 => 1 w.p. 0.9
source
SDDP.LinearPolicyGraphFunction
LinearPolicyGraph(builder::Function; stages::Int, kwargs...)

Create a linear policy graph with stages number of stages.

Keyword arguments

  • stages: the number of stages in the graph

  • kwargs: other keyword arguments are passed to SDDP.PolicyGraph.

Examples

julia> SDDP.LinearPolicyGraph(; stages = 2, lower_bound = 0.0) do sp, t
+ 2 => 1 w.p. 0.9
source
SDDP.LinearPolicyGraphFunction
LinearPolicyGraph(builder::Function; stages::Int, kwargs...)

Create a linear policy graph with stages number of stages.

Keyword arguments

  • stages: the number of stages in the graph

  • kwargs: other keyword arguments are passed to SDDP.PolicyGraph.

Examples

julia> SDDP.LinearPolicyGraph(; stages = 2, lower_bound = 0.0) do sp, t
     # ... build model ...
 end
 A policy graph with 2 nodes.
@@ -198,7 +198,7 @@
     # ... build model ...
 end
 A policy graph with 2 nodes.
-Node indices: 1, 2
source
SDDP.MarkovianPolicyGraphFunction
MarkovianPolicyGraph(
+Node indices: 1, 2
source
SDDP.MarkovianPolicyGraphFunction
MarkovianPolicyGraph(
     builder::Function;
     transition_matrices::Vector{Array{Float64,2}},
     kwargs...
@@ -215,7 +215,7 @@
     # ... build model ...
 end
 A policy graph with 5 nodes.
- Node indices: (1, 1), (2, 1), (2, 2), (3, 1), (3, 2)
source
SDDP.PolicyGraphType
PolicyGraph(
+ Node indices: (1, 1), (2, 1), (2, 2), (3, 1), (3, 2)
source
SDDP.PolicyGraphType
PolicyGraph(
     builder::Function,
     graph::Graph{T};
     sense::Symbol = :Min,
@@ -237,28 +237,28 @@
     optimizer = HiGHS.Optimizer,
 ) do subproblem, index
     # ... subproblem definitions ...
-end
source

Subproblem definition

SDDP.@stageobjectiveMacro
@stageobjective(subproblem, expr)

Set the stage-objective of subproblem to expr.

Examples

@stageobjective(subproblem, 2x + y)
source
SDDP.parameterizeFunction
parameterize(
+end
source

Subproblem definition

SDDP.@stageobjectiveMacro
@stageobjective(subproblem, expr)

Set the stage-objective of subproblem to expr.

Examples

@stageobjective(subproblem, 2x + y)
source
SDDP.parameterizeFunction
parameterize(
     modify::Function,
     subproblem::JuMP.Model,
     realizations::Vector{T},
     probability::Vector{Float64} = fill(1.0 / length(realizations))
 ) where {T}

Add a parameterization function modify to subproblem. The modify function takes one argument and modifies subproblem based on the realization of the noise sampled from realizations with corresponding probabilities probability.

In order to conduct an out-of-sample simulation, modify should accept arguments that are not in realizations (but still of type T).

Examples

SDDP.parameterize(subproblem, [1, 2, 3], [0.4, 0.3, 0.3]) do ω
     JuMP.set_upper_bound(x, ω)
-end
source
parameterize(node::Node, noise)

Parameterize node node with the noise noise.

source
SDDP.add_objective_stateFunction
add_objective_state(update::Function, subproblem::JuMP.Model; kwargs...)

Add an objective state variable to subproblem.

Required kwargs are:

  • initial_value: The initial value of the objective state variable at the root node.
  • lipschitz: The lipschitz constant of the objective state variable.

Setting a tight value for the lipschitz constant can significantly improve the speed of convergence.

Optional kwargs are:

  • lower_bound: A valid lower bound for the objective state variable. Can be -Inf.
  • upper_bound: A valid upper bound for the objective state variable. Can be +Inf.

Setting tight values for these optional variables can significantly improve the speed of convergence.

If the objective state is N-dimensional, each keyword argument must be an NTuple{N,Float64}. For example, initial_value = (0.0, 1.0).

source
SDDP.objective_stateFunction
objective_state(subproblem::JuMP.Model)

Return the current objective state of the problem.

Can only be called from SDDP.parameterize.

source
SDDP.NoiseType
Noise(support, probability)

An atom of a discrete random variable at the point of support support and associated probability probability.

source

Training the policy

SDDP.numerical_stability_reportFunction
numerical_stability_report(
+end
source
parameterize(node::Node, noise)

Parameterize node node with the noise noise.

source
SDDP.add_objective_stateFunction
add_objective_state(update::Function, subproblem::JuMP.Model; kwargs...)

Add an objective state variable to subproblem.

Required kwargs are:

  • initial_value: The initial value of the objective state variable at the root node.
  • lipschitz: The lipschitz constant of the objective state variable.

Setting a tight value for the lipschitz constant can significantly improve the speed of convergence.

Optional kwargs are:

  • lower_bound: A valid lower bound for the objective state variable. Can be -Inf.
  • upper_bound: A valid upper bound for the objective state variable. Can be +Inf.

Setting tight values for these optional variables can significantly improve the speed of convergence.

If the objective state is N-dimensional, each keyword argument must be an NTuple{N,Float64}. For example, initial_value = (0.0, 1.0).

source
SDDP.objective_stateFunction
objective_state(subproblem::JuMP.Model)

Return the current objective state of the problem.

Can only be called from SDDP.parameterize.

source
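As a sketch of how add_objective_state and objective_state fit together inside a subproblem (Ω, P, and thermal_generation are placeholders for the user's own noise terms and variables, and the numeric values are illustrative):

SDDP.add_objective_state(
    subproblem;
    initial_value = 50.0,
    lipschitz = 10_000.0,
    lower_bound = 0.0,
    upper_bound = 200.0,
) do fuel_cost, ω
    return ω.fuel_multiplier * fuel_cost  # update rule for the objective state
end

SDDP.parameterize(subproblem, Ω, P) do ω
    fuel_cost = SDDP.objective_state(subproblem)
    @stageobjective(subproblem, fuel_cost * thermal_generation)
    return
end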
SDDP.NoiseType
Noise(support, probability)

An atom of a discrete random variable at the point of support support and associated probability probability.

source

Training the policy

SDDP.numerical_stability_reportFunction
numerical_stability_report(
     [io::IO = stdout,]
     model::PolicyGraph;
     by_node::Bool = false,
     print::Bool = true,
     warn::Bool = true,
-)

Print a report identifying possible numeric stability issues.

Keyword arguments

  • If by_node, print a report for each node in the graph.

  • If print, print to io.

  • If warn, warn if the coefficients may cause numerical issues.

source
SDDP.trainFunction
SDDP.train(model::PolicyGraph; kwargs...)

Train the policy for model.

Keyword arguments

  • iteration_limit::Int: number of iterations to conduct before termination.

  • time_limit::Float64: number of seconds to train before termination.

  • stopping_rules: a vector of SDDP.AbstractStoppingRules. Defaults to SimulationStoppingRule.

  • print_level::Int: control the level of printing to the screen. Defaults to 1. Set to 0 to disable all printing.

  • log_file::String: filepath at which to write a log of the training progress. Defaults to SDDP.log.

  • log_frequency::Int: control the frequency with which the logging is outputted (iterations/log). It must be at least 1. Defaults to 1.

  • log_every_seconds::Float64: control the frequency with which the logging is outputted (seconds/log). Defaults to 0.0.

  • log_every_iteration::Bool: overrides log_frequency and log_every_seconds to force every iteration to be printed. Defaults to false.

  • run_numerical_stability_report::Bool: generate (and print) a numerical stability report prior to solve. Defaults to true.

  • refine_at_similar_nodes::Bool: if SDDP can detect that two nodes have the same children, it can cheaply add a cut discovered at one to the other. In almost all cases this should be set to true.

  • cut_deletion_minimum::Int: the minimum number of cuts to cache before deleting cuts from the subproblem. The impact on performance is solver specific; however, smaller values result in smaller subproblems (and therefore quicker solves), at the expense of more time spent performing cut selection.

  • risk_measure: the risk measure to use at each node. Defaults to Expectation.

  • root_node_risk_measure::AbstractRiskMeasure: the risk measure to use at the root node when computing the Bound column. Note that the choice of this option does not change the primal policy, and it applies only if the transition from the root node to the first stage is stochastic. Defaults to Expectation.

  • sampling_scheme: a sampling scheme to use on the forward pass of the algorithm. Defaults to InSampleMonteCarlo.

  • backward_sampling_scheme: a backward pass sampling scheme to use on the backward pass of the algorithm. Defaults to CompleteSampler.

  • cut_type: choose between SDDP.SINGLE_CUT and SDDP.MULTI_CUT versions of SDDP.

  • dashboard::Bool: open a visualization of the training over time. Defaults to false.

  • parallel_scheme::AbstractParallelScheme: specify a scheme for solving in parallel. Defaults to Threaded().

  • forward_pass::AbstractForwardPass: specify a scheme to use for the forward passes.

  • forward_pass_resampling_probability::Union{Nothing,Float64}: set to a value in (0, 1) to enable RiskAdjustedForwardPass. Defaults to nothing (disabled).

  • add_to_existing_cuts::Bool: set to true to allow training a model that was previously trained. Defaults to false.

  • duality_handler::AbstractDualityHandler: specify a duality handler to use when creating cuts.

  • post_iteration_callback::Function: a callback with the signature post_iteration_callback(::IterationResult) that is evaluated after each iteration of the algorithm.

There is also a special option for infinite horizon problems

  • cycle_discretization_delta: the maximum distance between states allowed on the forward pass. This is for advanced users only and needs to be used in conjunction with a different sampling_scheme.
source
SDDP.termination_statusFunction
termination_status(model::PolicyGraph)::Symbol

Query the reason why the training stopped.

source
SDDP.write_cuts_to_fileFunction
write_cuts_to_file(
+)

Print a report identifying possible numeric stability issues.

Keyword arguments

  • If by_node, print a report for each node in the graph.

  • If print, print to io.

  • If warn, warn if the coefficients may cause numerical issues.

source
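For example, assuming model is an SDDP.PolicyGraph:

SDDP.numerical_stability_report(model)

SDDP.numerical_stability_report(model; by_node = true)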
SDDP.trainFunction
SDDP.train(model::PolicyGraph; kwargs...)

Train the policy for model.

Keyword arguments

  • iteration_limit::Int: number of iterations to conduct before termination.

  • time_limit::Float64: number of seconds to train before termination.

  • stopping_rules: a vector of SDDP.AbstractStoppingRules. Defaults to SimulationStoppingRule.

  • print_level::Int: control the level of printing to the screen. Defaults to 1. Set to 0 to disable all printing.

  • log_file::String: filepath at which to write a log of the training progress. Defaults to SDDP.log.

  • log_frequency::Int: control the frequency with which the logging is outputted (iterations/log). It must be at least 1. Defaults to 1.

  • log_every_seconds::Float64: control the frequency with which the logging is outputted (seconds/log). Defaults to 0.0.

  • log_every_iteration::Bool: overrides log_frequency and log_every_seconds to force every iteration to be printed. Defaults to false.

  • run_numerical_stability_report::Bool: generate (and print) a numerical stability report prior to solve. Defaults to true.

  • refine_at_similar_nodes::Bool: if SDDP can detect that two nodes have the same children, it can cheaply add a cut discovered at one to the other. In almost all cases this should be set to true.

  • cut_deletion_minimum::Int: the minimum number of cuts to cache before deleting cuts from the subproblem. The impact on performance is solver specific; however, smaller values result in smaller subproblems (and therefore quicker solves), at the expense of more time spent performing cut selection.

  • risk_measure: the risk measure to use at each node. Defaults to Expectation.

  • root_node_risk_measure::AbstractRiskMeasure: the risk measure to use at the root node when computing the Bound column. Note that the choice of this option does not change the primal policy, and it applies only if the transition from the root node to the first stage is stochastic. Defaults to Expectation.

  • sampling_scheme: a sampling scheme to use on the forward pass of the algorithm. Defaults to InSampleMonteCarlo.

  • backward_sampling_scheme: a backward pass sampling scheme to use on the backward pass of the algorithm. Defaults to CompleteSampler.

  • cut_type: choose between SDDP.SINGLE_CUT and SDDP.MULTI_CUT versions of SDDP.

  • dashboard::Bool: open a visualization of the training over time. Defaults to false.

  • parallel_scheme::AbstractParallelScheme: specify a scheme for solving in parallel. Defaults to Threaded().

  • forward_pass::AbstractForwardPass: specify a scheme to use for the forward passes.

  • forward_pass_resampling_probability::Union{Nothing,Float64}: set to a value in (0, 1) to enable RiskAdjustedForwardPass. Defaults to nothing (disabled).

  • add_to_existing_cuts::Bool: set to true to allow training a model that was previously trained. Defaults to false.

  • duality_handler::AbstractDualityHandler: specify a duality handler to use when creating cuts.

  • post_iteration_callback::Function: a callback with the signature post_iteration_callback(::IterationResult) that is evaluated after each iteration of the algorithm.

There is also a special option for infinite horizon problems

  • cycle_discretization_delta: the maximum distance between states allowed on the forward pass. This is for advanced users only and needs to be used in conjunction with a different sampling_scheme.
source
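A usage sketch combining several of the keyword arguments above (the particular limits and rules are arbitrary choices, not defaults):

SDDP.train(
    model;
    iteration_limit = 100,
    time_limit = 60.0,
    stopping_rules = [SDDP.BoundStalling(10, 1e-4)],
    log_every_iteration = true,
)

SDDP.termination_status(model)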
SDDP.termination_statusFunction
termination_status(model::PolicyGraph)::Symbol

Query the reason why the training stopped.

source
SDDP.write_cuts_to_fileFunction
write_cuts_to_file(
     model::PolicyGraph{T},
     filename::String;
     kwargs...,
-) where {T}

Write the cuts that form the policy in model to filename in JSON format.

Keyword arguments

  • node_name_parser is a function which converts the name of each node into a string representation. It has the signature: node_name_parser(::T)::String.

  • write_only_selected_cuts: write only the selected cuts to the JSON file. Defaults to false.

See also SDDP.read_cuts_from_file.

source
SDDP.read_cuts_from_fileFunction
read_cuts_from_file(
+) where {T}

Write the cuts that form the policy in model to filename in JSON format.

Keyword arguments

  • node_name_parser is a function which converts the name of each node into a string representation. It has the signature: node_name_parser(::T)::String.

  • write_only_selected_cuts: write only the selected cuts to the JSON file. Defaults to false.

See also SDDP.read_cuts_from_file.

source
SDDP.read_cuts_from_fileFunction
read_cuts_from_file(
     model::PolicyGraph{T},
     filename::String;
     kwargs...,
-) where {T}

Read cuts (saved using SDDP.write_cuts_to_file) from filename into model.

Since T can be an arbitrary Julia type, the conversion to JSON is lossy. When reading, read_cuts_from_file only supports T=Int, T=NTuple{N, Int}, and T=Symbol. If you have manually created a policy graph with a different node type T, provide a function node_name_parser with the signature described in the keyword arguments below.

Keyword arguments

  • node_name_parser(T, name::String)::T where {T} that returns the name of each node given the string name name. If node_name_parser returns nothing, those cuts are skipped.

  • cut_selection::Bool: whether to run the cut selection algorithm when adding the cuts to the model.

See also SDDP.write_cuts_to_file.

source
SDDP.write_log_to_csvFunction
write_log_to_csv(model::PolicyGraph, filename::String)

Write the log of the most recent training to a csv for post-analysis.

Assumes that the model has been trained via SDDP.train.

source
SDDP.set_numerical_difficulty_callbackFunction
set_numerical_difficulty_callback(
+) where {T}

Read cuts (saved using SDDP.write_cuts_to_file) from filename into model.

Since T can be an arbitrary Julia type, the conversion to JSON is lossy. When reading, read_cuts_from_file only supports T=Int, T=NTuple{N, Int}, and T=Symbol. If you have manually created a policy graph with a different node type T, provide a function node_name_parser with the signature described in the keyword arguments below.

Keyword arguments

  • node_name_parser(T, name::String)::T where {T} that returns the name of each node given the string name name. If node_name_parser returns nothing, those cuts are skipped.

  • cut_selection::Bool: whether to run the cut selection algorithm when adding the cuts to the model.

See also SDDP.write_cuts_to_file.

source
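A round-trip sketch (the filename is arbitrary, and new_model stands for an untrained copy of the same policy graph):

SDDP.write_cuts_to_file(model, "cuts.json")

SDDP.read_cuts_from_file(new_model, "cuts.json")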
SDDP.write_log_to_csvFunction
write_log_to_csv(model::PolicyGraph, filename::String)

Write the log of the most recent training to a csv for post-analysis.

Assumes that the model has been trained via SDDP.train.

source
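For example (the filename is arbitrary):

SDDP.train(model; iteration_limit = 10)

SDDP.write_log_to_csv(model, "log.csv")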
SDDP.set_numerical_difficulty_callbackFunction
set_numerical_difficulty_callback(
     model::PolicyGraph,
     callback::Function,
 )

Set a callback function callback(::PolicyGraph, ::Node; require_dual::Bool) that is run when the optimizer terminates without finding a primal solution (and dual solution if require_dual is true).

Default callback

The default callback is a small variation of:

function callback(::PolicyGraph, node::Node; require_dual::Bool)
@@ -274,29 +274,29 @@
     end
     return
 end
-SDDP.set_numerical_difficulty_callback(model, callback)
source

Stopping rules

SDDP.AbstractStoppingRuleType
AbstractStoppingRule

The abstract type for the stopping-rule interface.

You need to define the following methods: SDDP.stopping_rule_status and SDDP.convergence_test.

source
SDDP.stopping_rule_statusFunction
stopping_rule_status(::AbstractStoppingRule)::Symbol

Return a symbol describing the stopping rule.

source
SDDP.convergence_testFunction
convergence_test(
+SDDP.set_numerical_difficulty_callback(model, callback)
source

Stopping rules

SDDP.AbstractStoppingRuleType
AbstractStoppingRule

The abstract type for the stopping-rule interface.

You need to define the following methods: SDDP.stopping_rule_status and SDDP.convergence_test.

source
SDDP.stopping_rule_statusFunction
stopping_rule_status(::AbstractStoppingRule)::Symbol

Return a symbol describing the stopping rule.

source
SDDP.convergence_testFunction
convergence_test(
     model::PolicyGraph,
     log::Vector{Log},
     ::AbstractStoppingRule,
-)::Bool

Return a Bool indicating if the algorithm should terminate the training.

source
SDDP.IterationLimitType
IterationLimit(limit::Int)

Terminate the algorithm after limit number of iterations.

source
SDDP.TimeLimitType
TimeLimit(limit::Float64)

Terminate the algorithm after limit seconds of computation.

source
SDDP.StatisticalType
Statistical(;
+)::Bool

Return a Bool indicating if the algorithm should terminate the training.

source
SDDP.IterationLimitType
IterationLimit(limit::Int)

Terminate the algorithm after limit number of iterations.

source
SDDP.TimeLimitType
TimeLimit(limit::Float64)

Terminate the algorithm after limit seconds of computation.

source
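For example, to stop after 100 iterations or 60 seconds of training, whichever happens first (training stops as soon as any rule in stopping_rules is satisfied):

SDDP.train(model; stopping_rules = [SDDP.IterationLimit(100), SDDP.TimeLimit(60.0)])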
SDDP.StatisticalType
Statistical(;
     num_replications::Int,
     iteration_period::Int = 1,
     z_score::Float64 = 1.96,
     verbose::Bool = true,
     disable_warning::Bool = false,
-)

Perform an in-sample Monte Carlo simulation of the policy with num_replications replications every iteration_period iterations and terminate if the deterministic bound (lower if minimizing) falls into the confidence interval for the mean of the simulated cost.

If verbose = true, print the confidence interval.

If disable_warning = true, disable the warning telling you not to use this stopping rule (see below).

Why this stopping rule is not good

This stopping rule is one of the most common stopping rules seen in the literature. Don't follow the crowd. It is a poor choice for your model, and should be rarely used. Instead, you should use the default stopping rule, or use a fixed limit like a time or iteration limit.

To understand why this stopping rule is a bad idea, assume we have conducted num_replications simulations and the objectives are in a vector objectives::Vector{Float64}.

Our mean is μ = mean(objectives) and the half-width of the confidence interval is w = z_score * std(objectives) / sqrt(num_replications).

Many papers suggest terminating the algorithm once the deterministic bound (lower if minimizing, upper if maximizing) is contained within the confidence interval. That is, if μ - w <= bound <= μ + w. Even worse, some papers define an optimization gap of (μ + w) / bound (if minimizing) or (μ - w) / bound (if maximizing), and they terminate once the gap is less than a value like 1%.

Both of these approaches are misleading, and more often than not, they will result in terminating with a sub-optimal policy that performs worse than expected. There are two main reasons for this:

  1. The half-width depends on the number of replications. To reduce the computational cost, users are often tempted to choose a small number of replications. This increases the half-width and makes it more likely that the algorithm will stop early. But if we choose a large number of replications, then the computational cost is high, and we would have been better off to run a fixed number of iterations and use that computational time to run extra training iterations.
  2. The confidence interval assumes that the simulated values are normally distributed. In infinite horizon models, this is almost never the case. The distribution is usually closer to exponential or log-normal.

There is a third, more technical reason which relates to the conditional dependence of constructing multiple confidence intervals.

The default value of z_score = 1.96 corresponds to a 95% confidence interval. You should interpret the interval as "if we re-run this simulation 100 times, then the true mean will lie in the confidence interval 95 times out of 100." But if the bound is within the confidence interval, then we know the true mean cannot be better than the bound. Therefore, there is a more than 95% chance that the mean is within the interval.

A separate problem arises if we simulate, find that the bound is outside the confidence interval, keep training, and then re-simulate to compute a new confidence interval. Because we will terminate when the bound enters the confidence interval, the repeated construction of a confidence interval means that the unconditional probability that we terminate with a false positive is larger than 5% (there are now more chances that the sample mean is optimistic and that the confidence interval includes the bound but not the true mean). One fix is to simulate with a sequentially increasing number of replicates, so that the unconditional probability stays at 95%, but this runs into the problem of computational cost. For more information on sequential sampling, see, for example, Güzin Bayraksan, David P. Morton, (2011) A Sequential Sampling Procedure for Stochastic Programming. Operations Research 59(4):898-913.

source
SDDP.BoundStallingType
BoundStalling(num_previous_iterations::Int, tolerance::Float64)

Terminate the algorithm once the deterministic bound (lower if minimizing, upper if maximizing) fails to improve by more than tolerance in absolute terms for more than num_previous_iterations consecutive iterations, provided it has improved relative to the bound after the first iteration.

Checking for an improvement relative to the first iteration avoids early termination in a situation where the bound fails to improve for the first N iterations. This frequently happens in models with a large number of stages, where it takes time for the cuts to propagate backward enough to modify the bound of the root node.

source
SDDP.StoppingChainType
StoppingChain(rules::AbstractStoppingRule...)

Terminate once all of the rules are satisfied.

This stopping rule short-circuits, so subsequent rules are only tested if all previous rules pass.

Examples

A stopping rule that runs 100 iterations, then checks for the bound stalling:

StoppingChain(IterationLimit(100), BoundStalling(5, 0.1))
source
SDDP.SimulationStoppingRuleType
SimulationStoppingRule(;
+)

Perform an in-sample Monte Carlo simulation of the policy with num_replications replications every iteration_period iterations and terminate if the deterministic bound (lower if minimizing) falls into the confidence interval for the mean of the simulated cost.

If verbose = true, print the confidence interval.

If disable_warning = true, disable the warning telling you not to use this stopping rule (see below).

Why this stopping rule is not good

This stopping rule is one of the most common stopping rules seen in the literature. Don't follow the crowd. It is a poor choice for your model, and should be rarely used. Instead, you should use the default stopping rule, or use a fixed limit like a time or iteration limit.

To understand why this stopping rule is a bad idea, assume we have conducted num_replications simulations and the objectives are in a vector objectives::Vector{Float64}.

Our mean is μ = mean(objectives) and the half-width of the confidence interval is w = z_score * std(objectives) / sqrt(num_replications).

Many papers suggest terminating the algorithm once the deterministic bound (lower if minimizing, upper if maximizing) is contained within the confidence interval. That is, if μ - w <= bound <= μ + w. Even worse, some papers define an optimization gap of (μ + w) / bound (if minimizing) or (μ - w) / bound (if maximizing), and they terminate once the gap is less than a value like 1%.

Both of these approaches are misleading, and more often than not, they will result in terminating with a sub-optimal policy that performs worse than expected. There are two main reasons for this:

  1. The half-width depends on the number of replications. To reduce the computational cost, users are often tempted to choose a small number of replications. This increases the half-width and makes it more likely that the algorithm will stop early. But if we choose a large number of replications, then the computational cost is high, and we would have been better off to run a fixed number of iterations and use that computational time to run extra training iterations.
  2. The confidence interval assumes that the simulated values are normally distributed. In infinite horizon models, this is almost never the case. The distribution is usually closer to exponential or log-normal.

There is a third, more technical reason which relates to the conditional dependence of constructing multiple confidence intervals.

The default value of z_score = 1.96 corresponds to a 95% confidence interval. You should interpret the interval as "if we re-run this simulation 100 times, then the true mean will lie in the confidence interval 95 times out of 100." But if the bound is within the confidence interval, then we know the true mean cannot be better than the bound. Therefore, there is a more than 95% chance that the mean is within the interval.

A separate problem arises if we simulate, find that the bound is outside the confidence interval, keep training, and then re-simulate to compute a new confidence interval. Because we will terminate when the bound enters the confidence interval, the repeated construction of a confidence interval means that the unconditional probability that we terminate with a false positive is larger than 5% (there are now more chances that the sample mean is optimistic and that the confidence interval includes the bound but not the true mean). One fix is to simulate with a sequentially increasing number of replicates, so that the unconditional probability stays at 95%, but this runs into the problem of computational cost. For more information on sequential sampling, see, for example, Güzin Bayraksan, David P. Morton, (2011) A Sequential Sampling Procedure for Stochastic Programming. Operations Research 59(4):898-913.

source
SDDP.BoundStallingType
BoundStalling(num_previous_iterations::Int, tolerance::Float64)

Terminate the algorithm once the deterministic bound (lower if minimizing, upper if maximizing) fails to improve by more than tolerance in absolute terms for more than num_previous_iterations consecutive iterations, provided it has improved relative to the bound after the first iteration.

Checking for an improvement relative to the first iteration avoids early termination in a situation where the bound fails to improve for the first N iterations. This frequently happens in models with a large number of stages, where it takes time for the cuts to propagate backward enough to modify the bound of the root node.

source
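For example, to stop once the bound has improved by less than 1e-4 for 10 consecutive iterations (the values are illustrative):

SDDP.train(model; stopping_rules = [SDDP.BoundStalling(10, 1e-4)])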
SDDP.StoppingChainType
StoppingChain(rules::AbstractStoppingRule...)

Terminate once all of the rules are satisfied.

This stopping rule short-circuits, so subsequent rules are only tested if all previous rules pass.

Examples

A stopping rule that runs 100 iterations, then checks for the bound stalling:

StoppingChain(IterationLimit(100), BoundStalling(5, 0.1))
source
SDDP.SimulationStoppingRuleType
SimulationStoppingRule(;
     sampling_scheme::AbstractSamplingScheme = SDDP.InSampleMonteCarlo(),
     replications::Int = -1,
     period::Int = -1,
     distance_tol::Float64 = 1e-2,
     bound_tol::Float64 = 1e-4,
-)

Terminate the algorithm using a mix of heuristics. Unless you know otherwise, this is typically a good default.

Termination criteria

First, we check that the deterministic bound has stabilized. That is, over the last five iterations, the deterministic bound has changed by less than an absolute or relative tolerance of bound_tol.

Then, if we have not done one in the last period iterations, we perform a primal simulation of the policy using replications out-of-sample realizations from sampling_scheme. The realizations are stored and re-used in each simulation. From each simulation, we record the value of the stage objective. We terminate the training if each of the trajectories in two consecutive simulations differs by less than distance_tol.

By default, replications and period are -1, and SDDP.jl will guess good values for these. Override the default behavior by setting an appropriate value.

Example

SDDP.train(model; stopping_rules = [SimulationStoppingRule()])
source
SDDP.FirstStageStoppingRuleType
FirstStageStoppingRule(; atol::Float64 = 1e-3, iterations::Int = 50)

Terminate the algorithm when the outgoing values of the first-stage state variables have not changed by more than atol for iterations number of consecutive iterations.

Example

SDDP.train(model; stopping_rules = [FirstStageStoppingRule()])
source

Sampling schemes

SDDP.AbstractSamplingSchemeType
AbstractSamplingScheme

The abstract type for the sampling-scheme interface.

You need to define the following method: SDDP.sample_scenario.

source
SDDP.sample_scenarioFunction
sample_scenario(graph::PolicyGraph{T}, ::AbstractSamplingScheme) where {T}

Sample a scenario from the policy graph graph based on the sampling scheme.

Returns ::Tuple{Vector{Tuple{T, <:Any}}, Bool}, where the first element is the scenario, and the second element is a Boolean flag indicating if the scenario was terminated due to the detection of a cycle.

The scenario is a list of tuples (type Vector{Tuple{T, <:Any}}) where the first component of each tuple is the index of the node, and the second component is the stagewise-independent noise term observed in that node.

source
SDDP.InSampleMonteCarloType
InSampleMonteCarlo(;
+)

Terminate the algorithm using a mix of heuristics. Unless you know otherwise, this is typically a good default.

Termination criteria

First, we check that the deterministic bound has stabilized. That is, over the last five iterations, the deterministic bound has changed by less than an absolute or relative tolerance of bound_tol.

Then, if we have not done one in the last period iterations, we perform a primal simulation of the policy using replications out-of-sample realizations from sampling_scheme. The realizations are stored and re-used in each simulation. From each simulation, we record the value of the stage objective. We terminate the training if each of the trajectories in two consecutive simulations differs by less than distance_tol.

By default, replications and period are -1, and SDDP.jl will guess good values for these. Override the default behavior by setting an appropriate value.

Example

SDDP.train(model; stopping_rules = [SimulationStoppingRule()])
source
SDDP.FirstStageStoppingRuleType
FirstStageStoppingRule(; atol::Float64 = 1e-3, iterations::Int = 50)

Terminate the algorithm when the outgoing values of the first-stage state variables have not changed by more than atol for iterations number of consecutive iterations.

Example

SDDP.train(model; stopping_rules = [FirstStageStoppingRule()])
source

Sampling schemes

SDDP.AbstractSamplingSchemeType
AbstractSamplingScheme

The abstract type for the sampling-scheme interface.

You need to define the following method: SDDP.sample_scenario.

source
SDDP.sample_scenarioFunction
sample_scenario(graph::PolicyGraph{T}, ::AbstractSamplingScheme) where {T}

Sample a scenario from the policy graph graph based on the sampling scheme.

Returns ::Tuple{Vector{Tuple{T, <:Any}}, Bool}, where the first element is the scenario, and the second element is a Boolean flag indicating if the scenario was terminated due to the detection of a cycle.

The scenario is a list of tuples (type Vector{Tuple{T, <:Any}}) where the first component of each tuple is the index of the node, and the second component is the stagewise-independent noise term observed in that node.

source
SDDP.InSampleMonteCarloType
InSampleMonteCarlo(;
     max_depth::Int = 0,
     terminate_on_cycle::Function = false,
     terminate_on_dummy_leaf::Function = true,
     rollout_limit::Function = (i::Int) -> typemax(Int),
     initial_node::Any = nothing,
-)

A Monte Carlo sampling scheme using the in-sample data from the policy graph definition.

If terminate_on_cycle, terminate the forward pass once a cycle is detected. If max_depth > 0, return once max_depth nodes have been sampled. If terminate_on_dummy_leaf, terminate the forward pass with probability equal to 1 minus the probability of sampling a child node.

Note that if terminate_on_cycle = false and terminate_on_dummy_leaf = false then max_depth must be set > 0.

Control which node the trajectories start from using initial_node. If it is left as nothing, the root node is used as the starting node.

You can use rollout_limit to set iteration specific depth limits. For example:

InSampleMonteCarlo(rollout_limit = i -> 2 * i)
source
SDDP.OutOfSampleMonteCarloType
OutOfSampleMonteCarlo(
+)

A Monte Carlo sampling scheme using the in-sample data from the policy graph definition.

If terminate_on_cycle, terminate the forward pass once a cycle is detected. If max_depth > 0, return once max_depth nodes have been sampled. If terminate_on_dummy_leaf, terminate the forward pass with probability equal to 1 minus the probability of sampling a child node.

Note that if terminate_on_cycle = false and terminate_on_dummy_leaf = false then max_depth must be set > 0.

Control which node the trajectories start from using initial_node. If it is left as nothing, the root node is used as the starting node.

You can use rollout_limit to set iteration specific depth limits. For example:

InSampleMonteCarlo(rollout_limit = i -> 2 * i)
source
SDDP.OutOfSampleMonteCarloType
OutOfSampleMonteCarlo(
     f::Function,
     graph::PolicyGraph;
     use_insample_transition::Bool = false,
@@ -315,7 +315,7 @@
     end
 end

Given linear policy graph graph with T stages:

sampler = OutOfSampleMonteCarlo(graph, use_insample_transition=true) do node
     return [SDDP.Noise(node, 0.3), SDDP.Noise(node + 1, 0.7)]
-end
source
SDDP.HistoricalType
Historical(
+end
source
SDDP.HistoricalType
Historical(
     scenarios::Vector{Vector{Tuple{T,S}}},
     probability::Vector{Float64};
     terminate_on_cycle::Bool = false,
@@ -326,17 +326,17 @@
         [(1, 1.0), (2, 0.0), (3, 0.0)]
     ],
     [0.2, 0.5, 0.3],
-)
source
Historical(
+)
source
Historical(
     scenarios::Vector{Vector{Tuple{T,S}}};
     terminate_on_cycle::Bool = false,
 ) where {T,S}

A deterministic sampling scheme that iterates through the vector of provided scenarios.

Examples

Historical([
     [(1, 0.5), (2, 1.0), (3, 0.5)],
     [(1, 0.5), (2, 0.0), (3, 1.0)],
     [(1, 1.0), (2, 0.0), (3, 0.0)],
-])
source
Historical(
+])
source
Historical(
     scenario::Vector{Tuple{T,S}};
     terminate_on_cycle::Bool = false,
-) where {T,S}

A deterministic sampling scheme that always samples scenario.

Examples

Historical([(1, 0.5), (2, 1.5), (3, 0.75)])
source
SDDP.PSRSamplingSchemeType
PSRSamplingScheme(N::Int; sampling_scheme = InSampleMonteCarlo())

A sampling scheme with N scenarios, similar to how PSR does it.

source
SDDP.SimulatorSamplingSchemeType
SimulatorSamplingScheme(simulator::Function)

Create a sampling scheme based on a univariate scenario generator simulator, which returns a Vector{Float64} when called with no arguments like simulator().

This sampling scheme must be used with a Markovian graph constructed from the same simulator.

The sample space for SDDP.parameterize must be a tuple with 1 or 2 values: the first value is the Markov state and the second value is the random variable for the current node. If the node is deterministic, use Ω = [(markov_state,)].

This sampling scheme generates a new scenario by calling simulator(), and then picking the sequence of nodes in the Markovian graph that is closest to the new trajectory.

Example

julia> using SDDP
+) where {T,S}

A deterministic sampling scheme that always samples scenario.

Examples

Historical([(1, 0.5), (2, 1.5), (3, 0.75)])
source
SDDP.PSRSamplingSchemeType
PSRSamplingScheme(N::Int; sampling_scheme = InSampleMonteCarlo())

A sampling scheme with N scenarios, similar to how PSR does it.

source
SDDP.SimulatorSamplingSchemeType
SimulatorSamplingScheme(simulator::Function)

Create a sampling scheme based on a univariate scenario generator simulator, which returns a Vector{Float64} when called with no arguments like simulator().

This sampling scheme must be used with a Markovian graph constructed from the same simulator.

The sample space for SDDP.parameterize must be a tuple with 1 or 2 values: the first value is the Markov state and the second value is the random variable for the current node. If the node is deterministic, use Ω = [(markov_state,)].

This sampling scheme generates a new scenario by calling simulator(), and then picking the sequence of nodes in the Markovian graph that is closest to the new trajectory.

Example

julia> using SDDP
 
 julia> import HiGHS
 
@@ -368,50 +368,50 @@
            iteration_limit = 10,
            sampling_scheme = SDDP.SimulatorSamplingScheme(simulator),
        )
-
source

Parallel schemes

SDDP.AbstractParallelSchemeType
AbstractParallelScheme

Abstract type for different parallelism schemes.

source
SDDP.SerialType
Serial()

Run SDDP in serial mode.

source
SDDP.ThreadedType
Threaded()

Run SDDP in multi-threaded mode.

Use julia --threads N to start Julia with N threads. In most cases, you should pick N to be the number of physical cores on your machine.

Danger

This plug-in is experimental, and parts of SDDP.jl may not be threadsafe. If you encounter any problems or crashes, please open a GitHub issue.

Example

SDDP.train(model; parallel_scheme = SDDP.Threaded())
-SDDP.simulate(model; parallel_scheme = SDDP.Threaded())
source
SDDP.AsynchronousType
Asynchronous(
+
source

Parallel schemes

SDDP.AbstractParallelSchemeType
AbstractParallelScheme

Abstract type for different parallelism schemes.

source
SDDP.SerialType
Serial()

Run SDDP in serial mode.

source
SDDP.ThreadedType
Threaded()

Run SDDP in multi-threaded mode.

Use julia --threads N to start Julia with N threads. In most cases, you should pick N to be the number of physical cores on your machine.

Danger

This plug-in is experimental, and parts of SDDP.jl may not be threadsafe. If you encounter any problems or crashes, please open a GitHub issue.

Example

SDDP.train(model; parallel_scheme = SDDP.Threaded())
+SDDP.simulate(model; parallel_scheme = SDDP.Threaded())
source
SDDP.AsynchronousType
Asynchronous(
     [init_callback::Function,]
     slave_pids::Vector{Int} = workers();
     use_master::Bool = true,
-)

Run SDDP in asynchronous mode on workers with pids slave_pids.

After initializing the models on each worker, call init_callback(model). Note that init_callback is run locally on the worker and not on the master thread.

If use_master is true, iterations are also conducted on the master process.

source
Asynchronous(
+)

Run SDDP in asynchronous mode on workers with pids slave_pids.

After initializing the models on each worker, call init_callback(model). Note that init_callback is run locally on the worker and not on the master thread.

If use_master is true, iterations are also conducted on the master process.

source
Asynchronous(
     solver::Any,
     slave_pids::Vector{Int} = workers();
     use_master::Bool = true,
-)

Run SDDP in asynchronous mode on workers with pids slave_pids.

Set the optimizer on each worker by calling JuMP.set_optimizer(model, solver).

source

Forward passes

SDDP.AbstractForwardPassType
AbstractForwardPass

Abstract type for different forward passes.

source
SDDP.DefaultForwardPassType
DefaultForwardPass(; include_last_node::Bool = true)

The default forward pass.

If include_last_node = false and the sample terminated due to a cycle, then the last node (which forms the cycle) is omitted. This can be a useful option to set when training, but it comes at the cost of not knowing which node formed the cycle (if there are multiple possibilities).

source
SDDP.RevisitingForwardPassType
RevisitingForwardPass(
+)

Run SDDP in asynchronous mode on workers with pids slave_pids.

Set the optimizer on each worker by calling JuMP.set_optimizer(model, solver).

source
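A setup sketch, assuming a Distributed-based session and HiGHS as the optimizer (both are illustrative choices):

using Distributed

Distributed.addprocs(4)

@everywhere using SDDP, HiGHS

SDDP.train(model; parallel_scheme = SDDP.Asynchronous(HiGHS.Optimizer))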

Forward passes

SDDP.AbstractForwardPassType
AbstractForwardPass

Abstract type for different forward passes.

source
SDDP.DefaultForwardPassType
DefaultForwardPass(; include_last_node::Bool = true)

The default forward pass.

If include_last_node = false and the sample terminated due to a cycle, then the last node (which forms the cycle) is omitted. This can be a useful option to set when training, but it comes at the cost of not knowing which node formed the cycle (if there are multiple possibilities).

source
SDDP.RevisitingForwardPassType
RevisitingForwardPass(
     period::Int = 500;
     sub_pass::AbstractForwardPass = DefaultForwardPass(),
-)

A forward pass scheme that generates period new forward passes (using sub_pass), then revisits all previously explored forward passes. This can be useful to encourage convergence at a diversity of points in the state-space.

Set period = typemax(Int) to disable.

For example, if period = 2, then the forward passes will be revisited as follows: 1, 2, 1, 2, 3, 4, 1, 2, 3, 4, 5, 6, 1, 2, ....

source
SDDP.RiskAdjustedForwardPassType
RiskAdjustedForwardPass(;
+)

A forward pass scheme that generates period new forward passes (using sub_pass), then revisits all previously explored forward passes. This can be useful to encourage convergence at a diversity of points in the state-space.

Set period = typemax(Int) to disable.

For example, if period = 2, then the forward passes will be revisited as follows: 1, 2, 1, 2, 3, 4, 1, 2, 3, 4, 5, 6, 1, 2, ....

source
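For example, to generate 100 new forward passes before each revisiting cycle (the value is illustrative):

SDDP.train(model; forward_pass = SDDP.RevisitingForwardPass(100))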
SDDP.RiskAdjustedForwardPassType
RiskAdjustedForwardPass(;
     forward_pass::AbstractForwardPass,
     risk_measure::AbstractRiskMeasure,
     resampling_probability::Float64,
     rejection_count::Int = 5,
-)

A forward pass that resamples a previous forward pass with resampling_probability probability, and otherwise samples a new forward pass using forward_pass.

The forward pass to revisit is chosen based on the risk-adjusted (using risk_measure) probability of the cumulative stage objectives.

Note that this objective corresponds to the first time we visited the trajectory. Subsequent visits may have improved things, but we don't have the mechanisms in-place to update it. Therefore, remove the forward pass from resampling consideration after rejection_count revisits.

source
SDDP.AlternativeForwardPassType
AlternativeForwardPass(
     forward_model::SDDP.PolicyGraph{T};
     forward_pass::AbstractForwardPass = DefaultForwardPass(),
)

A forward pass that simulates using forward_model, which may be different from the model used in the backward pass.

When using this forward pass, you should almost always pass SDDP.AlternativePostIterationCallback to the post_iteration_callback argument of SDDP.train.

This forward pass is most useful when the forward_model is non-convex and we use a convex approximation of the model in the backward pass.

For example, in optimal power flow models, we can use an AC-OPF formulation as the forward_model and a DC-OPF formulation as the backward model.

For more details see the paper:

Rosemberg, A., and Street, A., and Garcia, J.D., and Valladão, D.M., and Silva, T., and Dowson, O. (2021). Assessing the cost of network simplifications in long-term hydrothermal dispatch planning models. IEEE Transactions on Sustainable Energy. 13(1), 196-206.

source
SDDP.AlternativePostIterationCallbackType
AlternativePostIterationCallback(forward_model::PolicyGraph)

A post-iteration callback that should be used whenever SDDP.AlternativeForwardPass is used.

source
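
A minimal usage sketch, assuming convex_model and nonconvex_model are two PolicyGraphs built over the same graph and state variables:

SDDP.train(
    convex_model;  # cuts are computed on the convex backward model
    forward_pass = SDDP.AlternativeForwardPass(nonconvex_model),
    post_iteration_callback = SDDP.AlternativePostIterationCallback(nonconvex_model),
    iteration_limit = 100,
)
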
SDDP.RegularizedForwardPassType
RegularizedForwardPass(;
     rho::Float64 = 0.05,
     forward_pass::AbstractForwardPass = DefaultForwardPass(),
)

A forward pass that regularizes the outgoing first-stage state variables with an L-infty trust-region constraint about the previous iteration's solution. Specifically, the bounds of the outgoing state variable x are updated from (l, u) to max(l, x^k - rho * (u - l)) <= x <= min(u, x^k + rho * (u - l)), where x^k is the optimal solution of x in the previous iteration. On the first iteration, the value of the state at the root node is used.

By default, rho is set to 5%, which seems to work well empirically.

Pass a different forward_pass to control the forward pass within the regularized forward pass.

This forward pass is largely intended to be used for investment problems in which the first stage makes a series of capacity decisions that then influence the rest of the graph. An error is thrown if the first stage problem is not deterministic, and states are silently skipped if they do not have finite bounds.

source
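
A minimal usage sketch (assuming model has a deterministic first stage with bounded state variables):

SDDP.train(
    model;
    forward_pass = SDDP.RegularizedForwardPass(; rho = 0.05),
    iteration_limit = 100,
)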

Risk Measures

SDDP.AbstractRiskMeasureType
AbstractRiskMeasure

The abstract type for the risk measure interface.

You need to define the following methods:

  • SDDP.adjust_probability

source
SDDP.adjust_probabilityFunction
adjust_probability(
     measure::Expectation,
     risk_adjusted_probability::Vector{Float64},
     original_probability::Vector{Float64},
     noise_support::Vector{Noise{T}},
     objective_realizations::Vector{Float64},
     is_minimization::Bool,
) where {T}
source
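
As an illustrative sketch of the interface, a custom risk measure that reproduces the expectation might look as follows. The convention that the returned value is an objective offset (0.0 for a risk-neutral measure) is an assumption; check the SDDP.jl source before relying on it.

struct MyExpectation <: SDDP.AbstractRiskMeasure end

function SDDP.adjust_probability(
    ::MyExpectation,
    risk_adjusted_probability::Vector{Float64},
    original_probability::Vector{Float64},
    noise_support::Vector,
    objective_realizations::Vector{Float64},
    is_minimization::Bool,
)
    # Risk-neutral: use the original probabilities unchanged.
    copyto!(risk_adjusted_probability, original_probability)
    return 0.0
end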

Duality handlers

SDDP.AbstractDualityHandlerType
AbstractDualityHandler

The abstract type for the duality handler interface.

source
SDDP.ContinuousConicDualityType
ContinuousConicDuality()

Compute dual variables in the backward pass using conic duality, relaxing any binary or integer restrictions as necessary.

Theory

Given the problem

min Cᵢ(x̄, u, w) + θᵢ
  st (x̄, x′, u) in Xᵢ(w) ∩ S
     x̄ - x == 0          [λ]

where S ⊆ ℝ×ℤ, we relax integrality and use conic duality to solve for λ in the problem:

min Cᵢ(x̄, u, w) + θᵢ
  st (x̄, x′, u) in Xᵢ(w)
     x̄ - x == 0          [λ]
source
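
Duality handlers are passed to train via the duality_handler keyword. A minimal sketch (the same pattern applies to the other handlers below, for example LagrangianDuality(), StrengthenedConicDuality(), or BanditDuality()):

SDDP.train(
    model;
    duality_handler = SDDP.ContinuousConicDuality(),
    iteration_limit = 100,
)
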
SDDP.LagrangianDualityType
LagrangianDuality(;
     method::LocalImprovementSearch.AbstractSearchMethod =
         LocalImprovementSearch.BFGS(100),
 )

Obtain dual variables in the backward pass using Lagrangian duality.

Arguments

  • method: the LocalImprovementSearch method for maximizing the Lagrangian dual problem.

Theory

Given the problem

min Cᵢ(x̄, u, w) + θᵢ
  st (x̄, x′, u) in Xᵢ(w) ∩ S
     x̄ - x == 0          [λ]

where S ⊆ ℝ×ℤ, we solve the problem max L(λ), where:

L(λ) = min Cᵢ(x̄, u, w) + θᵢ - λ' h(x̄)
         st (x̄, x′, u) in Xᵢ(w) ∩ S

and where h(x̄) = x̄ - x.

source
SDDP.StrengthenedConicDualityType
StrengthenedConicDuality()

Obtain dual variables in the backward pass using strengthened conic duality.

Theory

Given the problem

min Cᵢ(x̄, u, w) + θᵢ
  st (x̄, x′, u) in Xᵢ(w) ∩ S
     x̄ - x == 0          [λ]

we first obtain an estimate for λ using ContinuousConicDuality.

Then, we evaluate the Lagrangian function:

L(λ) = min Cᵢ(x̄, u, w) + θᵢ - λ'(x̄ - x)
         st (x̄, x′, u) in Xᵢ(w) ∩ S

to obtain a better estimate of the intercept.

source
SDDP.BanditDualityType
BanditDuality()

Formulates the problem of choosing a duality handler as a multi-armed bandit problem. The arms to choose between are:

  • ContinuousConicDuality
  • StrengthenedConicDuality
  • LagrangianDuality

Our problem isn't a typical multi-armed bandit for two reasons:

  1. The reward distribution is non-stationary (each arm converges to 0 as it keeps getting pulled).
  2. The distribution of rewards is dependent on the history of the arms that were chosen.

We choose a very simple heuristic: pick the arm with the best mean + 1 standard deviation. That should ensure we consistently pick the arm with the best likelihood of improving the value function.

In future, we should consider discounting the rewards of earlier iterations, and focus more on the more-recent rewards.

source

Simulating the policy

SDDP.simulateFunction
simulate(
     model::PolicyGraph,
     number_replications::Int = 1,
     variables::Vector{Symbol} = Symbol[];
    ...
     custom_recorders = Dict{Symbol, Function}(
         :constraint_dual => sp -> JuMP.dual(sp[:my_constraint])
     )
)

The value of the dual in the first stage of the second replication can be accessed as:

simulation_results[2][1][:constraint_dual]
source
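
A fuller sketch (the state variable :volume and the constraint name :demand are hypothetical; the assumption here is that recorded state variables are returned as objects with .in and .out fields):

simulations = SDDP.simulate(
    model,
    100,              # number of replications
    [:volume];        # primal variables to record
    custom_recorders = Dict{Symbol,Function}(
        :price => sp -> JuMP.dual(sp[:demand]),
    ),
)
# Outgoing volume in stage 1 of replication 2:
simulations[2][1][:volume].out
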
SDDP.calculate_boundFunction
SDDP.calculate_bound(
     model::PolicyGraph,
     state::Dict{Symbol,Float64} = model.initial_root_state;
     risk_measure::AbstractRiskMeasure = Expectation(),
)

Calculate the lower bound (if minimizing, otherwise upper bound) of the problem model at the point state, assuming the risk measure at the root node is risk_measure.

source
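
For example, after training, the risk-neutral bound at the initial state can be computed as (a minimal sketch):

bound = SDDP.calculate_bound(model; risk_measure = SDDP.Expectation())
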
SDDP.add_all_cutsFunction
add_all_cuts(model::PolicyGraph)

Add all cuts that may have been deleted back into the model.

Explanation

During the solve, SDDP.jl may decide to remove cuts for a variety of reasons.

These can include cuts that define the optimal value function, particularly around the extremes of the state-space (e.g., reservoirs empty).

This function ensures that all cuts discovered are added back into the model.

You should call this after train and before simulate.

source
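
A typical workflow, as a minimal sketch:

SDDP.train(model; iteration_limit = 100)
SDDP.add_all_cuts(model)        # restore any cuts deleted during training
simulations = SDDP.simulate(model, 100)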

Decision rules

SDDP.DecisionRuleType
DecisionRule(model::PolicyGraph{T}; node::T)

Create a decision rule for node node in model.

Example

rule = SDDP.DecisionRule(model; node = 1)
source
SDDP.evaluateFunction
evaluate(
     rule::DecisionRule;
     incoming_state::Dict{Symbol,Float64},
     noise = nothing,
     controls_to_record = Symbol[],
)

Evaluate the decision rule rule at the point described by the incoming_state and noise.

If the node is deterministic, omit the noise argument.

Pass a list of symbols to controls_to_record to save the optimal primal solution corresponding to the names registered in the model.

source
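
A minimal sketch (the state name :volume, the noise value, and the control :generation are illustrative):

rule = SDDP.DecisionRule(model; node = 1)
solution = SDDP.evaluate(
    rule;
    incoming_state = Dict(:volume => 100.0),
    noise = 0.5,
    controls_to_record = [:generation],
)
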
evaluate(
     V::ValueFunction,
     point::Dict{Union{Symbol,String},<:Real},
     objective_state = nothing,
     belief_state = nothing
)

Evaluate the value function V at point in the state-space.

Returns a tuple containing the height of the function, and the subgradient w.r.t. the convex state-variables.

Examples

evaluate(V, Dict(:volume => 1.0))

If the state variable is constructed like @variable(sp, volume[1:4] >= 0, SDDP.State, initial_value = 0.0), use [i] to index the state variable:

evaluate(V, Dict(Symbol("volume[1]") => 1.0))

You can also use strings or symbols for the keys.

evaluate(V, Dict("volume[1]" => 1))
source
evaluate(V::ValueFunction{Nothing, Nothing}; kwargs...)

Evaluate the value function V at the point in the state-space specified by kwargs.

Examples

evaluate(V; volume = 1)
source
evaluate(
     model::PolicyGraph{T},
     validation_scenarios::ValidationScenarios{T,S},
 ) where {T,S}

Evaluate the performance of the policy contained in model after a call to train on the scenarios specified by validation_scenarios.

Examples

model, validation_scenarios = read_from_file("my_model.sof.json")
train(model; iteration_limit = 100)
simulations = evaluate(model, validation_scenarios)
source

Visualizing the policy

SDDP.SpaghettiPlotType
SDDP.SpaghettiPlot(; stages, scenarios)

Initialize a new SpaghettiPlot with stages stages and scenarios number of replications.

source
SDDP.add_spaghettiFunction
SDDP.add_spaghetti(data_function::Function, plt::SpaghettiPlot; kwargs...)

Description

Add a new figure to the SpaghettiPlot plt, where the y-value of the scenarioth line when x = stage is given by data_function(plt.simulations[scenario][stage]).

Keyword arguments

  • xlabel: set the xaxis label
  • ylabel: set the yaxis label
  • title: set the title of the plot
  • ymin: set the minimum y value
  • ymax: set the maximum y value
  • cumulative: plot the additive accumulation of the value across the stages
  • interpolate: interpolation method for lines between stages.

Defaults to "linear"; see the d3 docs for all options.

Examples

simulations = simulate(model, 10)
 plt = SDDP.spaghetti_plot(simulations)
 SDDP.add_spaghetti(plt; title = "Stage objective") do data
     return data[:stage_objective]
end
source
SDDP.publication_plotFunction
SDDP.publication_plot(
     data_function, simulations;
     quantile = [0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0],
     kwargs...)

Create a Plots.jl recipe plot of the simulations.

See Plots.jl for the list of keyword arguments.

Examples

SDDP.publication_plot(simulations; title = "My title") do data
     return data[:stage_objective]
end
source
SDDP.ValueFunctionType
ValueFunction

A representation of the value function. SDDP.jl uses the following unique representation of the value function that is undocumented in the literature.

It supports three types of state variables:

  1. x - convex "resource" states
  2. b - concave "belief" states
  3. y - concave "objective" states

In addition, we have three types of cuts:

  1. Single-cuts (also called "average" cuts in the literature), which involve the risk-adjusted expectation of the cost-to-go.
  2. Multi-cuts, which use a different cost-to-go term for each realization w.
  3. Risk-cuts, which correspond to the facets of the dual interpretation of a coherent risk measure.

Therefore, ValueFunction returns a JuMP model of the following form:

V(x, b, y) = min: μᵀb + νᵀy + θ
              s.t. # "Single" / "Average" cuts
                   μᵀb(j) + νᵀy(j) + θ >= α(j) + xᵀβ(j), ∀ j ∈ J
                   # "Multi" cuts
                   μᵀb(k) + νᵀy(k) + φ(w) >= α(k, w) + xᵀβ(k, w), ∀w ∈ Ω, k ∈ K
                   # "Risk-set" cuts
                  θ ≥ Σ{p(k, w) * φ(w)}_w - μᵀb(k) - νᵀy(k), ∀ k ∈ K
source
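
A minimal sketch of constructing and querying a value function after training (the constructor keyword node mirrors DecisionRule above and is an assumption; :volume is an illustrative state name):

V = SDDP.ValueFunction(model; node = 1)
height, subgradient = SDDP.evaluate(V; volume = 100.0)
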
SDDP.evaluateMethod
evaluate(
     V::ValueFunction,
     point::Dict{Union{Symbol,String},<:Real},
     objective_state = nothing,
     belief_state = nothing
)

Evaluate the value function V at point in the state-space.

Returns a tuple containing the height of the function, and the subgradient w.r.t. the convex state-variables.

Examples

evaluate(V, Dict(:volume => 1.0))

If the state variable is constructed like @variable(sp, volume[1:4] >= 0, SDDP.State, initial_value = 0.0), use [i] to index the state variable:

evaluate(V, Dict(Symbol("volume[1]") => 1.0))

You can also use strings or symbols for the keys.

evaluate(V, Dict("volume[1]" => 1))
source
SDDP.plotFunction
plot(plt::SpaghettiPlot[, filename::String]; open::Bool = true)

Write the SpaghettiPlot plt to filename. If filename is not given, the plot is saved to a temporary directory. If open = true, then a browser window will be opened to display the resulting HTML file.

source
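
A minimal end-to-end sketch, following the examples above (:volume is an illustrative recorded state):

simulations = SDDP.simulate(model, 10, [:volume])
plt = SDDP.spaghetti_plot(simulations)
SDDP.add_spaghetti(plt; title = "Volume") do data
    return data[:volume].out
end
SDDP.plot(plt, "spaghetti.html"; open = false)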

Debugging the model

SDDP.write_subproblem_to_fileFunction
write_subproblem_to_file(
     node::Node,
     filename::String;
     throw_error::Bool = false,
)

Write the subproblem contained in node to the file filename.

throw_error is an argument used internally by SDDP.jl. If set, an error will be thrown.

Example

SDDP.write_subproblem_to_file(model[1], "subproblem_1.lp")
source
SDDP.deterministic_equivalentFunction
deterministic_equivalent(
     pg::PolicyGraph{T},
     optimizer = nothing;
     time_limit::Union{Real,Nothing} = 60.0,
)

Form a JuMP model that represents the deterministic equivalent of the problem.

Examples

deterministic_equivalent(model)
deterministic_equivalent(model, HiGHS.Optimizer)
source
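
Since the result is an ordinary JuMP model, it can be optimized directly (a minimal sketch):

det_model = SDDP.deterministic_equivalent(model, HiGHS.Optimizer)
JuMP.optimize!(det_model)
JuMP.objective_value(det_model)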

StochOptFormat

SDDP.write_to_fileFunction
write_to_file(
     model::PolicyGraph,
     filename::String;
     compression::MOI.FileFormats.AbstractCompressionScheme =
         MOI.FileFormats.AutomaticCompression(),
     kwargs...
)

Write model to filename in the StochOptFormat file format.

Pass an argument to compression to override the default of automatically detecting the file compression to use based on the extension of filename.

See Base.write(::IO, ::PolicyGraph) for information on the keyword arguments that can be provided.

Warning

This function is experimental. See the full warning in Base.write(::IO, ::PolicyGraph).

Examples

write_to_file(model, "my_model.sof.json"; validation_scenarios = 10)
source
SDDP.read_from_fileFunction
read_from_file(
     filename::String;
     compression::MOI.FileFormats.AbstractCompressionScheme =
         MOI.FileFormats.AutomaticCompression(),
     kwargs...
)::Tuple{PolicyGraph, ValidationScenarios}

Return a tuple containing a PolicyGraph object and a ValidationScenarios read from filename in the StochOptFormat file format.

Pass an argument to compression to override the default of automatically detecting the file compression to use based on the extension of filename.

See Base.read(::IO, ::Type{PolicyGraph}) for information on the keyword arguments that can be provided.

Warning

This function is experimental. See the full warning in Base.read(::IO, ::Type{PolicyGraph}).

Examples

model, validation_scenarios = read_from_file("my_model.sof.json")
source
Base.writeMethod
Base.write(
     io::IO,
     model::PolicyGraph;
     validation_scenarios::Union{Nothing,Int,ValidationScenarios} = nothing,
    ...
         date = "2020-07-20",
         description = "Example problem for the SDDP.jl documentation",
     )
end
source
Base.readMethod
Base.read(
     io::IO,
     ::Type{PolicyGraph};
     bound::Float64 = 1e6,
 )::Tuple{PolicyGraph,ValidationScenarios}

Return a tuple containing a PolicyGraph object and a ValidationScenarios read from io in the StochOptFormat file format.

See also: evaluate.

Compatibility

Warning

This function is experimental. Things may change between commits. You should not rely on this functionality as a long-term file format (yet).

In addition to potential changes to the underlying format, only a subset of possible modifications are supported. These include:

  • Additive random variables in the constraints or in the objective
  • Multiplicative random variables in the objective

If your model uses something other than this, this function may throw an error or silently build a non-convex model.

Examples

open("my_model.sof.json", "r") do io
     model, validation_scenarios = read(io, PolicyGraph)
end
source
SDDP.evaluateMethod
evaluate(
     model::PolicyGraph{T},
     validation_scenarios::ValidationScenarios{T,S},
 ) where {T,S}

Evaluate the performance of the policy contained in model after a call to train on the scenarios specified by validation_scenarios.

Examples

model, validation_scenarios = read_from_file("my_model.sof.json")
train(model; iteration_limit = 100)
simulations = evaluate(model, validation_scenarios)
source
SDDP.ValidationScenariosType
ValidationScenarios{T,S}(scenarios::Vector{ValidationScenario{T,S}})

An AbstractSamplingScheme based on a vector of scenarios.

Each scenario is a vector of Tuple{T, S} where the first element is the node to visit and the second element is the realization of the stagewise-independent noise term. Pass nothing if the node is deterministic.

source
SDDP.ValidationScenarioType
ValidationScenario{T,S}(scenario::Vector{Tuple{T,S}})

A single scenario for testing.

See also: ValidationScenarios.

source
diff --git a/previews/PR810/changelog/index.html b/previews/PR810/changelog/index.html
index 97f487d8f..f08b44e55 100644
--- a/previews/PR810/changelog/index.html
+++ b/previews/PR810/changelog/index.html

Release notes

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

v1.10.1 (November 28, 2024)

Fixed

Other

  • Documentation updates (#801)

v1.10.0 (November 19, 2024)

Added

  • Added root_node_risk_measure keyword to train (#804)

Fixed

  • Fixed a bug with cut sharing in a graph with zero-probability arcs (#797)

Other

v1.9.0 (October 17, 2024)

Added

Fixed

  • Fixed the tests to skip threading tests if running in serial (#770)
  • Fixed BanditDuality to handle the case where the standard deviation is NaN (#779)
  • Fixed an error when lagged state variables are encountered in MSPFormat (#786)
  • Fixed publication_plot with replications of different lengths (#788)
  • Fixed CTRL+C interrupting the code at unsafe points (#789)

Other

  • Documentation improvements (#771) (#772)
  • Updated printing because of changes in JuMP (#773)

v1.8.1 (August 5, 2024)

Fixed

  • Fixed various issues with SDDP.Threaded() (#761)
  • Fixed a deprecation warning for sorting a dictionary (#763)

Other

  • Updated copyright notices (#762)
  • Updated .JuliaFormatter.toml (#764)

v1.8.0 (July 24, 2024)

Added

  • Added SDDP.Threaded(), which is an experimental parallel scheme that supports solving problems using multiple threads. Some parts of SDDP.jl may not be thread-safe, and this can cause incorrect results, segfaults, or other errors. Please use with care and report any issues by opening a GitHub issue. (#758)

Other

  • Documentation improvements and fixes (#747) (#759)

v1.7.0 (June 4, 2024)

Added

  • Added sample_backward_noise_terms_with_state for creating backward pass sampling schemes that depend on the current primal state. (#742) (Thanks @arthur-brigatto)

Fixed

  • Fixed error message when publication_plot has non-finite data (#738)

Other

  • Updated the logo constructor (#730)

v1.6.7 (February 1, 2024)

Fixed

  • Fixed non-constant state dimension in the MSPFormat reader (#695)
  • Fixed SimulatorSamplingScheme for deterministic nodes (#710)
  • Fixed line search in BFGS (#711)
  • Fixed handling of NEARLY_FEASIBLE_POINT status (#726)

Other

  • Documentation improvements (#692) (#694) (#706) (#716) (#727)
  • Updated to StochOptFormat v1.0 (#705)
  • Added an experimental OuterApproximation algorithm (#709)
  • Updated .gitignore (#717)
  • Added code for MDP paper (#720) (#721)
  • Added Google analytics (#723)

v1.6.6 (September 29, 2023)

Other

v1.6.5 (September 25, 2023)

Fixed

Other

  • Updated tutorials (#677) (#678) (#682) (#683)
  • Fixed documentation preview (#679)

v1.6.4 (September 23, 2023)

Fixed

Other

  • Documentation updates (#658) (#666) (#671)
  • Switch to GitHub action for deploying docs (#668) (#670)
  • Update to Documenter@1 (#669)

v1.6.3 (September 8, 2023)

Fixed

  • Fixed default stopping rule with iteration_limit or time_limit set (#662)

Other

  • Various documentation improvements (#651) (#657) (#659) (#660)

v1.6.2 (August 24, 2023)

Fixed

  • MSPFormat now detects and exploits stagewise independent lattices (#653)
  • Fixed set_optimizer for models read from file (#654)

Other

  • Fixed typo in pglib_opf.jl (#647)
  • Fixed documentation build and added color (#652)

v1.6.1 (July 20, 2023)

Fixed

  • Fixed bugs in MSPFormat reader (#638) (#639)

Other

  • Clarified OutOfSampleMonteCarlo docstring (#643)

v1.6.0 (July 3, 2023)

Added

Other

v1.5.1 (June 30, 2023)

This release contains a number of minor code changes, but it has a large impact on the content that is printed to screen. In particular, we now log periodically, instead of each iteration, and a "good" stopping rule is used as the default if none are specified. Try using SDDP.train(model) to see the difference.

Other

  • Fixed various typos in the documentation (#617)
  • Fixed printing test after changes in JuMP (#618)
  • Set SimulationStoppingRule as the default stopping rule (#619)
  • Changed the default logging frequency. Pass log_every_seconds = 0.0 to train to revert to the old behavior. (#620)
  • Added example usage with Distributions.jl (@slwu89) (#622)
  • Removed the numerical issue @warn (#627)
  • Improved the quality of docstrings (#630)

v1.5.0 (May 14, 2023)

Added

  • Added the ability to use a different model for the forward pass. This is a novel feature that lets you train better policies when the model is non-convex or does not have a well-defined dual. See the Alternative forward models tutorial in which we train convex and non-convex formulations of the optimal power flow problem. (#611)

Other

  • Updated missing changelog entries (#608)
  • Removed global variables (#610)
  • Converted the Options struct to keyword arguments. This struct was a private implementation detail, but the change is breaking if you developed an extension to SDDP that touched these internals. (#612)
  • Fixed some typos (#613)

v1.4.0 (May 8, 2023)

Added

Fixed

  • Fixed parsing of some MSPFormat files (#602) (#604)
  • Fixed printing in header (#605)

v1.3.0 (May 3, 2023)

Added

  • Added experimental support for SDDP.MSPFormat.read_from_file (#593)

Other

  • Updated to StochOptFormat v0.3 (#600)

v1.2.1 (May 1, 2023)

Fixed

  • Fixed log_every_seconds (#597)

v1.2.0 (May 1, 2023)

Added

Other

  • Tweaked how the log is printed (#588)
  • Updated to StochOptFormat v0.2 (#592)

v1.1.4 (April 10, 2023)

Fixed

  • Logs are now flushed every iteration (#584)

Other

  • Added docstrings to various functions (#581)
  • Minor documentation updates (#580)
  • Clarified integrality documentation (#582)
  • Updated the README (#585)
  • Number of numerical issues is now printed to the log (#586)

v1.1.3 (April 2, 2023)

Other

v1.1.2 (March 18, 2023)

Other

v1.1.1 (March 16, 2023)

Other

  • Fixed email in Project.toml
  • Added notebook to documentation tutorials (#571)

v1.1.0 (January 12, 2023)

Added

v1.0.0 (January 3, 2023)

Although we're bumping MAJOR version, this is a non-breaking release. Going forward:

  • New features will bump the MINOR version
  • Bug fixes, maintenance, and documentation updates will bump the PATCH version
  • We will support only the Long Term Support (currently v1.6.7) and the latest patch (currently v1.8.4) releases of Julia. Updates to the LTS version will bump the MINOR version
  • Updates to the compat bounds of package dependencies will bump the PATCH version.

We do not intend any breaking changes to the public API, which would require a new MAJOR release. The public API is everything defined in the documentation. Anything not in the documentation is considered private and may change in any PATCH release.

Added

Other

  • Updated Plotting tools to use live plots (#563)
  • Added vale as a linter (#565)
  • Improved documentation for initializing a parallel scheme (#566)

v0.4.9 (January 3, 2023)

Added

Other

  • Added tutorial on Markov Decision Processes (#556)
  • Added two-stage newsvendor tutorial (#557)
  • Refactored the layout of the documentation (#554) (#555)
  • Updated copyright to 2023 (#558)
  • Fixed errors in the documentation (#561)

v0.4.8 (December 19, 2022)

Added

Fixed

  • Reverted then fixed (#531) because it failed to account for problems with integer variables (#546) (#551)

v0.4.7 (December 17, 2022)

Added

  • Added initial_node support to InSampleMonteCarlo and OutOfSampleMonteCarlo (#535)

Fixed

  • Rethrow InterruptException when solver is interrupted (#534)
  • Fixed numerical recovery when we need dual solutions (#531) (Thanks @bfpc)
  • Fixed re-using the dashboard = true option between solves (#538)
  • Fixed bug when no @stageobjective is set (now defaults to 0.0) (#539)
  • Fixed errors thrown when invalid inputs are provided to add_objective_state (#540)

Other

  • Drop support for Julia versions prior to 1.6 (#533)
  • Updated versions of dependencies (#522) (#533)
  • Switched to HiGHS in the documentation and tests (#533)
  • Added license headers (#519)
  • Fixed link in air conditioning example (#521) (Thanks @conema)
  • Clarified variable naming in deterministic equivalent (#525) (Thanks @lucasprocessi)
  • Added this change log (#536)
  • Cuts are now written to model.cuts.json when numerical instability is discovered. This can aid debugging because it allows you to reload the cuts as of the iteration that caused the numerical issue (#537)

v0.4.6 (March 25, 2022)

Other

  • Updated to JuMP v1.0 (#517)

v0.4.5 (March 9, 2022)

Fixed

  • Fixed issue with set_silent in a subproblem (#510)

Other

  • Fixed many typos (#500) (#501) (#506) (#511) (Thanks @bfpc)
  • Update to JuMP v0.23 (#514)
  • Added auto-regressive tutorial (#507)

v0.4.4 (December 11, 2021)

Added

  • Added BanditDuality (#471)
  • Added benchmark scripts (#475) (#476) (#490)
  • write_cuts_to_file now saves visited states (#468)

Fixed

  • Fixed BoundStalling in a deterministic policy (#470) (#474)
  • Fixed magnitude warning with zero coefficients (#483)

Other

  • Improvements to LagrangianDuality (#481) (#482) (#487)
  • Improvements to StrengthenedConicDuality (#486)
  • Switch to functional form for the tests (#478)
  • Fixed typos (#472) (Thanks @vfdev-5)
  • Update to JuMP v0.22 (#498)

v0.4.3 (August 31, 2021)

Added

  • Added biobjective solver (#462)
  • Added forward_pass_callback (#466)

Other

  • Update tutorials and documentation (#459) (#465)
  • Organize how paper materials are stored (#464)

v0.4.2 (August 24, 2021)

Fixed

  • Fixed a bug in Lagrangian duality (#457)

v0.4.1 (August 23, 2021)

Other

  • Minor changes to our implementation of LagrangianDuality (#454) (#455)

v0.4.0 (August 17, 2021)

Breaking

  • A large refactoring for how we handle stochastic integer programs. This added support for things like SDDP.ContinuousConicDuality and SDDP.LagrangianDuality. It was breaking because we removed the integrality_handler argument to PolicyGraph. (#449) (#453)

Other

  • Documentation improvements (#447) (#448) (#450)

v0.3.17 (July 6, 2021)

Added

Other

  • Display more model attributes (#438)
  • Documentation improvements (#433) (#437) (#439)

v0.3.16 (June 17, 2021)

Added

Other

  • Update risk measure docstrings (#418)

v0.3.15 (June 1, 2021)

Added

Fixed

  • Fixed scoping bug in SDDP.@stageobjective (#407)
  • Fixed a bug when the initial point is infeasible (#411)
  • Set subproblems to silent by default (#409)

Other

  • Add JuliaFormatter (#412)
  • Documentation improvements (#406) (#408)

v0.3.14 (March 30, 2021)

Fixed

  • Fixed O(N^2) behavior in get_same_children (#393)

v0.3.13 (March 27, 2021)

Fixed

  • Fixed bug in print.jl
  • Fixed compat of Reexport (#388)

v0.3.12 (March 22, 2021)

Added

  • Added problem statistics to header (#385) (#386)

Fixed

  • Fixed subtypes in visualization (#384)

v0.3.11 (March 22, 2021)

Fixed

  • Fixed constructor in direct mode (#383)

Other

  • Fix documentation (#379)

v0.3.10 (February 23, 2021)

Fixed

  • Fixed seriescolor in publication plot (#376)

v0.3.9 (February 20, 2021)

Added

  • Add option to simulate with different incoming state (#372)
  • Added warning for cuts with high dynamic range (#373)

Fixed

  • Fixed seriesalpha in publication plot (#375)

v0.3.8 (January 19, 2021)

Other

  • Documentation improvements (#367) (#369) (#370)

v0.3.7 (January 8, 2021)

Other

  • Documentation improvements (#362) (#363) (#365) (#366)
  • Bump copyright (#364)

v0.3.6 (December 17, 2020)

Other

  • Fix typos (#358)
  • Collapse navigation bar in docs (#359)
  • Update TagBot.yml (#361)

v0.3.5 (November 18, 2020)

Other

  • Update citations (#348)
  • Switch to GitHub actions (#355)

v0.3.4 (August 25, 2020)

Added

  • Added non-uniform distributionally robust risk measure (#328)
  • Added numerical recovery functions (#330)
  • Added experimental StochOptFormat (#332) (#336) (#337) (#341) (#343) (#344)
  • Added entropic risk measure (#347)

Other

  • Documentation improvements (#327) (#333) (#339) (#340)

v0.3.3 (June 19, 2020)

Added

  • Added asynchronous support for price and belief states (#325)
  • Added ForwardPass plug-in system (#320)

Fixed

  • Fix check for probabilities in Markovian graph (#322)

v0.3.2 (April 6, 2020)

Added

Other

  • Improve error message in deterministic equivalent (#312)
  • Update to RecipesBase 1.0 (#313)

v0.3.1 (February 26, 2020)

Fixed

  • Fixed filename in integrality_handlers.jl (#304)

v0.3.0 (February 20, 2020)

Breaking

  • Breaking changes to update to JuMP v0.21 (#300).

v0.2.4 (February 7, 2020)

Added

  • Added a counter for the number of total subproblem solves (#301)

Other

  • Update formatter (#298)
  • Added tests (#299)

v0.2.3 (January 24, 2020)

Added

  • Added support for convex risk measures (#294)

Fixed

  • Fixed bug when subproblem is infeasible (#296)
  • Fixed bug in deterministic equivalent (#297)

Other

  • Added example from IJOC paper (#293)

v0.2.2 (January 10, 2020)

Fixed

  • Fixed flakey time limit in tests (#291)

Other

  • Removed MathOptFormat.jl (#289)
  • Update copyright (#290)

v0.2.1 (December 19, 2019)

Added

  • Added support for approximating a Markov lattice (#282) (#285)
  • Add tools for visualizing the value function (#272) (#286)
  • Write .mof.json files on error (#284)

Other

  • Improve documentation (#281) (#283)
  • Update tests for Julia 1.3 (#287)

v0.2.0 (December 16, 2019)

This version added the asynchronous parallel implementation with a few minor breaking changes in how we iterated internally. It didn't break basic user-facing models, only implementations that implemented some of the extension features. It probably could have been a v1.1 release.

Added

  • Added asynchronous parallel implementation (#277)
  • Added roll-out algorithm for cyclic graphs (#279)

Other

  • Improved error messages in PolicyGraph (#271)
  • Added JuliaFormatter (#273) (#276)
  • Fixed compat bounds (#274) (#278)
  • Added documentation for simulating non-standard graphs (#280)

v0.1.0 (October 17, 2019)

A complete rewrite of SDDP.jl based on the policy graph framework. This was essentially a new package. It has minimal code in common with the previous implementation.

Development started on September 28, 2018 in Kokako.jl, and the code was merged into SDDP.jl on March 14, 2019.

The pull request SDDP.jl#180 lists the 29 issues that the rewrite closed.

v0.0.1 (April 18, 2018)

Initial release. Development had been underway since January 22, 2016 in the StochDualDynamicProgram.jl repository. The last development commit there was April 5, 2017. Work then continued in this repository for a year before the first tagged release.

diff --git a/previews/PR810/examples/FAST_hydro_thermal/index.html b/previews/PR810/examples/FAST_hydro_thermal/index.html index d6637958c..20f4780a9 100644 --- a/previews/PR810/examples/FAST_hydro_thermal/index.html +++ b/previews/PR810/examples/FAST_hydro_thermal/index.html @@ -66,13 +66,13 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 0.000000e+00 -1.000000e+01 2.663136e-03 5 1 - 20 0.000000e+00 -1.000000e+01 1.443911e-02 104 1 + 1 0.000000e+00 -1.000000e+01 2.791882e-03 5 1 + 20 0.000000e+00 -1.000000e+01 1.502991e-02 104 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 1.443911e-02 +total time (s) : 1.502991e-02 total solves : 104 best bound : -1.000000e+01 simulation ci : -9.000000e+00 ± 4.474009e+00 numeric issues : 0 -------------------------------------------------------------------- +------------------------------------------------------------------- diff --git a/previews/PR810/examples/FAST_production_management/index.html b/previews/PR810/examples/FAST_production_management/index.html index 941c8e9bd..0526b8e22 100644 --- a/previews/PR810/examples/FAST_production_management/index.html +++ b/previews/PR810/examples/FAST_production_management/index.html @@ -35,4 +35,4 @@ end fast_production_management(; cut_type = SDDP.SINGLE_CUT) -fast_production_management(; cut_type = SDDP.MULTI_CUT)
Test Passed
+fast_production_management(; cut_type = SDDP.MULTI_CUT)
Test Passed
diff --git a/previews/PR810/examples/FAST_quickstart/index.html b/previews/PR810/examples/FAST_quickstart/index.html index caeae1533..05be55589 100644 --- a/previews/PR810/examples/FAST_quickstart/index.html +++ b/previews/PR810/examples/FAST_quickstart/index.html @@ -33,4 +33,4 @@ @test SDDP.calculate_bound(model) == -2 end -fast_quickstart()
Test Passed
+fast_quickstart()
Test Passed
diff --git a/previews/PR810/examples/Hydro_thermal/index.html b/previews/PR810/examples/Hydro_thermal/index.html index 854a06996..b857e178b 100644 --- a/previews/PR810/examples/Hydro_thermal/index.html +++ b/previews/PR810/examples/Hydro_thermal/index.html @@ -59,15 +59,15 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 2.390000e+02 6.304440e+01 1.046140e-01 183 1 - 33 9.486954e+02 2.352911e+02 1.240269e+00 8715 1 - 57 2.078912e+02 2.362690e+02 2.269883e+00 14703 1 - 73 5.064679e+02 2.363982e+02 3.289509e+00 20271 1 - 92 9.250459e+01 2.364272e+02 4.299796e+00 25200 1 - 100 1.135002e+02 2.364293e+02 4.608137e+00 26640 1 + 1 2.390000e+02 6.304440e+01 1.102281e-01 183 1 + 31 8.517170e+02 2.346450e+02 1.146162e+00 7701 1 + 52 2.121585e+02 2.361937e+02 2.149988e+00 13596 1 + 69 3.584855e+02 2.363821e+02 3.161391e+00 18603 1 + 78 3.060172e+02 2.364187e+02 4.170101e+00 22830 1 + 100 1.135002e+02 2.364293e+02 5.129353e+00 26640 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 4.608137e+00 +total time (s) : 5.129353e+00 total solves : 26640 best bound : 2.364293e+02 simulation ci : 2.593398e+02 ± 5.186931e+01 @@ -75,4 +75,4 @@ -------------------------------------------------------------------

Simulating the policy

After training, we can simulate the policy using SDDP.simulate.

sims = SDDP.simulate(model, 100, [:g_t])
 mu = round(mean([s[1][:g_t] for s in sims]); digits = 2)
 println("On average, $(mu) units of thermal are used in the first stage.")
On average, 1.71 units of thermal are used in the first stage.
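
Each entry of sims is one simulated trajectory: a vector with one dictionary per stage, keyed by the symbols we requested. The following is a minimal sketch, not part of the original example; it assumes the sims object from above and uses Statistics.mean to summarise thermal generation stage by stage:

# Hypothetical follow-up: per-stage average of the recorded :g_t values.
using Statistics
for t in 1:length(sims[1])
    mu_t = round(mean(s[t][:g_t] for s in sims); digits = 2)
    println("Stage $t: average thermal generation = $(mu_t)")
end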

Extracting the water values

Finally, we can use SDDP.ValueFunction and SDDP.evaluate to obtain and evaluate the value function at different points in the state-space. Note that since we are minimizing, the price has a negative sign: each additional unit of water leads to a decrease in the expected long-run cost.

V = SDDP.ValueFunction(model[1])
-cost, price = SDDP.evaluate(V; x = 10)
(233.55074662683333, Dict(:x => -0.6602685305287201))
+cost, price = SDDP.evaluate(V; x = 10)
(233.55074662683333, Dict(:x => -0.6602685305287201))
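
Because SDDP.evaluate accepts the incoming state as a keyword argument, we can call it at several storage levels to see how the marginal water value changes with the amount of water in storage. This is a minimal sketch rather than part of the original example; it assumes the V object from above, and the storage levels are illustrative:

# Hypothetical follow-up: evaluate the value function on a grid of storage levels.
for level in (0.0, 5.0, 10.0, 15.0, 20.0)
    cost, price = SDDP.evaluate(V; x = level)
    # price[:x] is the dual on the incoming state; it is negative because we minimize.
    println("x = $(level): cost = $(round(cost; digits = 2)), marginal value = $(round(-price[:x]; digits = 4))")
end
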
diff --git a/previews/PR810/examples/SDDP.log b/previews/PR810/examples/SDDP.log index 06f95588b..0b9df4e5b 100644 --- a/previews/PR810/examples/SDDP.log +++ b/previews/PR810/examples/SDDP.log @@ -25,11 +25,11 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 0.000000e+00 -1.000000e+01 2.663136e-03 5 1 - 20 0.000000e+00 -1.000000e+01 1.443911e-02 104 1 + 1 0.000000e+00 -1.000000e+01 2.791882e-03 5 1 + 20 0.000000e+00 -1.000000e+01 1.502991e-02 104 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 1.443911e-02 +total time (s) : 1.502991e-02 total solves : 104 best bound : -1.000000e+01 simulation ci : -9.000000e+00 ± 4.474009e+00 @@ -61,17 +61,17 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 5 -2.396000e+01 -2.396000e+01 6.890774e-03 52 1 - 10 -4.260000e+01 -2.396000e+01 1.044893e-02 92 1 - 15 -4.260000e+01 -2.396000e+01 1.425695e-02 132 1 - 20 -4.260000e+01 -2.396000e+01 1.828694e-02 172 1 - 25 -2.396000e+01 -2.396000e+01 2.341294e-02 224 1 - 30 -4.260000e+01 -2.396000e+01 2.792478e-02 264 1 - 35 -2.396000e+01 -2.396000e+01 3.272796e-02 304 1 - 40 -2.396000e+01 -2.396000e+01 3.784800e-02 344 1 + 5 -2.396000e+01 -2.396000e+01 7.454872e-03 52 1 + 10 -4.260000e+01 -2.396000e+01 1.116586e-02 92 1 + 15 -4.260000e+01 -2.396000e+01 1.504087e-02 132 1 + 20 -4.260000e+01 -2.396000e+01 1.919794e-02 172 1 + 25 -2.396000e+01 -2.396000e+01 2.458096e-02 224 1 + 30 -4.260000e+01 -2.396000e+01 2.925801e-02 264 1 + 35 -2.396000e+01 -2.396000e+01 3.422594e-02 304 1 + 40 -2.396000e+01 -2.396000e+01 3.947401e-02 344 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 3.784800e-02 +total time (s) : 3.947401e-02 total solves : 344 best bound : -2.396000e+01 simulation ci : -2.660914e+01 ± 3.908038e+00 @@ -81,21 +81,21 @@ numeric issues : 0 ──────────────────────────────────────────────────────────────────────────────── Time Allocations ─────────────────────── ──────────────────────── - Tot / % measured: 45.4ms / 73.6% 32.9MiB / 20.7% + Tot / % measured: 47.3ms / 73.6% 32.8MiB / 20.6% Section ncalls time %tot avg alloc %tot avg ──────────────────────────────────────────────────────────────────────────────── -backward_pass 40 20.5ms 61.2% 512μs 5.82MiB 85.7% 149KiB - solve_subproblem 160 11.7ms 35.1% 73.3μs 871KiB 12.5% 5.44KiB - get_dual_solution 160 549μs 1.6% 3.43μs 190KiB 2.7% 1.19KiB - prepare_backward... 160 27.7μs 0.1% 173ns 0.00B 0.0% 0.00B -forward_pass 40 7.77ms 23.3% 194μs 768KiB 11.0% 19.2KiB - solve_subproblem 120 6.92ms 20.7% 57.7μs 588KiB 8.4% 4.90KiB - get_dual_solution 120 71.6μs 0.2% 597ns 16.9KiB 0.2% 144B - sample_scenario 40 136μs 0.4% 3.39μs 24.5KiB 0.4% 628B -calculate_bound 40 5.16ms 15.4% 129μs 224KiB 3.2% 5.61KiB - get_dual_solution 40 32.4μs 0.1% 809ns 5.62KiB 0.1% 144B -get_dual_solution 36 20.2μs 0.1% 561ns 5.06KiB 0.1% 144B +backward_pass 40 21.1ms 60.7% 527μs 5.79MiB 85.6% 148KiB + solve_subproblem 160 12.0ms 34.6% 75.2μs 871KiB 12.6% 5.44KiB + get_dual_solution 160 576μs 1.7% 3.60μs 190KiB 2.7% 1.19KiB + prepare_backward... 
160 28.6μs 0.1% 179ns 0.00B 0.0% 0.00B +forward_pass 40 8.10ms 23.3% 203μs 768KiB 11.1% 19.2KiB + solve_subproblem 120 7.20ms 20.7% 60.0μs 588KiB 8.5% 4.90KiB + get_dual_solution 120 74.1μs 0.2% 617ns 16.9KiB 0.2% 144B + sample_scenario 40 134μs 0.4% 3.35μs 24.5KiB 0.4% 628B +calculate_bound 40 5.55ms 16.0% 139μs 224KiB 3.2% 5.61KiB + get_dual_solution 40 32.0μs 0.1% 800ns 5.62KiB 0.1% 144B +get_dual_solution 36 22.1μs 0.1% 614ns 5.06KiB 0.1% 144B ──────────────────────────────────────────────────────────────────────────────── ------------------------------------------------------------------- @@ -123,17 +123,17 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 5 -5.320000e+00 -2.396000e+01 7.283926e-03 52 1 - 10 -5.320000e+00 -2.396000e+01 1.131582e-02 92 1 - 15 -2.396000e+01 -2.396000e+01 1.582789e-02 132 1 - 20 -5.320000e+00 -2.396000e+01 2.079296e-02 172 1 - 25 -4.260000e+01 -2.396000e+01 2.714682e-02 224 1 - 30 -2.396000e+01 -2.396000e+01 3.319788e-02 264 1 - 35 -2.396000e+01 -2.396000e+01 3.973484e-02 304 1 - 40 -2.396000e+01 -2.396000e+01 4.689884e-02 344 1 + 5 -5.320000e+00 -2.396000e+01 7.560015e-03 52 1 + 10 -5.320000e+00 -2.396000e+01 1.166797e-02 92 1 + 15 -2.396000e+01 -2.396000e+01 1.627493e-02 132 1 + 20 -5.320000e+00 -2.396000e+01 2.146912e-02 172 1 + 25 -4.260000e+01 -2.396000e+01 2.804804e-02 224 1 + 30 -2.396000e+01 -2.396000e+01 3.438210e-02 264 1 + 35 -2.396000e+01 -2.396000e+01 4.120708e-02 304 1 + 40 -2.396000e+01 -2.396000e+01 4.858994e-02 344 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 4.689884e-02 +total time (s) : 4.858994e-02 total solves : 344 best bound : -2.396000e+01 simulation ci : -1.957570e+01 ± 3.890802e+00 @@ -143,21 +143,21 @@ numeric issues : 0 ──────────────────────────────────────────────────────────────────────────────── Time Allocations ─────────────────────── ──────────────────────── - Tot / % measured: 81.6ms / 52.0% 38.7MiB / 32.8% + Tot / % measured: 96.0ms / 45.5% 38.7MiB / 32.8% Section ncalls time %tot avg alloc %tot avg ──────────────────────────────────────────────────────────────────────────────── -backward_pass 40 28.6ms 67.5% 716μs 11.7MiB 92.3% 300KiB - solve_subproblem 160 12.2ms 28.7% 76.2μs 872KiB 6.7% 5.45KiB - get_dual_solution 160 573μs 1.4% 3.58μs 190KiB 1.5% 1.19KiB - prepare_backward... 160 27.9μs 0.1% 175ns 0.00B 0.0% 0.00B -forward_pass 40 8.04ms 19.0% 201μs 768KiB 5.9% 19.2KiB - solve_subproblem 120 7.15ms 16.9% 59.6μs 588KiB 4.5% 4.90KiB - get_dual_solution 120 69.5μs 0.2% 579ns 16.9KiB 0.1% 144B - sample_scenario 40 146μs 0.3% 3.65μs 24.2KiB 0.2% 620B -calculate_bound 40 5.73ms 13.5% 143μs 226KiB 1.7% 5.66KiB - get_dual_solution 40 33.6μs 0.1% 840ns 5.62KiB 0.0% 144B -get_dual_solution 36 19.5μs 0.0% 543ns 5.06KiB 0.0% 144B +backward_pass 40 29.6ms 67.8% 740μs 11.7MiB 92.3% 300KiB + solve_subproblem 160 12.5ms 28.5% 77.8μs 872KiB 6.7% 5.45KiB + get_dual_solution 160 594μs 1.4% 3.71μs 190KiB 1.5% 1.19KiB + prepare_backward... 
160 29.0μs 0.1% 181ns 0.00B 0.0% 0.00B +forward_pass 40 8.11ms 18.6% 203μs 768KiB 5.9% 19.2KiB + solve_subproblem 120 7.16ms 16.4% 59.7μs 588KiB 4.5% 4.90KiB + get_dual_solution 120 81.3μs 0.2% 678ns 16.9KiB 0.1% 144B + sample_scenario 40 138μs 0.3% 3.46μs 24.2KiB 0.2% 620B +calculate_bound 40 5.92ms 13.6% 148μs 226KiB 1.7% 5.66KiB + get_dual_solution 40 36.5μs 0.1% 913ns 5.62KiB 0.0% 144B +get_dual_solution 36 22.5μs 0.1% 624ns 5.06KiB 0.0% 144B ──────────────────────────────────────────────────────────────────────────────── ------------------------------------------------------------------- @@ -185,49 +185,49 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 0.000000e+00 -2.500000e+00 2.088070e-03 5 1 - 2 -1.500000e+00 -2.000000e+00 3.046989e-03 14 1 - 3 -1.000000e+00 -2.000000e+00 3.514051e-03 19 1 - 4 -1.000000e+00 -2.000000e+00 4.064083e-03 24 1 - 5 -2.000000e+00 -2.000000e+00 4.692078e-03 29 1 - 6 -2.000000e+00 -2.000000e+00 5.255938e-03 34 1 - 7 -2.000000e+00 -2.000000e+00 5.815983e-03 39 1 - 8 -2.000000e+00 -2.000000e+00 6.387949e-03 44 1 - 9 -2.000000e+00 -2.000000e+00 6.968975e-03 49 1 - 10 -2.000000e+00 -2.000000e+00 7.550955e-03 54 1 - 11 -2.000000e+00 -2.000000e+00 8.137941e-03 59 1 - 12 -2.000000e+00 -2.000000e+00 3.341103e-02 64 1 - 13 -2.000000e+00 -2.000000e+00 3.409100e-02 69 1 - 14 -2.000000e+00 -2.000000e+00 3.471088e-02 74 1 - 15 -2.000000e+00 -2.000000e+00 3.532791e-02 79 1 - 16 -2.000000e+00 -2.000000e+00 3.595495e-02 84 1 - 17 -2.000000e+00 -2.000000e+00 3.659987e-02 89 1 - 18 -2.000000e+00 -2.000000e+00 3.720689e-02 94 1 - 19 -2.000000e+00 -2.000000e+00 3.781796e-02 99 1 - 20 -2.000000e+00 -2.000000e+00 3.844690e-02 104 1 - 21 -2.000000e+00 -2.000000e+00 3.941703e-02 113 1 - 22 -2.000000e+00 -2.000000e+00 4.007101e-02 118 1 - 23 -2.000000e+00 -2.000000e+00 4.073691e-02 123 1 - 24 -2.000000e+00 -2.000000e+00 4.137492e-02 128 1 - 25 -2.000000e+00 -2.000000e+00 4.201698e-02 133 1 - 26 -2.000000e+00 -2.000000e+00 4.265690e-02 138 1 - 27 -2.000000e+00 -2.000000e+00 4.330206e-02 143 1 - 28 -2.000000e+00 -2.000000e+00 4.396486e-02 148 1 - 29 -2.000000e+00 -2.000000e+00 4.465604e-02 153 1 - 30 -2.000000e+00 -2.000000e+00 4.532099e-02 158 1 - 31 -2.000000e+00 -2.000000e+00 4.599285e-02 163 1 - 32 -2.000000e+00 -2.000000e+00 4.669189e-02 168 1 - 33 -2.000000e+00 -2.000000e+00 4.739308e-02 173 1 - 34 -2.000000e+00 -2.000000e+00 4.808497e-02 178 1 - 35 -2.000000e+00 -2.000000e+00 4.882288e-02 183 1 - 36 -2.000000e+00 -2.000000e+00 4.952788e-02 188 1 - 37 -2.000000e+00 -2.000000e+00 5.030203e-02 193 1 - 38 -2.000000e+00 -2.000000e+00 5.101895e-02 198 1 - 39 -2.000000e+00 -2.000000e+00 5.174398e-02 203 1 - 40 -2.000000e+00 -2.000000e+00 5.251098e-02 208 1 + 1 0.000000e+00 -2.500000e+00 2.446890e-03 5 1 + 2 -1.500000e+00 -2.000000e+00 3.638029e-03 14 1 + 3 -1.000000e+00 -2.000000e+00 4.174948e-03 19 1 + 4 -1.000000e+00 -2.000000e+00 4.792929e-03 24 1 + 5 -2.000000e+00 -2.000000e+00 5.443811e-03 29 1 + 6 -2.000000e+00 -2.000000e+00 6.070852e-03 34 1 + 7 -2.000000e+00 -2.000000e+00 6.673813e-03 39 1 + 8 -2.000000e+00 -2.000000e+00 7.333040e-03 44 1 + 9 -2.000000e+00 -2.000000e+00 8.004904e-03 49 1 + 10 -2.000000e+00 -2.000000e+00 8.627892e-03 54 1 + 11 -2.000000e+00 -2.000000e+00 9.289980e-03 59 1 + 12 -2.000000e+00 -2.000000e+00 9.917974e-03 64 1 + 13 -2.000000e+00 -2.000000e+00 1.052284e-02 69 1 + 14 -2.000000e+00 
-2.000000e+00 1.113391e-02 74 1 + 15 -2.000000e+00 -2.000000e+00 1.178598e-02 79 1 + 16 -2.000000e+00 -2.000000e+00 1.241088e-02 84 1 + 17 -2.000000e+00 -2.000000e+00 1.305103e-02 89 1 + 18 -2.000000e+00 -2.000000e+00 1.370597e-02 94 1 + 19 -2.000000e+00 -2.000000e+00 1.439500e-02 99 1 + 20 -2.000000e+00 -2.000000e+00 1.506090e-02 104 1 + 21 -2.000000e+00 -2.000000e+00 1.612496e-02 113 1 + 22 -2.000000e+00 -2.000000e+00 1.681995e-02 118 1 + 23 -2.000000e+00 -2.000000e+00 1.749897e-02 123 1 + 24 -2.000000e+00 -2.000000e+00 1.817894e-02 128 1 + 25 -2.000000e+00 -2.000000e+00 1.886582e-02 133 1 + 26 -2.000000e+00 -2.000000e+00 1.957393e-02 138 1 + 27 -2.000000e+00 -2.000000e+00 2.032304e-02 143 1 + 28 -2.000000e+00 -2.000000e+00 2.105188e-02 148 1 + 29 -2.000000e+00 -2.000000e+00 2.177191e-02 153 1 + 30 -2.000000e+00 -2.000000e+00 2.248287e-02 158 1 + 31 -2.000000e+00 -2.000000e+00 2.320004e-02 163 1 + 32 -2.000000e+00 -2.000000e+00 2.395988e-02 168 1 + 33 -2.000000e+00 -2.000000e+00 2.469087e-02 173 1 + 34 -2.000000e+00 -2.000000e+00 2.542090e-02 178 1 + 35 -2.000000e+00 -2.000000e+00 2.615786e-02 183 1 + 36 -2.000000e+00 -2.000000e+00 2.689791e-02 188 1 + 37 -2.000000e+00 -2.000000e+00 2.768397e-02 193 1 + 38 -2.000000e+00 -2.000000e+00 2.843690e-02 198 1 + 39 -2.000000e+00 -2.000000e+00 2.920985e-02 203 1 + 40 -2.000000e+00 -2.000000e+00 2.998590e-02 208 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 5.251098e-02 +total time (s) : 2.998590e-02 total solves : 208 best bound : -2.000000e+00 simulation ci : -1.887500e+00 ± 1.189300e-01 @@ -259,15 +259,15 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 2.390000e+02 6.304440e+01 1.046140e-01 183 1 - 33 9.486954e+02 2.352911e+02 1.240269e+00 8715 1 - 57 2.078912e+02 2.362690e+02 2.269883e+00 14703 1 - 73 5.064679e+02 2.363982e+02 3.289509e+00 20271 1 - 92 9.250459e+01 2.364272e+02 4.299796e+00 25200 1 - 100 1.135002e+02 2.364293e+02 4.608137e+00 26640 1 + 1 2.390000e+02 6.304440e+01 1.102281e-01 183 1 + 31 8.517170e+02 2.346450e+02 1.146162e+00 7701 1 + 52 2.121585e+02 2.361937e+02 2.149988e+00 13596 1 + 69 3.584855e+02 2.363821e+02 3.161391e+00 18603 1 + 78 3.060172e+02 2.364187e+02 4.170101e+00 22830 1 + 100 1.135002e+02 2.364293e+02 5.129353e+00 26640 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 4.608137e+00 +total time (s) : 5.129353e+00 total solves : 26640 best bound : 2.364293e+02 simulation ci : 2.593398e+02 ± 5.186931e+01 @@ -300,19 +300,19 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 -3.878303e+00 -4.434982e+00 1.972919e-01 1400 1 - 20 -4.262885e+00 -4.399265e+00 3.149569e-01 2800 1 - 30 -3.075162e+00 -4.382527e+00 4.390740e-01 4200 1 - 40 -3.761147e+00 -4.369587e+00 5.740230e-01 5600 1 - 50 -4.323162e+00 -4.362199e+00 7.140150e-01 7000 1 - 60 -3.654943e+00 -4.358401e+00 8.555930e-01 8400 1 - 70 -4.010883e+00 -4.357368e+00 9.986451e-01 9800 1 - 80 -4.314412e+00 -4.355714e+00 1.145507e+00 11200 1 - 90 -4.542422e+00 -4.353708e+00 1.298164e+00 12600 1 - 100 -4.178952e+00 -4.351685e+00 1.446283e+00 14000 1 + 10 -3.878303e+00 -4.434982e+00 1.981149e-01 1400 1 + 20 
-4.262885e+00 -4.399265e+00 3.170040e-01 2800 1 + 30 -3.075162e+00 -4.382527e+00 4.417539e-01 4200 1 + 40 -3.761147e+00 -4.369587e+00 5.741251e-01 5600 1 + 50 -4.323162e+00 -4.362199e+00 7.105660e-01 7000 1 + 60 -3.654943e+00 -4.358401e+00 8.499410e-01 8400 1 + 70 -4.010883e+00 -4.357368e+00 9.909499e-01 9800 1 + 80 -4.314412e+00 -4.355714e+00 1.136921e+00 11200 1 + 90 -4.542422e+00 -4.353708e+00 1.287445e+00 12600 1 + 100 -4.178952e+00 -4.351685e+00 1.432129e+00 14000 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 1.446283e+00 +total time (s) : 1.432129e+00 total solves : 14000 best bound : -4.351685e+00 simulation ci : -4.246786e+00 ± 8.703997e-02 @@ -344,16 +344,16 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 -1.573154e+00 -1.474247e+00 6.858993e-02 1050 1 - 20 -1.346690e+00 -1.471483e+00 1.077039e-01 1600 1 - 30 -1.308031e+00 -1.471307e+00 1.912889e-01 2650 1 - 40 -1.401200e+00 -1.471167e+00 2.330921e-01 3200 1 - 50 -1.557483e+00 -1.471097e+00 3.204689e-01 4250 1 - 60 -1.534169e+00 -1.471075e+00 3.659289e-01 4800 1 - 65 -1.689864e+00 -1.471075e+00 3.889520e-01 5075 1 + 10 -1.573154e+00 -1.474247e+00 6.874084e-02 1050 1 + 20 -1.346690e+00 -1.471483e+00 1.070879e-01 1600 1 + 30 -1.308031e+00 -1.471307e+00 1.898260e-01 2650 1 + 40 -1.401200e+00 -1.471167e+00 2.307410e-01 3200 1 + 50 -1.557483e+00 -1.471097e+00 3.172069e-01 4250 1 + 60 -1.534169e+00 -1.471075e+00 3.619289e-01 4800 1 + 65 -1.689864e+00 -1.471075e+00 3.849900e-01 5075 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 3.889520e-01 +total time (s) : 3.849900e-01 total solves : 5075 best bound : -1.471075e+00 simulation ci : -1.484094e+00 ± 4.058993e-02 @@ -387,14 +387,14 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 3.455904e+05 3.147347e+05 8.077145e-03 54 1 - 20 3.336455e+05 3.402383e+05 1.420999e-02 104 1 - 30 3.337559e+05 3.403155e+05 2.143812e-02 158 1 - 40 3.337559e+05 3.403155e+05 2.846503e-02 208 1 - 48 3.337559e+05 3.403155e+05 3.461409e-02 248 1 + 10 3.455904e+05 3.147347e+05 8.325100e-03 54 1 + 20 3.336455e+05 3.402383e+05 1.460004e-02 104 1 + 30 3.337559e+05 3.403155e+05 2.192307e-02 158 1 + 40 3.337559e+05 3.403155e+05 2.894115e-02 208 1 + 48 3.337559e+05 3.403155e+05 3.505611e-02 248 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 3.461409e-02 +total time (s) : 3.505611e-02 total solves : 248 best bound : 3.403155e+05 simulation ci : 1.351676e+08 ± 1.785770e+08 @@ -429,14 +429,14 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 4.403329e+05 3.509666e+05 1.348710e-02 92 1 - 20 4.055335e+05 4.054833e+05 2.436900e-02 172 1 - 30 3.959476e+05 4.067125e+05 3.793120e-02 264 1 - 40 3.959476e+05 4.067125e+05 5.155921e-02 344 1 - 47 3.959476e+05 4.067125e+05 6.194115e-02 400 1 + 10 4.403329e+05 3.509666e+05 1.380706e-02 92 1 + 20 4.055335e+05 4.054833e+05 2.451611e-02 172 1 + 30 3.959476e+05 4.067125e+05 3.781104e-02 264 
1 + 40 3.959476e+05 4.067125e+05 5.110097e-02 344 1 + 47 3.959476e+05 4.067125e+05 6.131101e-02 400 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 6.194115e-02 +total time (s) : 6.131101e-02 total solves : 400 best bound : 4.067125e+05 simulation ci : 2.695623e+07 ± 3.645336e+07 @@ -470,11 +470,11 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 8.316000e+03 0.000000e+00 9.383893e-02 14 1 - 40 4.716000e+03 4.074139e+03 2.813859e-01 776 1 + 1 8.316000e+03 0.000000e+00 9.661007e-02 14 1 + 40 4.716000e+03 4.074139e+03 2.235489e-01 776 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 2.813859e-01 +total time (s) : 2.235489e-01 total solves : 776 best bound : 4.074139e+03 simulation ci : 4.477341e+03 ± 6.593738e+02 @@ -507,11 +507,11 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1L 7.000000e+04 6.166667e+04 5.692489e-01 8 1 - 40L 5.500000e+04 6.250000e+04 8.180799e-01 344 1 + 1L 7.000000e+04 6.166667e+04 5.793221e-01 8 1 + 40L 5.500000e+04 6.250000e+04 8.212459e-01 344 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 8.180799e-01 +total time (s) : 8.212459e-01 total solves : 344 best bound : 6.250000e+04 simulation ci : 6.091250e+04 ± 6.325667e+03 @@ -544,11 +544,11 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 3.000000e+04 6.250000e+04 3.791094e-03 8 1 - 20 6.000000e+04 6.250000e+04 4.381514e-02 172 1 + 1 3.000000e+04 6.250000e+04 3.978014e-03 8 1 + 20 6.000000e+04 6.250000e+04 4.401493e-02 172 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 4.381514e-02 +total time (s) : 4.401493e-02 total solves : 172 best bound : 6.250000e+04 simulation ci : 5.675000e+04 ± 6.792430e+03 @@ -580,11 +580,11 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 7.000000e+04 6.250000e+04 5.373001e-03 5 1 - 10 4.000000e+04 6.250000e+04 1.991892e-02 50 1 + 1 7.000000e+04 6.250000e+04 5.740881e-03 5 1 + 10 4.000000e+04 6.250000e+04 2.025580e-02 50 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 1.991892e-02 +total time (s) : 2.025580e-02 total solves : 50 best bound : 6.250000e+04 simulation ci : 6.300000e+04 ± 1.505505e+04 @@ -617,11 +617,11 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1L 6.000000e+00 9.000000e+00 4.085207e-02 6 1 - 20L 9.000000e+00 9.000000e+00 8.351707e-02 123 1 + 1L 6.000000e+00 9.000000e+00 4.043484e-02 6 1 + 20L 9.000000e+00 9.000000e+00 8.175492e-02 123 1 ------------------------------------------------------------------- status : 
simulation_stopping -total time (s) : 8.351707e-02 +total time (s) : 8.175492e-02 total solves : 123 best bound : 9.000000e+00 simulation ci : 8.850000e+00 ± 2.940000e-01 @@ -653,17 +653,17 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 5 -5.684342e-14 1.184830e+00 1.325107e-02 87 1 - 10 5.012507e+01 1.508277e+00 1.979804e-02 142 1 - 15 -1.428571e+00 1.514085e+00 2.706099e-02 197 1 - 20 7.105427e-14 1.514085e+00 3.486300e-02 252 1 - 25 -3.979039e-13 1.514085e+00 9.059000e-02 339 1 - 30 -1.428571e+00 1.514085e+00 9.916711e-02 394 1 - 35 -1.428571e+00 1.514085e+00 1.083140e-01 449 1 - 40 0.000000e+00 1.514085e+00 1.179910e-01 504 1 + 5 -5.684342e-14 1.184830e+00 1.354599e-02 87 1 + 10 5.012507e+01 1.508277e+00 2.029800e-02 142 1 + 15 -1.428571e+00 1.514085e+00 2.757788e-02 197 1 + 20 7.105427e-14 1.514085e+00 3.523993e-02 252 1 + 25 -3.979039e-13 1.514085e+00 9.073496e-02 339 1 + 30 -1.428571e+00 1.514085e+00 9.931684e-02 394 1 + 35 -1.428571e+00 1.514085e+00 1.083419e-01 449 1 + 40 0.000000e+00 1.514085e+00 1.178699e-01 504 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 1.179910e-01 +total time (s) : 1.178699e-01 total solves : 504 best bound : 1.514085e+00 simulation ci : 2.863132e+00 ± 6.778637e+00 @@ -695,14 +695,14 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 1.100409e+00 1.301856e+00 1.567600e-01 278 1 - 20 1.263098e+01 1.278410e+00 1.765418e-01 428 1 - 30 -5.003795e+01 1.278410e+00 2.089930e-01 706 1 - 40 6.740000e+00 1.278410e+00 2.322638e-01 856 1 - 44 1.111084e+01 1.278410e+00 2.419748e-01 916 1 + 10 1.100409e+00 1.301856e+00 1.553259e-01 278 1 + 20 1.263098e+01 1.278410e+00 1.751239e-01 428 1 + 30 -5.003795e+01 1.278410e+00 2.076471e-01 706 1 + 40 6.740000e+00 1.278410e+00 2.307570e-01 856 1 + 44 1.111084e+01 1.278410e+00 2.404149e-01 916 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 2.419748e-01 +total time (s) : 2.404149e-01 total solves : 916 best bound : 1.278410e+00 simulation ci : 4.090025e+00 ± 5.358375e+00 @@ -734,13 +734,13 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 2.007061e+00 1.281639e+00 3.648210e-02 278 1 - 20 1.426676e+01 1.278410e+00 6.433511e-02 428 1 - 30 1.522212e+00 1.278410e+00 1.086230e-01 706 1 - 40 -4.523775e+01 1.278410e+00 1.463411e-01 856 1 + 10 2.007061e+00 1.281639e+00 3.743601e-02 278 1 + 20 1.426676e+01 1.278410e+00 6.516910e-02 428 1 + 30 1.522212e+00 1.278410e+00 1.093900e-01 706 1 + 40 -4.523775e+01 1.278410e+00 1.465080e-01 856 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 1.463411e-01 +total time (s) : 1.465080e-01 total solves : 856 best bound : 1.278410e+00 simulation ci : 1.019480e+00 ± 6.246418e+00 @@ -774,19 +774,19 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 
10 4.787277e+00 9.346930e+00 1.412701e+00 900 1 - 20 6.374753e+00 1.361934e+01 1.581513e+00 1720 1 - 30 2.813321e+01 1.651297e+01 1.908548e+00 3036 1 - 40 1.654759e+01 1.632970e+01 2.273542e+00 4192 1 - 50 3.570941e+00 1.846889e+01 2.540469e+00 5020 1 - 60 1.087425e+01 1.890254e+01 2.825104e+00 5808 1 - 70 9.381610e+00 1.940320e+01 3.118844e+00 6540 1 - 80 5.648731e+01 1.962435e+01 3.339116e+00 7088 1 - 90 3.879273e+01 1.981008e+01 3.830556e+00 8180 1 - 100 7.870187e+00 1.997117e+01 4.071501e+00 8664 1 + 10 4.787277e+00 9.346930e+00 1.441149e+00 900 1 + 20 6.374753e+00 1.361934e+01 1.609550e+00 1720 1 + 30 2.813321e+01 1.651297e+01 1.939074e+00 3036 1 + 40 1.654759e+01 1.632970e+01 2.302915e+00 4192 1 + 50 3.570941e+00 1.846889e+01 2.570651e+00 5020 1 + 60 1.087425e+01 1.890254e+01 2.844325e+00 5808 1 + 70 9.381610e+00 1.940320e+01 3.135714e+00 6540 1 + 80 5.648731e+01 1.962435e+01 3.353945e+00 7088 1 + 90 3.879273e+01 1.981008e+01 3.830138e+00 8180 1 + 100 7.870187e+00 1.997117e+01 4.067118e+00 8664 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 4.071501e+00 +total time (s) : 4.067118e+00 total solves : 8664 best bound : 1.997117e+01 simulation ci : 2.275399e+01 ± 4.541987e+00 @@ -821,17 +821,17 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 5 9.000000e+00 9.002950e+00 1.394131e-01 235 1 - 10 4.000000e+00 9.002950e+00 1.608541e-01 310 1 - 15 4.000000e+00 9.002950e+00 1.827869e-01 385 1 - 20 4.000000e+00 9.002950e+00 2.049830e-01 460 1 - 25 1.000000e+01 9.002950e+00 2.811651e-01 695 1 - 30 5.000000e+00 9.002950e+00 3.044569e-01 770 1 - 35 1.000000e+01 9.002950e+00 3.288610e-01 845 1 - 40 5.000000e+00 9.002950e+00 3.534269e-01 920 1 + 5 9.000000e+00 9.002950e+00 1.397161e-01 235 1 + 10 4.000000e+00 9.002950e+00 1.610041e-01 310 1 + 15 4.000000e+00 9.002950e+00 1.826000e-01 385 1 + 20 4.000000e+00 9.002950e+00 2.044320e-01 460 1 + 25 1.000000e+01 9.002950e+00 2.798271e-01 695 1 + 30 5.000000e+00 9.002950e+00 3.024180e-01 770 1 + 35 1.000000e+01 9.002950e+00 3.263021e-01 845 1 + 40 5.000000e+00 9.002950e+00 3.503611e-01 920 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 3.534269e-01 +total time (s) : 3.503611e-01 total solves : 920 best bound : 9.002950e+00 simulation ci : 6.375000e+00 ± 7.930178e-01 @@ -866,15 +866,15 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 1.000000e+01 6.868919e+00 1.072721e-01 510 1 - 20 2.000000e+00 6.834387e+00 1.617582e-01 720 1 - 30 1.200000e+01 6.834387e+00 3.313892e-01 1230 1 - 40 7.000000e+00 6.823805e+00 3.858352e-01 1440 1 - 50 7.000000e+00 6.823805e+00 5.363340e-01 1950 1 - 60 5.000000e+00 6.823805e+00 5.924511e-01 2160 1 + 10 1.000000e+01 6.868919e+00 1.075020e-01 510 1 + 20 2.000000e+00 6.834387e+00 1.609030e-01 720 1 + 30 1.200000e+01 6.834387e+00 3.048790e-01 1230 1 + 40 7.000000e+00 6.823805e+00 3.587730e-01 1440 1 + 50 7.000000e+00 6.823805e+00 5.341809e-01 1950 1 + 60 5.000000e+00 6.823805e+00 5.889809e-01 2160 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 5.924511e-01 +total time (s) : 5.889809e-01 total solves : 
2160 best bound : 6.823805e+00 simulation ci : 6.183333e+00 ± 6.258900e-01 @@ -908,15 +908,15 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 2.549668e+06 2.078257e+06 5.268471e-01 920 1 - 20 5.494568e+05 2.078257e+06 7.221272e-01 1340 1 - 30 4.985879e+04 2.078257e+06 1.268374e+00 2260 1 - 40 3.799447e+06 2.078257e+06 1.469243e+00 2680 1 - 50 1.049867e+06 2.078257e+06 2.024358e+00 3600 1 - 60 3.985191e+04 2.078257e+06 2.228389e+00 4020 1 + 10 2.549668e+06 2.078257e+06 5.136688e-01 920 1 + 20 5.494568e+05 2.078257e+06 7.072258e-01 1340 1 + 30 4.985879e+04 2.078257e+06 1.252619e+00 2260 1 + 40 3.799447e+06 2.078257e+06 1.450223e+00 2680 1 + 50 1.049867e+06 2.078257e+06 2.001015e+00 3600 1 + 60 3.985191e+04 2.078257e+06 2.201725e+00 4020 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 2.228389e+00 +total time (s) : 2.201725e+00 total solves : 4020 best bound : 2.078257e+06 simulation ci : 2.031697e+06 ± 3.922745e+05 @@ -950,15 +950,15 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10L 4.986663e+04 2.079119e+06 9.470429e-01 920 1 - 20L 3.799878e+06 2.079330e+06 1.769630e+00 1340 1 - 30L 3.003923e+04 2.079457e+06 2.897375e+00 2260 1 - 40L 5.549882e+06 2.079457e+06 3.708309e+00 2680 1 - 50L 2.799466e+06 2.079457e+06 4.901440e+00 3600 1 - 60L 3.549880e+06 2.079457e+06 5.673556e+00 4020 1 + 10L 4.986663e+04 2.079119e+06 9.293642e-01 920 1 + 20L 3.799878e+06 2.079330e+06 1.628068e+00 1340 1 + 30L 3.003923e+04 2.079457e+06 2.727293e+00 2260 1 + 40L 5.549882e+06 2.079457e+06 3.592740e+00 2680 1 + 50L 2.799466e+06 2.079457e+06 4.766337e+00 3600 1 + 60L 3.549880e+06 2.079457e+06 5.530339e+00 4020 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 5.673556e+00 +total time (s) : 5.530339e+00 total solves : 4020 best bound : 2.079457e+06 simulation ci : 2.352204e+06 ± 5.377531e+05 @@ -990,13 +990,13 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 100 2.500000e+01 1.188965e+02 8.112071e-01 1946 1 - 200 2.500000e+01 1.191634e+02 1.018552e+00 3920 1 - 300 0.000000e+00 1.191666e+02 1.229901e+00 5902 1 - 330 2.500000e+01 1.191667e+02 1.271405e+00 6224 1 + 100 2.500000e+01 1.188965e+02 7.920020e-01 1946 1 + 200 2.500000e+01 1.191634e+02 1.002857e+00 3920 1 + 300 0.000000e+00 1.191666e+02 1.212818e+00 5902 1 + 330 2.500000e+01 1.191667e+02 1.254757e+00 6224 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 1.271405e+00 +total time (s) : 1.254757e+00 total solves : 6224 best bound : 1.191667e+02 simulation ci : 2.158333e+01 ± 3.290252e+00 @@ -1028,12 +1028,12 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 100 0.000000e+00 1.191285e+02 2.937579e-01 2874 1 - 200 2.500000e+00 1.191666e+02 5.252440e-01 4855 1 - 282 7.500000e+00 1.191667e+02 6.570981e-01 5733 
1 + 100 0.000000e+00 1.191285e+02 2.993159e-01 2874 1 + 200 2.500000e+00 1.191666e+02 5.335588e-01 4855 1 + 282 7.500000e+00 1.191667e+02 6.701078e-01 5733 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 6.570981e-01 +total time (s) : 6.701078e-01 total solves : 5733 best bound : 1.191667e+02 simulation ci : 2.104610e+01 ± 3.492245e+00 @@ -1064,13 +1064,13 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 4.000000e+00 1.997089e+01 6.802011e-02 1204 1 - 20 8.000000e+00 2.000000e+01 8.882618e-02 1420 1 - 30 1.600000e+01 2.000000e+01 1.561842e-01 2628 1 - 40 8.000000e+00 2.000000e+01 1.775842e-01 2834 1 + 10 4.000000e+00 1.997089e+01 7.041693e-02 1204 1 + 20 8.000000e+00 2.000000e+01 9.154296e-02 1420 1 + 30 1.600000e+01 2.000000e+01 1.614270e-01 2628 1 + 40 8.000000e+00 2.000000e+01 1.835001e-01 2834 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 1.775842e-01 +total time (s) : 1.835001e-01 total solves : 2834 best bound : 2.000000e+01 simulation ci : 1.625000e+01 ± 4.766381e+00 @@ -1101,11 +1101,11 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 1.000000e+00 1.500000e+00 1.579046e-03 3 1 - 40 4.000000e+00 2.000000e+00 4.310894e-02 578 1 + 1 1.000000e+00 1.500000e+00 1.657009e-03 3 1 + 40 4.000000e+00 2.000000e+00 4.375005e-02 578 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 4.310894e-02 +total time (s) : 4.375005e-02 total solves : 578 best bound : 2.000000e+00 simulation ci : 1.950000e+00 ± 5.568095e-01 @@ -1138,138 +1138,137 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 5.250000e+00 4.888859e+00 1.683509e-01 1350 1 - 20 4.350000e+00 4.105855e+00 2.543921e-01 2700 1 - 30 5.000000e+00 4.100490e+00 3.491671e-01 4050 1 - 40 3.500000e+00 4.097376e+00 4.512410e-01 5400 1 - 50 5.250000e+00 4.095859e+00 5.562651e-01 6750 1 - 60 3.643750e+00 4.093342e+00 6.656621e-01 8100 1 - 70 2.643750e+00 4.091818e+00 7.767341e-01 9450 1 - 80 5.087500e+00 4.091591e+00 8.888431e-01 10800 1 - 90 5.062500e+00 4.091309e+00 1.001833e+00 12150 1 - 100 4.843750e+00 4.087004e+00 1.123285e+00 13500 1 - 110 3.437500e+00 4.086094e+00 1.244990e+00 14850 1 - 120 3.375000e+00 4.085926e+00 1.408693e+00 16200 1 - 130 5.025000e+00 4.085866e+00 1.534998e+00 17550 1 - 140 5.000000e+00 4.085734e+00 1.663436e+00 18900 1 - 150 3.500000e+00 4.085655e+00 1.794033e+00 20250 1 - 160 4.281250e+00 4.085454e+00 1.920855e+00 21600 1 - 170 4.562500e+00 4.085425e+00 2.050150e+00 22950 1 - 180 5.768750e+00 4.085425e+00 2.179474e+00 24300 1 - 190 3.468750e+00 4.085359e+00 2.315671e+00 25650 1 - 200 4.131250e+00 4.085225e+00 2.451789e+00 27000 1 - 210 4.512500e+00 4.085157e+00 2.584829e+00 28350 1 - 220 4.900000e+00 4.085153e+00 2.718612e+00 29700 1 - 230 4.025000e+00 4.085134e+00 2.857343e+00 31050 1 - 240 4.468750e+00 4.085116e+00 2.997363e+00 32400 1 - 250 4.062500e+00 4.085075e+00 3.135554e+00 33750 1 - 260 4.875000e+00 4.085037e+00 
3.276477e+00 35100 1 - 270 3.850000e+00 4.085011e+00 3.417285e+00 36450 1 - 280 4.912500e+00 4.084992e+00 3.559096e+00 37800 1 - 290 2.987500e+00 4.084986e+00 3.706907e+00 39150 1 - 300 3.825000e+00 4.084957e+00 3.857739e+00 40500 1 - 310 3.250000e+00 4.084911e+00 4.005768e+00 41850 1 - 320 3.600000e+00 4.084896e+00 4.189179e+00 43200 1 - 330 3.925000e+00 4.084896e+00 4.326860e+00 44550 1 - 340 4.500000e+00 4.084893e+00 4.471705e+00 45900 1 - 350 5.000000e+00 4.084891e+00 4.616697e+00 47250 1 - 360 3.075000e+00 4.084866e+00 4.760912e+00 48600 1 - 370 3.500000e+00 4.084861e+00 4.914282e+00 49950 1 - 380 3.356250e+00 4.084857e+00 5.065943e+00 51300 1 - 390 5.500000e+00 4.084846e+00 5.224891e+00 52650 1 - 400 4.475000e+00 4.084846e+00 5.375746e+00 54000 1 - 410 3.750000e+00 4.084843e+00 5.528786e+00 55350 1 - 420 3.687500e+00 4.084843e+00 5.685172e+00 56700 1 - 430 4.337500e+00 4.084825e+00 5.842672e+00 58050 1 - 440 5.750000e+00 4.084825e+00 5.985536e+00 59400 1 - 450 4.925000e+00 4.084792e+00 6.144700e+00 60750 1 - 460 3.600000e+00 4.084792e+00 6.301010e+00 62100 1 - 470 4.387500e+00 4.084792e+00 6.451552e+00 63450 1 - 480 4.000000e+00 4.084792e+00 6.612888e+00 64800 1 - 490 2.975000e+00 4.084788e+00 6.766935e+00 66150 1 - 500 3.125000e+00 4.084788e+00 6.926830e+00 67500 1 - 510 4.250000e+00 4.084788e+00 7.090429e+00 68850 1 - 520 4.512500e+00 4.084786e+00 7.242002e+00 70200 1 - 530 3.875000e+00 4.084786e+00 7.432644e+00 71550 1 - 540 4.387500e+00 4.084781e+00 7.593983e+00 72900 1 - 550 5.281250e+00 4.084780e+00 7.758549e+00 74250 1 - 560 4.650000e+00 4.084780e+00 7.910437e+00 75600 1 - 570 3.062500e+00 4.084780e+00 8.066783e+00 76950 1 - 580 3.187500e+00 4.084780e+00 8.217201e+00 78300 1 - 590 3.812500e+00 4.084780e+00 8.365221e+00 79650 1 - 600 3.637500e+00 4.084774e+00 8.526644e+00 81000 1 - 610 3.950000e+00 4.084765e+00 8.685438e+00 82350 1 - 620 4.625000e+00 4.084760e+00 8.844647e+00 83700 1 - 630 4.218750e+00 4.084760e+00 9.010699e+00 85050 1 - 640 3.025000e+00 4.084755e+00 9.176248e+00 86400 1 - 650 2.993750e+00 4.084751e+00 9.331018e+00 87750 1 - 660 3.262500e+00 4.084746e+00 9.488959e+00 89100 1 - 670 3.625000e+00 4.084746e+00 9.650297e+00 90450 1 - 680 2.981250e+00 4.084746e+00 9.813815e+00 91800 1 - 690 4.187500e+00 4.084746e+00 9.973102e+00 93150 1 - 700 4.500000e+00 4.084746e+00 1.013052e+01 94500 1 - 710 3.225000e+00 4.084746e+00 1.031526e+01 95850 1 - 720 4.375000e+00 4.084746e+00 1.047825e+01 97200 1 - 730 2.650000e+00 4.084746e+00 1.064420e+01 98550 1 - 740 3.250000e+00 4.084746e+00 1.080394e+01 99900 1 - 750 4.725000e+00 4.084746e+00 1.098249e+01 101250 1 - 760 3.375000e+00 4.084746e+00 1.115689e+01 102600 1 - 770 5.375000e+00 4.084746e+00 1.132852e+01 103950 1 - 780 4.068750e+00 4.084746e+00 1.150713e+01 105300 1 - 790 4.412500e+00 4.084746e+00 1.168515e+01 106650 1 - 800 4.350000e+00 4.084746e+00 1.185926e+01 108000 1 - 810 5.887500e+00 4.084746e+00 1.203388e+01 109350 1 - 820 4.912500e+00 4.084746e+00 1.220431e+01 110700 1 - 830 4.387500e+00 4.084746e+00 1.236742e+01 112050 1 - 840 3.675000e+00 4.084746e+00 1.253843e+01 113400 1 - 850 5.375000e+00 4.084746e+00 1.270327e+01 114750 1 - 860 3.562500e+00 4.084746e+00 1.287816e+01 116100 1 - 870 3.075000e+00 4.084746e+00 1.305300e+01 117450 1 - 880 3.625000e+00 4.084746e+00 1.324599e+01 118800 1 - 890 2.937500e+00 4.084746e+00 1.341205e+01 120150 1 - 900 4.450000e+00 4.084746e+00 1.358534e+01 121500 1 - 910 4.200000e+00 4.084746e+00 1.375676e+01 122850 1 - 920 3.687500e+00 4.084746e+00 1.393507e+01 124200 1 - 930 
4.725000e+00 4.084746e+00 1.411157e+01 125550 1 - 940 4.018750e+00 4.084746e+00 1.428147e+01 126900 1 - 950 4.675000e+00 4.084746e+00 1.444809e+01 128250 1 - 960 3.375000e+00 4.084746e+00 1.461212e+01 129600 1 - 970 3.812500e+00 4.084746e+00 1.477471e+01 130950 1 - 980 3.112500e+00 4.084746e+00 1.494355e+01 132300 1 - 990 3.600000e+00 4.084746e+00 1.511399e+01 133650 1 - 1000 5.500000e+00 4.084746e+00 1.529316e+01 135000 1 - 1010 3.187500e+00 4.084746e+00 1.546689e+01 136350 1 - 1020 4.900000e+00 4.084746e+00 1.565817e+01 137700 1 - 1030 3.637500e+00 4.084746e+00 1.584541e+01 139050 1 - 1040 3.975000e+00 4.084746e+00 1.602294e+01 140400 1 - 1050 4.750000e+00 4.084746e+00 1.620150e+01 141750 1 - 1060 4.437500e+00 4.084746e+00 1.639548e+01 143100 1 - 1070 5.000000e+00 4.084746e+00 1.657688e+01 144450 1 - 1080 4.143750e+00 4.084746e+00 1.676057e+01 145800 1 - 1090 5.625000e+00 4.084746e+00 1.693456e+01 147150 1 - 1100 3.475000e+00 4.084746e+00 1.711473e+01 148500 1 - 1110 4.156250e+00 4.084746e+00 1.730374e+01 149850 1 - 1120 4.450000e+00 4.084746e+00 1.748936e+01 151200 1 - 1130 3.312500e+00 4.084741e+00 1.767268e+01 152550 1 - 1140 5.375000e+00 4.084741e+00 1.784687e+01 153900 1 - 1150 4.800000e+00 4.084737e+00 1.805750e+01 155250 1 - 1160 3.300000e+00 4.084737e+00 1.824013e+01 156600 1 - 1170 4.356250e+00 4.084737e+00 1.842075e+01 157950 1 - 1180 3.900000e+00 4.084737e+00 1.860576e+01 159300 1 - 1190 4.450000e+00 4.084737e+00 1.879230e+01 160650 1 - 1200 5.156250e+00 4.084737e+00 1.897893e+01 162000 1 - 1210 4.500000e+00 4.084737e+00 1.915242e+01 163350 1 - 1220 4.875000e+00 4.084737e+00 1.935177e+01 164700 1 - 1230 4.000000e+00 4.084737e+00 1.953384e+01 166050 1 - 1240 4.062500e+00 4.084737e+00 1.972081e+01 167400 1 - 1250 5.450000e+00 4.084737e+00 1.991335e+01 168750 1 - 1255 3.693750e+00 4.084737e+00 2.002731e+01 169425 1 + 10 5.250000e+00 4.888859e+00 1.705430e-01 1350 1 + 20 4.350000e+00 4.105855e+00 2.555871e-01 2700 1 + 30 5.000000e+00 4.100490e+00 3.504701e-01 4050 1 + 40 3.500000e+00 4.097376e+00 4.541450e-01 5400 1 + 50 5.250000e+00 4.095859e+00 5.634019e-01 6750 1 + 60 3.643750e+00 4.093342e+00 6.772101e-01 8100 1 + 70 2.643750e+00 4.091818e+00 7.898800e-01 9450 1 + 80 5.087500e+00 4.091591e+00 9.059670e-01 10800 1 + 90 5.062500e+00 4.091309e+00 1.022321e+00 12150 1 + 100 4.843750e+00 4.087004e+00 1.147200e+00 13500 1 + 110 3.437500e+00 4.086094e+00 1.273122e+00 14850 1 + 120 3.375000e+00 4.085926e+00 1.401038e+00 16200 1 + 130 5.025000e+00 4.085866e+00 1.528921e+00 17550 1 + 140 5.000000e+00 4.085734e+00 1.657126e+00 18900 1 + 150 3.500000e+00 4.085655e+00 1.786854e+00 20250 1 + 160 4.281250e+00 4.085454e+00 1.919327e+00 21600 1 + 170 4.562500e+00 4.085425e+00 2.049539e+00 22950 1 + 180 5.768750e+00 4.085425e+00 2.179516e+00 24300 1 + 190 3.468750e+00 4.085359e+00 2.315713e+00 25650 1 + 200 4.131250e+00 4.085225e+00 2.450225e+00 27000 1 + 210 4.512500e+00 4.085157e+00 2.620751e+00 28350 1 + 220 4.900000e+00 4.085153e+00 2.755297e+00 29700 1 + 230 4.025000e+00 4.085134e+00 2.892293e+00 31050 1 + 240 4.468750e+00 4.085116e+00 3.035621e+00 32400 1 + 250 4.062500e+00 4.085075e+00 3.175253e+00 33750 1 + 260 4.875000e+00 4.085037e+00 3.317470e+00 35100 1 + 270 3.850000e+00 4.085011e+00 3.459588e+00 36450 1 + 280 4.912500e+00 4.084992e+00 3.602599e+00 37800 1 + 290 2.987500e+00 4.084986e+00 3.751416e+00 39150 1 + 300 3.825000e+00 4.084957e+00 3.901049e+00 40500 1 + 310 3.250000e+00 4.084911e+00 4.051168e+00 41850 1 + 320 3.600000e+00 4.084896e+00 4.199420e+00 43200 1 + 330 
3.925000e+00 4.084896e+00 4.338111e+00 44550 1 + 340 4.500000e+00 4.084893e+00 4.485896e+00 45900 1 + 350 5.000000e+00 4.084891e+00 4.637206e+00 47250 1 + 360 3.075000e+00 4.084866e+00 4.782855e+00 48600 1 + 370 3.500000e+00 4.084861e+00 4.940265e+00 49950 1 + 380 3.356250e+00 4.084857e+00 5.100719e+00 51300 1 + 390 5.500000e+00 4.084846e+00 5.264359e+00 52650 1 + 400 4.475000e+00 4.084846e+00 5.414810e+00 54000 1 + 410 3.750000e+00 4.084843e+00 5.566031e+00 55350 1 + 420 3.687500e+00 4.084843e+00 5.723553e+00 56700 1 + 430 4.337500e+00 4.084825e+00 5.882115e+00 58050 1 + 440 5.750000e+00 4.084825e+00 6.031330e+00 59400 1 + 450 4.925000e+00 4.084792e+00 6.232176e+00 60750 1 + 460 3.600000e+00 4.084792e+00 6.388376e+00 62100 1 + 470 4.387500e+00 4.084792e+00 6.539225e+00 63450 1 + 480 4.000000e+00 4.084792e+00 6.701254e+00 64800 1 + 490 2.975000e+00 4.084788e+00 6.855894e+00 66150 1 + 500 3.125000e+00 4.084788e+00 7.010362e+00 67500 1 + 510 4.250000e+00 4.084788e+00 7.175062e+00 68850 1 + 520 4.512500e+00 4.084786e+00 7.327173e+00 70200 1 + 530 3.875000e+00 4.084786e+00 7.490970e+00 71550 1 + 540 4.387500e+00 4.084781e+00 7.651197e+00 72900 1 + 550 5.281250e+00 4.084780e+00 7.814944e+00 74250 1 + 560 4.650000e+00 4.084780e+00 7.966079e+00 75600 1 + 570 3.062500e+00 4.084780e+00 8.121943e+00 76950 1 + 580 3.187500e+00 4.084780e+00 8.274240e+00 78300 1 + 590 3.812500e+00 4.084780e+00 8.426245e+00 79650 1 + 600 3.637500e+00 4.084774e+00 8.585113e+00 81000 1 + 610 3.950000e+00 4.084765e+00 8.743541e+00 82350 1 + 620 4.625000e+00 4.084760e+00 8.899034e+00 83700 1 + 630 4.218750e+00 4.084760e+00 9.059865e+00 85050 1 + 640 3.025000e+00 4.084755e+00 9.229526e+00 86400 1 + 650 2.993750e+00 4.084751e+00 9.381509e+00 87750 1 + 660 3.262500e+00 4.084746e+00 9.537901e+00 89100 1 + 670 3.625000e+00 4.084746e+00 9.698592e+00 90450 1 + 680 2.981250e+00 4.084746e+00 9.886476e+00 91800 1 + 690 4.187500e+00 4.084746e+00 1.004398e+01 93150 1 + 700 4.500000e+00 4.084746e+00 1.020858e+01 94500 1 + 710 3.225000e+00 4.084746e+00 1.036861e+01 95850 1 + 720 4.375000e+00 4.084746e+00 1.053133e+01 97200 1 + 730 2.650000e+00 4.084746e+00 1.070257e+01 98550 1 + 740 3.250000e+00 4.084746e+00 1.086710e+01 99900 1 + 750 4.725000e+00 4.084746e+00 1.104115e+01 101250 1 + 760 3.375000e+00 4.084746e+00 1.122765e+01 102600 1 + 770 5.375000e+00 4.084746e+00 1.139841e+01 103950 1 + 780 4.068750e+00 4.084746e+00 1.157084e+01 105300 1 + 790 4.412500e+00 4.084746e+00 1.174784e+01 106650 1 + 800 4.350000e+00 4.084746e+00 1.192131e+01 108000 1 + 810 5.887500e+00 4.084746e+00 1.209914e+01 109350 1 + 820 4.912500e+00 4.084746e+00 1.227945e+01 110700 1 + 830 4.387500e+00 4.084746e+00 1.244912e+01 112050 1 + 840 3.675000e+00 4.084746e+00 1.262362e+01 113400 1 + 850 5.375000e+00 4.084746e+00 1.279167e+01 114750 1 + 860 3.562500e+00 4.084746e+00 1.296710e+01 116100 1 + 870 3.075000e+00 4.084746e+00 1.317081e+01 117450 1 + 880 3.625000e+00 4.084746e+00 1.334154e+01 118800 1 + 890 2.937500e+00 4.084746e+00 1.350708e+01 120150 1 + 900 4.450000e+00 4.084746e+00 1.368155e+01 121500 1 + 910 4.200000e+00 4.084746e+00 1.385327e+01 122850 1 + 920 3.687500e+00 4.084746e+00 1.403149e+01 124200 1 + 930 4.725000e+00 4.084746e+00 1.420675e+01 125550 1 + 940 4.018750e+00 4.084746e+00 1.438224e+01 126900 1 + 950 4.675000e+00 4.084746e+00 1.454988e+01 128250 1 + 960 3.375000e+00 4.084746e+00 1.471849e+01 129600 1 + 970 3.812500e+00 4.084746e+00 1.488782e+01 130950 1 + 980 3.112500e+00 4.084746e+00 1.506232e+01 132300 1 + 990 3.600000e+00 4.084746e+00 
1.523514e+01 133650 1 + 1000 5.500000e+00 4.084746e+00 1.541459e+01 135000 1 + 1010 3.187500e+00 4.084746e+00 1.558135e+01 136350 1 + 1020 4.900000e+00 4.084746e+00 1.577494e+01 137700 1 + 1030 3.637500e+00 4.084746e+00 1.596098e+01 139050 1 + 1040 3.975000e+00 4.084746e+00 1.613504e+01 140400 1 + 1050 4.750000e+00 4.084746e+00 1.631285e+01 141750 1 + 1060 4.437500e+00 4.084746e+00 1.650971e+01 143100 1 + 1070 5.000000e+00 4.084746e+00 1.669296e+01 144450 1 + 1080 4.143750e+00 4.084746e+00 1.687549e+01 145800 1 + 1090 5.625000e+00 4.084746e+00 1.705005e+01 147150 1 + 1100 3.475000e+00 4.084746e+00 1.723784e+01 148500 1 + 1110 4.156250e+00 4.084746e+00 1.743519e+01 149850 1 + 1120 4.450000e+00 4.084746e+00 1.762047e+01 151200 1 + 1130 3.312500e+00 4.084741e+00 1.780914e+01 152550 1 + 1140 5.375000e+00 4.084741e+00 1.798710e+01 153900 1 + 1150 4.800000e+00 4.084737e+00 1.817912e+01 155250 1 + 1160 3.300000e+00 4.084737e+00 1.838765e+01 156600 1 + 1170 4.356250e+00 4.084737e+00 1.857317e+01 157950 1 + 1180 3.900000e+00 4.084737e+00 1.877175e+01 159300 1 + 1190 4.450000e+00 4.084737e+00 1.896955e+01 160650 1 + 1200 5.156250e+00 4.084737e+00 1.916040e+01 162000 1 + 1210 4.500000e+00 4.084737e+00 1.933501e+01 163350 1 + 1220 4.875000e+00 4.084737e+00 1.953455e+01 164700 1 + 1230 4.000000e+00 4.084737e+00 1.971705e+01 166050 1 + 1240 4.062500e+00 4.084737e+00 1.990034e+01 167400 1 + 1246 3.000000e+00 4.084737e+00 2.001316e+01 168210 1 ------------------------------------------------------------------- status : time_limit -total time (s) : 2.002731e+01 -total solves : 169425 +total time (s) : 2.001316e+01 +total solves : 168210 best bound : 4.084737e+00 -simulation ci : 4.071739e+00 ± 4.036551e-02 +simulation ci : 4.071445e+00 ± 4.036229e-02 numeric issues : 0 ------------------------------------------------------------------- @@ -1299,29 +1298,29 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 5.237500e+00 4.355124e+00 2.079720e-01 1350 1 - 20 3.162500e+00 4.048915e+00 5.789781e-01 2700 1 - 30 4.125000e+00 4.043948e+00 1.084584e+00 4050 1 - 40 2.975000e+00 4.041052e+00 1.732931e+00 5400 1 - 50 4.781250e+00 4.040641e+00 2.441468e+00 6750 1 - 60 5.156250e+00 4.040393e+00 3.348012e+00 8100 1 - 70 2.750000e+00 4.039305e+00 4.342502e+00 9450 1 - 80 4.225000e+00 4.039111e+00 5.449113e+00 10800 1 - 90 2.737500e+00 4.039025e+00 6.629321e+00 12150 1 - 100 4.006250e+00 4.038936e+00 8.631956e+00 13500 1 - 110 4.662500e+00 4.038867e+00 1.004983e+01 14850 1 - 120 4.300000e+00 4.038845e+00 1.156762e+01 16200 1 - 130 4.875000e+00 4.038784e+00 1.325238e+01 17550 1 - 140 3.975000e+00 4.038782e+00 1.500369e+01 18900 1 - 150 3.525000e+00 4.038772e+00 1.684451e+01 20250 1 - 160 4.100000e+00 4.037588e+00 1.884064e+01 21600 1 - 166 4.875000e+00 4.037588e+00 2.001337e+01 22410 1 + 10 4.512500e+00 4.066874e+00 2.024860e-01 1350 1 + 20 5.062500e+00 4.040569e+00 5.491850e-01 2700 1 + 30 4.968750e+00 4.039400e+00 1.065618e+00 4050 1 + 40 4.125000e+00 4.039286e+00 1.716066e+00 5400 1 + 50 3.925000e+00 4.039078e+00 2.594418e+00 6750 1 + 60 3.875000e+00 4.039004e+00 3.512367e+00 8100 1 + 70 3.918750e+00 4.039008e+00 4.632966e+00 9450 1 + 80 3.600000e+00 4.038911e+00 5.784783e+00 10800 1 + 90 4.250000e+00 4.038874e+00 7.099883e+00 12150 1 + 100 5.400000e+00 4.038820e+00 8.481728e+00 13500 1 + 110 3.000000e+00 4.038795e+00 1.000387e+01 14850 1 + 120 
3.000000e+00 4.038812e+00 1.159739e+01 16200 1 + 130 2.993750e+00 4.038782e+00 1.328464e+01 17550 1 + 140 4.406250e+00 4.038770e+00 1.515294e+01 18900 1 + 150 5.625000e+00 4.038777e+00 1.708383e+01 20250 1 + 160 3.081250e+00 4.038772e+00 1.906493e+01 21600 1 + 165 5.006250e+00 4.038772e+00 2.015715e+01 22275 1 ------------------------------------------------------------------- status : time_limit -total time (s) : 2.001337e+01 -total solves : 22410 -best bound : 4.037588e+00 -simulation ci : 4.057982e+00 ± 1.153783e-01 +total time (s) : 2.015715e+01 +total solves : 22275 +best bound : 4.038772e+00 +simulation ci : 4.070947e+00 ± 1.188614e-01 numeric issues : 0 ------------------------------------------------------------------- @@ -1352,21 +1351,18 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 3.090635e+00 1.166665e+00 3.902190e-01 1680 1 - 20 2.964782e+00 1.166901e+00 4.868731e-01 2560 1 - 30 3.053110e+00 1.166901e+00 8.868811e-01 4240 1 - 40 3.055726e+00 1.166901e+00 9.855461e-01 5120 1 - 50 2.904107e+00 1.166901e+00 1.386374e+00 6800 1 - 60 2.903935e+00 1.167416e+00 1.491052e+00 7680 1 - 70 3.268068e+00 1.167416e+00 1.896434e+00 9360 1 - 80 3.556081e+00 1.167416e+00 2.002634e+00 10240 1 - 82 3.444568e+00 1.167416e+00 2.023760e+00 10416 1 + 10 3.426289e+00 1.163128e+00 3.805249e-01 1680 1 + 20 2.386729e+00 1.163467e+00 4.746521e-01 2560 1 + 30 3.405925e+00 1.165481e+00 8.518538e-01 4240 1 + 40 3.219206e+00 1.165481e+00 9.531829e-01 5120 1 + 50 3.074686e+00 1.165481e+00 1.339746e+00 6800 1 + 60 3.224080e+00 1.165481e+00 1.440270e+00 7680 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 2.023760e+00 -total solves : 10416 -best bound : 1.167416e+00 -simulation ci : 3.228310e+00 ± 9.616073e-02 +total time (s) : 1.440270e+00 +total solves : 7680 +best bound : 1.165481e+00 +simulation ci : 3.299213e+00 ± 1.277496e-01 numeric issues : 0 ------------------------------------------------------------------- @@ -1398,16 +1394,16 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 -4.000000e+01 -5.809615e+01 3.294420e-02 78 1 - 20 -9.800000e+01 -5.809615e+01 6.572199e-02 148 1 - 30 -4.000000e+01 -5.809615e+01 1.048801e-01 226 1 - 40 -9.800000e+01 -5.809615e+01 1.386862e-01 296 1 + 10 -4.000000e+01 -5.809615e+01 3.150392e-02 78 1 + 20 -4.000000e+01 -5.809615e+01 6.383395e-02 148 1 + 30 -4.700000e+01 -5.809615e+01 1.036179e-01 226 1 + 40 -4.000000e+01 -5.809615e+01 1.382520e-01 296 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 1.386862e-01 +total time (s) : 1.382520e-01 total solves : 296 best bound : -5.809615e+01 -simulation ci : -5.086250e+01 ± 6.568136e+00 +simulation ci : -5.188750e+01 ± 7.419070e+00 numeric issues : 0 ------------------------------------------------------------------- @@ -1439,16 +1435,16 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 -9.800000e+01 -6.196125e+01 4.067993e-02 138 1 - 20 -8.200000e+01 -6.196125e+01 7.866907e-02 258 1 - 30 
-9.800000e+01 -6.196125e+01 1.297381e-01 396 1 - 40 -8.200000e+01 -6.196125e+01 1.685491e-01 516 1 + 10 -4.700000e+01 -6.196125e+01 4.114795e-02 138 1 + 20 -9.800000e+01 -6.196125e+01 7.786107e-02 258 1 + 30 -7.500000e+01 -6.196125e+01 1.281550e-01 396 1 + 40 -6.300000e+01 -6.196125e+01 1.660991e-01 516 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 1.685491e-01 +total time (s) : 1.660991e-01 total solves : 516 best bound : -6.196125e+01 -simulation ci : -5.836250e+01 ± 5.879370e+00 +simulation ci : -5.548750e+01 ± 5.312051e+00 numeric issues : 0 ------------------------------------------------------------------- @@ -1480,16 +1476,16 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 -4.000000e+01 -6.546793e+01 7.774401e-02 462 1 - 20 -4.000000e+01 -6.546793e+01 1.385040e-01 852 1 - 30 -5.400000e+01 -6.546793e+01 2.561040e-01 1314 1 - 40 -4.700000e+01 -6.546793e+01 3.174150e-01 1704 1 + 10 -8.200000e+01 -6.546793e+01 7.628012e-02 462 1 + 20 -7.000000e+01 -6.546793e+01 1.390240e-01 852 1 + 30 -6.300000e+01 -6.546793e+01 2.592950e-01 1314 1 + 40 -4.700000e+01 -6.546793e+01 3.213410e-01 1704 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 3.174150e-01 +total time (s) : 3.213410e-01 total solves : 1704 best bound : -6.546793e+01 -simulation ci : -6.113750e+01 ± 4.795224e+00 +simulation ci : -6.263750e+01 ± 5.346304e+00 numeric issues : 0 ------------------------------------------------------------------- @@ -1520,14 +1516,14 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1L 6.000000e+00 1.200000e+01 4.242802e-02 11 1 - 40L 6.000000e+00 8.000000e+00 4.905179e-01 602 1 + 1L 3.000000e+00 1.422222e+01 4.334402e-02 11 1 + 40L 6.000000e+00 8.000000e+00 5.495031e-01 602 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 4.905179e-01 +total time (s) : 5.495031e-01 total solves : 602 best bound : 8.000000e+00 -simulation ci : 8.250000e+00 ± 9.356503e-01 +simulation ci : 7.125000e+00 ± 7.499254e-01 numeric issues : 0 ------------------------------------------------------------------- @@ -1558,14 +1554,14 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 -9.800000e+04 4.922260e+05 8.815098e-02 6 1 - 40 1.670000e+05 1.083900e+05 1.176600e-01 240 1 + 1 -9.800000e+04 4.922260e+05 8.765697e-02 6 1 + 40 4.882000e+04 1.083900e+05 1.165640e-01 240 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 1.176600e-01 +total time (s) : 1.165640e-01 total solves : 240 best bound : 1.083900e+05 -simulation ci : 9.274388e+04 ± 1.962777e+04 +simulation ci : 1.002754e+05 ± 2.174010e+04 numeric issues : 0 ------------------------------------------------------------------- diff --git a/previews/PR810/examples/SDDP_0.0.log b/previews/PR810/examples/SDDP_0.0.log index 2b7d012bf..e10e14b57 100644 --- a/previews/PR810/examples/SDDP_0.0.log +++ 
b/previews/PR810/examples/SDDP_0.0.log @@ -19,11 +19,11 @@ subproblem structure ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 0.000000e+00 0.000000e+00 9.856939e-03 36 1 - 10 0.000000e+00 0.000000e+00 2.996993e-02 360 1 + 1 0.000000e+00 0.000000e+00 1.183295e-02 36 1 + 10 0.000000e+00 0.000000e+00 3.175783e-02 360 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 2.996993e-02 +total time (s) : 3.175783e-02 total solves : 360 best bound : 0.000000e+00 simulation ci : 0.000000e+00 ± 0.000000e+00 diff --git a/previews/PR810/examples/SDDP_0.0625.log b/previews/PR810/examples/SDDP_0.0625.log index e6d84e1bb..a6f445f51 100644 --- a/previews/PR810/examples/SDDP_0.0625.log +++ b/previews/PR810/examples/SDDP_0.0625.log @@ -20,11 +20,11 @@ subproblem structure ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 3.437500e+01 5.937500e+01 3.119946e-03 3375 1 - 10 3.750000e+01 5.938557e+01 3.180599e-02 3699 1 + 1 3.437500e+01 5.937500e+01 3.304005e-03 3375 1 + 10 3.750000e+01 5.938557e+01 3.113914e-02 3699 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 3.180599e-02 +total time (s) : 3.113914e-02 total solves : 3699 best bound : 5.938557e+01 simulation ci : 5.906250e+01 ± 1.352595e+01 diff --git a/previews/PR810/examples/SDDP_0.125.log b/previews/PR810/examples/SDDP_0.125.log index 458f83bc5..4ac7c56b8 100644 --- a/previews/PR810/examples/SDDP_0.125.log +++ b/previews/PR810/examples/SDDP_0.125.log @@ -20,11 +20,11 @@ subproblem structure ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 1.675000e+02 1.129545e+02 3.298998e-03 1891 1 - 10 1.362500e+02 1.129771e+02 3.119779e-02 2215 1 + 1 1.675000e+02 1.129545e+02 2.830982e-03 1891 1 + 10 1.362500e+02 1.129771e+02 3.002191e-02 2215 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 3.119779e-02 +total time (s) : 3.002191e-02 total solves : 2215 best bound : 1.129771e+02 simulation ci : 1.176375e+02 ± 1.334615e+01 diff --git a/previews/PR810/examples/SDDP_0.25.log b/previews/PR810/examples/SDDP_0.25.log index f9d7bfb08..dd4b02f44 100644 --- a/previews/PR810/examples/SDDP_0.25.log +++ b/previews/PR810/examples/SDDP_0.25.log @@ -20,11 +20,11 @@ subproblem structure ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 1.887500e+02 1.995243e+02 2.804041e-03 1149 1 - 10 2.962500e+02 2.052855e+02 3.136206e-02 1473 1 + 1 1.887500e+02 1.995243e+02 2.774954e-03 1149 1 + 10 2.962500e+02 2.052855e+02 2.955508e-02 1473 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 3.136206e-02 +total time (s) : 2.955508e-02 total solves : 1473 best bound : 2.052855e+02 simulation ci : 2.040201e+02 ± 3.876873e+01 diff --git a/previews/PR810/examples/SDDP_0.375.log b/previews/PR810/examples/SDDP_0.375.log index d3032f49f..31d98fc4c 100644 --- a/previews/PR810/examples/SDDP_0.375.log +++ 
b/previews/PR810/examples/SDDP_0.375.log @@ -20,11 +20,11 @@ subproblem structure ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 2.562500e+02 2.788373e+02 3.252983e-03 2262 1 - 10 2.375000e+02 2.795671e+02 3.367996e-02 2586 1 + 1 2.562500e+02 2.788373e+02 3.242016e-03 2262 1 + 10 2.375000e+02 2.795671e+02 3.334594e-02 2586 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 3.367996e-02 +total time (s) : 3.334594e-02 total solves : 2586 best bound : 2.795671e+02 simulation ci : 2.375000e+02 ± 3.099032e+01 diff --git a/previews/PR810/examples/SDDP_0.5.log b/previews/PR810/examples/SDDP_0.5.log index c22da1e56..9a673fd91 100644 --- a/previews/PR810/examples/SDDP_0.5.log +++ b/previews/PR810/examples/SDDP_0.5.log @@ -20,11 +20,11 @@ subproblem structure ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 4.850000e+02 3.349793e+02 3.589153e-03 778 1 - 10 3.550000e+02 3.468286e+02 3.234601e-02 1102 1 + 1 4.850000e+02 3.349793e+02 3.065825e-03 778 1 + 10 3.550000e+02 3.468286e+02 3.073287e-02 1102 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 3.234601e-02 +total time (s) : 3.073287e-02 total solves : 1102 best bound : 3.468286e+02 simulation ci : 3.948309e+02 ± 7.954180e+01 diff --git a/previews/PR810/examples/SDDP_0.625.log b/previews/PR810/examples/SDDP_0.625.log index 987a446d5..da405148f 100644 --- a/previews/PR810/examples/SDDP_0.625.log +++ b/previews/PR810/examples/SDDP_0.625.log @@ -20,11 +20,11 @@ subproblem structure ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 3.812500e+02 4.072952e+02 3.753901e-03 2633 1 - 10 5.818750e+02 4.080500e+02 3.627491e-02 2957 1 + 1 3.812500e+02 4.072952e+02 3.582001e-03 2633 1 + 10 5.818750e+02 4.080500e+02 3.523493e-02 2957 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 3.627491e-02 +total time (s) : 3.523493e-02 total solves : 2957 best bound : 4.080500e+02 simulation ci : 4.235323e+02 ± 1.029245e+02 diff --git a/previews/PR810/examples/SDDP_0.75.log b/previews/PR810/examples/SDDP_0.75.log index 262034fef..bb10740e2 100644 --- a/previews/PR810/examples/SDDP_0.75.log +++ b/previews/PR810/examples/SDDP_0.75.log @@ -20,11 +20,11 @@ subproblem structure ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 3.737500e+02 4.626061e+02 3.409863e-03 1520 1 - 10 2.450000e+02 4.658509e+02 3.441906e-02 1844 1 + 1 3.737500e+02 4.626061e+02 3.571033e-03 1520 1 + 10 2.450000e+02 4.658509e+02 3.385997e-02 1844 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 3.441906e-02 +total time (s) : 3.385997e-02 total solves : 1844 best bound : 4.658509e+02 simulation ci : 3.907376e+02 ± 9.045105e+01 diff --git a/previews/PR810/examples/SDDP_0.875.log b/previews/PR810/examples/SDDP_0.875.log index 3c7e3b2a8..3372fb3c8 100644 --- a/previews/PR810/examples/SDDP_0.875.log +++ 
b/previews/PR810/examples/SDDP_0.875.log @@ -20,11 +20,11 @@ subproblem structure ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 8.525000e+02 5.197742e+02 3.638983e-03 3004 1 - 10 4.493750e+02 5.211793e+02 3.738189e-02 3328 1 + 1 8.525000e+02 5.197742e+02 3.494978e-03 3004 1 + 10 4.493750e+02 5.211793e+02 3.625989e-02 3328 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 3.738189e-02 +total time (s) : 3.625989e-02 total solves : 3328 best bound : 5.211793e+02 simulation ci : 5.268125e+02 ± 1.227709e+02 diff --git a/previews/PR810/examples/SDDP_1.0.log b/previews/PR810/examples/SDDP_1.0.log index 6a54934d8..42fadcf39 100644 --- a/previews/PR810/examples/SDDP_1.0.log +++ b/previews/PR810/examples/SDDP_1.0.log @@ -20,11 +20,11 @@ subproblem structure ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 6.750000e+02 5.500000e+02 3.160954e-03 407 1 - 10 4.500000e+02 5.733959e+02 3.031683e-02 731 1 + 1 6.750000e+02 5.500000e+02 2.875090e-03 407 1 + 10 4.500000e+02 5.733959e+02 2.975512e-02 731 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 3.031683e-02 +total time (s) : 2.975512e-02 total solves : 731 best bound : 5.733959e+02 simulation ci : 5.000000e+02 ± 1.079583e+02 diff --git a/previews/PR810/examples/StochDynamicProgramming.jl_multistock/index.html b/previews/PR810/examples/StochDynamicProgramming.jl_multistock/index.html index a894a6818..0f8fb2b92 100644 --- a/previews/PR810/examples/StochDynamicProgramming.jl_multistock/index.html +++ b/previews/PR810/examples/StochDynamicProgramming.jl_multistock/index.html @@ -80,21 +80,21 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 -3.878303e+00 -4.434982e+00 1.972919e-01 1400 1 - 20 -4.262885e+00 -4.399265e+00 3.149569e-01 2800 1 - 30 -3.075162e+00 -4.382527e+00 4.390740e-01 4200 1 - 40 -3.761147e+00 -4.369587e+00 5.740230e-01 5600 1 - 50 -4.323162e+00 -4.362199e+00 7.140150e-01 7000 1 - 60 -3.654943e+00 -4.358401e+00 8.555930e-01 8400 1 - 70 -4.010883e+00 -4.357368e+00 9.986451e-01 9800 1 - 80 -4.314412e+00 -4.355714e+00 1.145507e+00 11200 1 - 90 -4.542422e+00 -4.353708e+00 1.298164e+00 12600 1 - 100 -4.178952e+00 -4.351685e+00 1.446283e+00 14000 1 + 10 -3.878303e+00 -4.434982e+00 1.981149e-01 1400 1 + 20 -4.262885e+00 -4.399265e+00 3.170040e-01 2800 1 + 30 -3.075162e+00 -4.382527e+00 4.417539e-01 4200 1 + 40 -3.761147e+00 -4.369587e+00 5.741251e-01 5600 1 + 50 -4.323162e+00 -4.362199e+00 7.105660e-01 7000 1 + 60 -3.654943e+00 -4.358401e+00 8.499410e-01 8400 1 + 70 -4.010883e+00 -4.357368e+00 9.909499e-01 9800 1 + 80 -4.314412e+00 -4.355714e+00 1.136921e+00 11200 1 + 90 -4.542422e+00 -4.353708e+00 1.287445e+00 12600 1 + 100 -4.178952e+00 -4.351685e+00 1.432129e+00 14000 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 1.446283e+00 +total time (s) : 1.432129e+00 total solves : 14000 best bound : -4.351685e+00 simulation ci : -4.246786e+00 ± 8.703997e-02 numeric issues : 0 -------------------------------------------------------------------- 
+------------------------------------------------------------------- diff --git a/previews/PR810/examples/StochDynamicProgramming.jl_stock/index.html b/previews/PR810/examples/StochDynamicProgramming.jl_stock/index.html index 6cee06f3b..6e01b88b2 100644 --- a/previews/PR810/examples/StochDynamicProgramming.jl_stock/index.html +++ b/previews/PR810/examples/StochDynamicProgramming.jl_stock/index.html @@ -57,18 +57,18 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 -1.573154e+00 -1.474247e+00 6.858993e-02 1050 1 - 20 -1.346690e+00 -1.471483e+00 1.077039e-01 1600 1 - 30 -1.308031e+00 -1.471307e+00 1.912889e-01 2650 1 - 40 -1.401200e+00 -1.471167e+00 2.330921e-01 3200 1 - 50 -1.557483e+00 -1.471097e+00 3.204689e-01 4250 1 - 60 -1.534169e+00 -1.471075e+00 3.659289e-01 4800 1 - 65 -1.689864e+00 -1.471075e+00 3.889520e-01 5075 1 + 10 -1.573154e+00 -1.474247e+00 6.874084e-02 1050 1 + 20 -1.346690e+00 -1.471483e+00 1.070879e-01 1600 1 + 30 -1.308031e+00 -1.471307e+00 1.898260e-01 2650 1 + 40 -1.401200e+00 -1.471167e+00 2.307410e-01 3200 1 + 50 -1.557483e+00 -1.471097e+00 3.172069e-01 4250 1 + 60 -1.534169e+00 -1.471075e+00 3.619289e-01 4800 1 + 65 -1.689864e+00 -1.471075e+00 3.849900e-01 5075 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 3.889520e-01 +total time (s) : 3.849900e-01 total solves : 5075 best bound : -1.471075e+00 simulation ci : -1.484094e+00 ± 4.058993e-02 numeric issues : 0 -------------------------------------------------------------------- +------------------------------------------------------------------- diff --git a/previews/PR810/examples/StructDualDynProg.jl_prob5.2_2stages/index.html b/previews/PR810/examples/StructDualDynProg.jl_prob5.2_2stages/index.html index 99b86436a..71e553680 100644 --- a/previews/PR810/examples/StructDualDynProg.jl_prob5.2_2stages/index.html +++ b/previews/PR810/examples/StructDualDynProg.jl_prob5.2_2stages/index.html @@ -85,16 +85,16 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 3.455904e+05 3.147347e+05 8.077145e-03 54 1 - 20 3.336455e+05 3.402383e+05 1.420999e-02 104 1 - 30 3.337559e+05 3.403155e+05 2.143812e-02 158 1 - 40 3.337559e+05 3.403155e+05 2.846503e-02 208 1 - 48 3.337559e+05 3.403155e+05 3.461409e-02 248 1 + 10 3.455904e+05 3.147347e+05 8.325100e-03 54 1 + 20 3.336455e+05 3.402383e+05 1.460004e-02 104 1 + 30 3.337559e+05 3.403155e+05 2.192307e-02 158 1 + 40 3.337559e+05 3.403155e+05 2.894115e-02 208 1 + 48 3.337559e+05 3.403155e+05 3.505611e-02 248 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 3.461409e-02 +total time (s) : 3.505611e-02 total solves : 248 best bound : 3.403155e+05 simulation ci : 1.351676e+08 ± 1.785770e+08 numeric issues : 0 -------------------------------------------------------------------- +------------------------------------------------------------------- diff --git a/previews/PR810/examples/StructDualDynProg.jl_prob5.2_3stages/index.html b/previews/PR810/examples/StructDualDynProg.jl_prob5.2_3stages/index.html index 9bbe94879..fb3720ef4 100644 --- a/previews/PR810/examples/StructDualDynProg.jl_prob5.2_3stages/index.html +++ 
b/previews/PR810/examples/StructDualDynProg.jl_prob5.2_3stages/index.html @@ -81,16 +81,16 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 4.403329e+05 3.509666e+05 1.348710e-02 92 1 - 20 4.055335e+05 4.054833e+05 2.436900e-02 172 1 - 30 3.959476e+05 4.067125e+05 3.793120e-02 264 1 - 40 3.959476e+05 4.067125e+05 5.155921e-02 344 1 - 47 3.959476e+05 4.067125e+05 6.194115e-02 400 1 + 10 4.403329e+05 3.509666e+05 1.380706e-02 92 1 + 20 4.055335e+05 4.054833e+05 2.451611e-02 172 1 + 30 3.959476e+05 4.067125e+05 3.781104e-02 264 1 + 40 3.959476e+05 4.067125e+05 5.110097e-02 344 1 + 47 3.959476e+05 4.067125e+05 6.131101e-02 400 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 6.194115e-02 +total time (s) : 6.131101e-02 total solves : 400 best bound : 4.067125e+05 simulation ci : 2.695623e+07 ± 3.645336e+07 numeric issues : 0 -------------------------------------------------------------------- +------------------------------------------------------------------- diff --git a/previews/PR810/examples/agriculture_mccardle_farm/index.html b/previews/PR810/examples/agriculture_mccardle_farm/index.html index b272e3593..b8c21f87a 100644 --- a/previews/PR810/examples/agriculture_mccardle_farm/index.html +++ b/previews/PR810/examples/agriculture_mccardle_farm/index.html @@ -124,4 +124,4 @@ @test SDDP.calculate_bound(model) ≈ 4074.1391 atol = 1e-5 end -test_mccardle_farm_model()
Test Passed
+test_mccardle_farm_model()
Test Passed
diff --git a/previews/PR810/examples/air_conditioning/index.html b/previews/PR810/examples/air_conditioning/index.html index 27ad80b69..a21b6b950 100644 --- a/previews/PR810/examples/air_conditioning/index.html +++ b/previews/PR810/examples/air_conditioning/index.html @@ -76,11 +76,11 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1L 7.000000e+04 6.166667e+04 5.692489e-01 8 1 - 40L 5.500000e+04 6.250000e+04 8.180799e-01 344 1 + 1L 7.000000e+04 6.166667e+04 5.793221e-01 8 1 + 40L 5.500000e+04 6.250000e+04 8.212459e-01 344 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 8.180799e-01 +total time (s) : 8.212459e-01 total solves : 344 best bound : 6.250000e+04 simulation ci : 6.091250e+04 ± 6.325667e+03 @@ -115,11 +115,11 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 3.000000e+04 6.250000e+04 3.791094e-03 8 1 - 20 6.000000e+04 6.250000e+04 4.381514e-02 172 1 + 1 3.000000e+04 6.250000e+04 3.978014e-03 8 1 + 20 6.000000e+04 6.250000e+04 4.401493e-02 172 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 4.381514e-02 +total time (s) : 4.401493e-02 total solves : 172 best bound : 6.250000e+04 simulation ci : 5.675000e+04 ± 6.792430e+03 @@ -127,4 +127,4 @@ ------------------------------------------------------------------- Lower bound is: 62500.0 -With first stage solutions 200.0 (production) and 100.0 (stored_production). +With first stage solutions 200.0 (production) and 100.0 (stored_production). diff --git a/previews/PR810/examples/air_conditioning_forward/index.html b/previews/PR810/examples/air_conditioning_forward/index.html index 70bfd0806..a7acbecea 100644 --- a/previews/PR810/examples/air_conditioning_forward/index.html +++ b/previews/PR810/examples/air_conditioning_forward/index.html @@ -37,4 +37,4 @@ iteration_limit = 10, ) Test.@test isapprox(SDDP.calculate_bound(non_convex), 62_500.0, atol = 0.1) -Test.@test isapprox(SDDP.calculate_bound(convex), 62_500.0, atol = 0.1)
Test Passed
+Test.@test isapprox(SDDP.calculate_bound(convex), 62_500.0, atol = 0.1)
Test Passed
diff --git a/previews/PR810/examples/all_blacks/index.html b/previews/PR810/examples/all_blacks/index.html index 43b8ccd49..dc5774f46 100644 --- a/previews/PR810/examples/all_blacks/index.html +++ b/previews/PR810/examples/all_blacks/index.html @@ -61,13 +61,13 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1L 6.000000e+00 9.000000e+00 4.085207e-02 6 1 - 20L 9.000000e+00 9.000000e+00 8.351707e-02 123 1 + 1L 6.000000e+00 9.000000e+00 4.043484e-02 6 1 + 20L 9.000000e+00 9.000000e+00 8.175492e-02 123 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 8.351707e-02 +total time (s) : 8.175492e-02 total solves : 123 best bound : 9.000000e+00 simulation ci : 8.850000e+00 ± 2.940000e-01 numeric issues : 0 -------------------------------------------------------------------- +------------------------------------------------------------------- diff --git a/previews/PR810/examples/asset_management_simple/index.html b/previews/PR810/examples/asset_management_simple/index.html index 1ffe3a185..c95ab9ab3 100644 --- a/previews/PR810/examples/asset_management_simple/index.html +++ b/previews/PR810/examples/asset_management_simple/index.html @@ -74,19 +74,19 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 5 -5.684342e-14 1.184830e+00 1.325107e-02 87 1 - 10 5.012507e+01 1.508277e+00 1.979804e-02 142 1 - 15 -1.428571e+00 1.514085e+00 2.706099e-02 197 1 - 20 7.105427e-14 1.514085e+00 3.486300e-02 252 1 - 25 -3.979039e-13 1.514085e+00 9.059000e-02 339 1 - 30 -1.428571e+00 1.514085e+00 9.916711e-02 394 1 - 35 -1.428571e+00 1.514085e+00 1.083140e-01 449 1 - 40 0.000000e+00 1.514085e+00 1.179910e-01 504 1 + 5 -5.684342e-14 1.184830e+00 1.354599e-02 87 1 + 10 5.012507e+01 1.508277e+00 2.029800e-02 142 1 + 15 -1.428571e+00 1.514085e+00 2.757788e-02 197 1 + 20 7.105427e-14 1.514085e+00 3.523993e-02 252 1 + 25 -3.979039e-13 1.514085e+00 9.073496e-02 339 1 + 30 -1.428571e+00 1.514085e+00 9.931684e-02 394 1 + 35 -1.428571e+00 1.514085e+00 1.083419e-01 449 1 + 40 0.000000e+00 1.514085e+00 1.178699e-01 504 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 1.179910e-01 +total time (s) : 1.178699e-01 total solves : 504 best bound : 1.514085e+00 simulation ci : 2.863132e+00 ± 6.778637e+00 numeric issues : 0 -------------------------------------------------------------------- +------------------------------------------------------------------- diff --git a/previews/PR810/examples/asset_management_stagewise/index.html b/previews/PR810/examples/asset_management_stagewise/index.html index 7fb128211..a721d9eec 100644 --- a/previews/PR810/examples/asset_management_stagewise/index.html +++ b/previews/PR810/examples/asset_management_stagewise/index.html @@ -91,14 +91,14 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 1.100409e+00 1.301856e+00 1.567600e-01 278 1 - 20 1.263098e+01 1.278410e+00 1.765418e-01 428 1 - 30 -5.003795e+01 1.278410e+00 2.089930e-01 706 1 - 40 6.740000e+00 1.278410e+00 2.322638e-01 856 1 - 44 1.111084e+01 1.278410e+00 2.419748e-01 916 1 + 10 1.100409e+00 
1.301856e+00 1.553259e-01 278 1 + 20 1.263098e+01 1.278410e+00 1.751239e-01 428 1 + 30 -5.003795e+01 1.278410e+00 2.076471e-01 706 1 + 40 6.740000e+00 1.278410e+00 2.307570e-01 856 1 + 44 1.111084e+01 1.278410e+00 2.404149e-01 916 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 2.419748e-01 +total time (s) : 2.404149e-01 total solves : 916 best bound : 1.278410e+00 simulation ci : 4.090025e+00 ± 5.358375e+00 @@ -130,15 +130,15 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 2.007061e+00 1.281639e+00 3.648210e-02 278 1 - 20 1.426676e+01 1.278410e+00 6.433511e-02 428 1 - 30 1.522212e+00 1.278410e+00 1.086230e-01 706 1 - 40 -4.523775e+01 1.278410e+00 1.463411e-01 856 1 + 10 2.007061e+00 1.281639e+00 3.743601e-02 278 1 + 20 1.426676e+01 1.278410e+00 6.516910e-02 428 1 + 30 1.522212e+00 1.278410e+00 1.093900e-01 706 1 + 40 -4.523775e+01 1.278410e+00 1.465080e-01 856 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 1.463411e-01 +total time (s) : 1.465080e-01 total solves : 856 best bound : 1.278410e+00 simulation ci : 1.019480e+00 ± 6.246418e+00 numeric issues : 0 -------------------------------------------------------------------- +------------------------------------------------------------------- diff --git a/previews/PR810/examples/belief/index.html b/previews/PR810/examples/belief/index.html index 1d9beaf8d..7a4d0acf2 100644 --- a/previews/PR810/examples/belief/index.html +++ b/previews/PR810/examples/belief/index.html @@ -94,21 +94,21 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 4.787277e+00 9.346930e+00 1.412701e+00 900 1 - 20 6.374753e+00 1.361934e+01 1.581513e+00 1720 1 - 30 2.813321e+01 1.651297e+01 1.908548e+00 3036 1 - 40 1.654759e+01 1.632970e+01 2.273542e+00 4192 1 - 50 3.570941e+00 1.846889e+01 2.540469e+00 5020 1 - 60 1.087425e+01 1.890254e+01 2.825104e+00 5808 1 - 70 9.381610e+00 1.940320e+01 3.118844e+00 6540 1 - 80 5.648731e+01 1.962435e+01 3.339116e+00 7088 1 - 90 3.879273e+01 1.981008e+01 3.830556e+00 8180 1 - 100 7.870187e+00 1.997117e+01 4.071501e+00 8664 1 + 10 4.787277e+00 9.346930e+00 1.441149e+00 900 1 + 20 6.374753e+00 1.361934e+01 1.609550e+00 1720 1 + 30 2.813321e+01 1.651297e+01 1.939074e+00 3036 1 + 40 1.654759e+01 1.632970e+01 2.302915e+00 4192 1 + 50 3.570941e+00 1.846889e+01 2.570651e+00 5020 1 + 60 1.087425e+01 1.890254e+01 2.844325e+00 5808 1 + 70 9.381610e+00 1.940320e+01 3.135714e+00 6540 1 + 80 5.648731e+01 1.962435e+01 3.353945e+00 7088 1 + 90 3.879273e+01 1.981008e+01 3.830138e+00 8180 1 + 100 7.870187e+00 1.997117e+01 4.067118e+00 8664 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 4.071501e+00 +total time (s) : 4.067118e+00 total solves : 8664 best bound : 1.997117e+01 simulation ci : 2.275399e+01 ± 4.541987e+00 numeric issues : 0 -------------------------------------------------------------------- +------------------------------------------------------------------- diff --git a/previews/PR810/examples/biobjective_hydro/index.html b/previews/PR810/examples/biobjective_hydro/index.html index af4b215cc..c0e83cf48 100644 --- 
a/previews/PR810/examples/biobjective_hydro/index.html +++ b/previews/PR810/examples/biobjective_hydro/index.html @@ -80,11 +80,11 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 0.000000e+00 0.000000e+00 9.856939e-03 36 1 - 10 0.000000e+00 0.000000e+00 2.996993e-02 360 1 + 1 0.000000e+00 0.000000e+00 1.183295e-02 36 1 + 10 0.000000e+00 0.000000e+00 3.175783e-02 360 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 2.996993e-02 +total time (s) : 3.175783e-02 total solves : 360 best bound : 0.000000e+00 simulation ci : 0.000000e+00 ± 0.000000e+00 @@ -113,11 +113,11 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 6.750000e+02 5.500000e+02 3.160954e-03 407 1 - 10 4.500000e+02 5.733959e+02 3.031683e-02 731 1 + 1 6.750000e+02 5.500000e+02 2.875090e-03 407 1 + 10 4.500000e+02 5.733959e+02 2.975512e-02 731 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 3.031683e-02 +total time (s) : 2.975512e-02 total solves : 731 best bound : 5.733959e+02 simulation ci : 5.000000e+02 ± 1.079583e+02 @@ -146,11 +146,11 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 4.850000e+02 3.349793e+02 3.589153e-03 778 1 - 10 3.550000e+02 3.468286e+02 3.234601e-02 1102 1 + 1 4.850000e+02 3.349793e+02 3.065825e-03 778 1 + 10 3.550000e+02 3.468286e+02 3.073287e-02 1102 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 3.234601e-02 +total time (s) : 3.073287e-02 total solves : 1102 best bound : 3.468286e+02 simulation ci : 3.948309e+02 ± 7.954180e+01 @@ -179,11 +179,11 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 1.887500e+02 1.995243e+02 2.804041e-03 1149 1 - 10 2.962500e+02 2.052855e+02 3.136206e-02 1473 1 + 1 1.887500e+02 1.995243e+02 2.774954e-03 1149 1 + 10 2.962500e+02 2.052855e+02 2.955508e-02 1473 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 3.136206e-02 +total time (s) : 2.955508e-02 total solves : 1473 best bound : 2.052855e+02 simulation ci : 2.040201e+02 ± 3.876873e+01 @@ -212,11 +212,11 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 3.737500e+02 4.626061e+02 3.409863e-03 1520 1 - 10 2.450000e+02 4.658509e+02 3.441906e-02 1844 1 + 1 3.737500e+02 4.626061e+02 3.571033e-03 1520 1 + 10 2.450000e+02 4.658509e+02 3.385997e-02 1844 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 3.441906e-02 +total time (s) : 3.385997e-02 total solves : 1844 best bound : 4.658509e+02 simulation ci : 3.907376e+02 ± 9.045105e+01 @@ -245,11 +245,11 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid 
------------------------------------------------------------------- - 1 1.675000e+02 1.129545e+02 3.298998e-03 1891 1 - 10 1.362500e+02 1.129771e+02 3.119779e-02 2215 1 + 1 1.675000e+02 1.129545e+02 2.830982e-03 1891 1 + 10 1.362500e+02 1.129771e+02 3.002191e-02 2215 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 3.119779e-02 +total time (s) : 3.002191e-02 total solves : 2215 best bound : 1.129771e+02 simulation ci : 1.176375e+02 ± 1.334615e+01 @@ -278,11 +278,11 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 2.562500e+02 2.788373e+02 3.252983e-03 2262 1 - 10 2.375000e+02 2.795671e+02 3.367996e-02 2586 1 + 1 2.562500e+02 2.788373e+02 3.242016e-03 2262 1 + 10 2.375000e+02 2.795671e+02 3.334594e-02 2586 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 3.367996e-02 +total time (s) : 3.334594e-02 total solves : 2586 best bound : 2.795671e+02 simulation ci : 2.375000e+02 ± 3.099032e+01 @@ -311,11 +311,11 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 3.812500e+02 4.072952e+02 3.753901e-03 2633 1 - 10 5.818750e+02 4.080500e+02 3.627491e-02 2957 1 + 1 3.812500e+02 4.072952e+02 3.582001e-03 2633 1 + 10 5.818750e+02 4.080500e+02 3.523493e-02 2957 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 3.627491e-02 +total time (s) : 3.523493e-02 total solves : 2957 best bound : 4.080500e+02 simulation ci : 4.235323e+02 ± 1.029245e+02 @@ -344,11 +344,11 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 8.525000e+02 5.197742e+02 3.638983e-03 3004 1 - 10 4.493750e+02 5.211793e+02 3.738189e-02 3328 1 + 1 8.525000e+02 5.197742e+02 3.494978e-03 3004 1 + 10 4.493750e+02 5.211793e+02 3.625989e-02 3328 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 3.738189e-02 +total time (s) : 3.625989e-02 total solves : 3328 best bound : 5.211793e+02 simulation ci : 5.268125e+02 ± 1.227709e+02 @@ -377,13 +377,13 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 3.437500e+01 5.937500e+01 3.119946e-03 3375 1 - 10 3.750000e+01 5.938557e+01 3.180599e-02 3699 1 + 1 3.437500e+01 5.937500e+01 3.304005e-03 3375 1 + 10 3.750000e+01 5.938557e+01 3.113914e-02 3699 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 3.180599e-02 +total time (s) : 3.113914e-02 total solves : 3699 best bound : 5.938557e+01 simulation ci : 5.906250e+01 ± 1.352595e+01 numeric issues : 0 -------------------------------------------------------------------- +------------------------------------------------------------------- diff --git a/previews/PR810/examples/booking_management/index.html b/previews/PR810/examples/booking_management/index.html index 908845260..173e770fc 100644 --- a/previews/PR810/examples/booking_management/index.html +++ 
b/previews/PR810/examples/booking_management/index.html @@ -96,4 +96,4 @@ end end -booking_management(SDDP.ContinuousConicDuality())
Test Passed

New version of HiGHS stalls booking_management(SDDP.LagrangianDuality())

+booking_management(SDDP.ContinuousConicDuality())
Test Passed

New version of HiGHS stalls booking_management(SDDP.LagrangianDuality())

diff --git a/previews/PR810/examples/generation_expansion/index.html b/previews/PR810/examples/generation_expansion/index.html index 7f7ba56e0..56b3ce415 100644 --- a/previews/PR810/examples/generation_expansion/index.html +++ b/previews/PR810/examples/generation_expansion/index.html @@ -115,15 +115,15 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 2.549668e+06 2.078257e+06 5.268471e-01 920 1 - 20 5.494568e+05 2.078257e+06 7.221272e-01 1340 1 - 30 4.985879e+04 2.078257e+06 1.268374e+00 2260 1 - 40 3.799447e+06 2.078257e+06 1.469243e+00 2680 1 - 50 1.049867e+06 2.078257e+06 2.024358e+00 3600 1 - 60 3.985191e+04 2.078257e+06 2.228389e+00 4020 1 + 10 2.549668e+06 2.078257e+06 5.136688e-01 920 1 + 20 5.494568e+05 2.078257e+06 7.072258e-01 1340 1 + 30 4.985879e+04 2.078257e+06 1.252619e+00 2260 1 + 40 3.799447e+06 2.078257e+06 1.450223e+00 2680 1 + 50 1.049867e+06 2.078257e+06 2.001015e+00 3600 1 + 60 3.985191e+04 2.078257e+06 2.201725e+00 4020 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 2.228389e+00 +total time (s) : 2.201725e+00 total solves : 4020 best bound : 2.078257e+06 simulation ci : 2.031697e+06 ± 3.922745e+05 @@ -157,17 +157,17 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10L 4.986663e+04 2.079119e+06 9.470429e-01 920 1 - 20L 3.799878e+06 2.079330e+06 1.769630e+00 1340 1 - 30L 3.003923e+04 2.079457e+06 2.897375e+00 2260 1 - 40L 5.549882e+06 2.079457e+06 3.708309e+00 2680 1 - 50L 2.799466e+06 2.079457e+06 4.901440e+00 3600 1 - 60L 3.549880e+06 2.079457e+06 5.673556e+00 4020 1 + 10L 4.986663e+04 2.079119e+06 9.293642e-01 920 1 + 20L 3.799878e+06 2.079330e+06 1.628068e+00 1340 1 + 30L 3.003923e+04 2.079457e+06 2.727293e+00 2260 1 + 40L 5.549882e+06 2.079457e+06 3.592740e+00 2680 1 + 50L 2.799466e+06 2.079457e+06 4.766337e+00 3600 1 + 60L 3.549880e+06 2.079457e+06 5.530339e+00 4020 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 5.673556e+00 +total time (s) : 5.530339e+00 total solves : 4020 best bound : 2.079457e+06 simulation ci : 2.352204e+06 ± 5.377531e+05 numeric issues : 0 -------------------------------------------------------------------- +------------------------------------------------------------------- diff --git a/previews/PR810/examples/hydro_valley/index.html b/previews/PR810/examples/hydro_valley/index.html index bcd7f3f5a..33dfe404a 100644 --- a/previews/PR810/examples/hydro_valley/index.html +++ b/previews/PR810/examples/hydro_valley/index.html @@ -280,4 +280,4 @@ ### = $835 end -test_hydro_valley_model()
Test Passed
+test_hydro_valley_model()
Test Passed
diff --git a/previews/PR810/examples/infinite_horizon_hydro_thermal/index.html b/previews/PR810/examples/infinite_horizon_hydro_thermal/index.html index 14df05630..4862233b2 100644 --- a/previews/PR810/examples/infinite_horizon_hydro_thermal/index.html +++ b/previews/PR810/examples/infinite_horizon_hydro_thermal/index.html @@ -93,13 +93,13 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 100 2.500000e+01 1.188965e+02 8.112071e-01 1946 1 - 200 2.500000e+01 1.191634e+02 1.018552e+00 3920 1 - 300 0.000000e+00 1.191666e+02 1.229901e+00 5902 1 - 330 2.500000e+01 1.191667e+02 1.271405e+00 6224 1 + 100 2.500000e+01 1.188965e+02 7.920020e-01 1946 1 + 200 2.500000e+01 1.191634e+02 1.002857e+00 3920 1 + 300 0.000000e+00 1.191666e+02 1.212818e+00 5902 1 + 330 2.500000e+01 1.191667e+02 1.254757e+00 6224 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 1.271405e+00 +total time (s) : 1.254757e+00 total solves : 6224 best bound : 1.191667e+02 simulation ci : 2.158333e+01 ± 3.290252e+00 @@ -132,16 +132,16 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 100 0.000000e+00 1.191285e+02 2.937579e-01 2874 1 - 200 2.500000e+00 1.191666e+02 5.252440e-01 4855 1 - 282 7.500000e+00 1.191667e+02 6.570981e-01 5733 1 + 100 0.000000e+00 1.191285e+02 2.993159e-01 2874 1 + 200 2.500000e+00 1.191666e+02 5.335588e-01 4855 1 + 282 7.500000e+00 1.191667e+02 6.701078e-01 5733 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 6.570981e-01 +total time (s) : 6.701078e-01 total solves : 5733 best bound : 1.191667e+02 simulation ci : 2.104610e+01 ± 3.492245e+00 numeric issues : 0 ------------------------------------------------------------------- -Confidence_interval = 116.06 ± 13.65 +Confidence_interval = 116.06 ± 13.65 diff --git a/previews/PR810/examples/infinite_horizon_trivial/index.html b/previews/PR810/examples/infinite_horizon_trivial/index.html index 7ddb7f847..4071abb72 100644 --- a/previews/PR810/examples/infinite_horizon_trivial/index.html +++ b/previews/PR810/examples/infinite_horizon_trivial/index.html @@ -49,15 +49,15 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 4.000000e+00 1.997089e+01 6.802011e-02 1204 1 - 20 8.000000e+00 2.000000e+01 8.882618e-02 1420 1 - 30 1.600000e+01 2.000000e+01 1.561842e-01 2628 1 - 40 8.000000e+00 2.000000e+01 1.775842e-01 2834 1 + 10 4.000000e+00 1.997089e+01 7.041693e-02 1204 1 + 20 8.000000e+00 2.000000e+01 9.154296e-02 1420 1 + 30 1.600000e+01 2.000000e+01 1.614270e-01 2628 1 + 40 8.000000e+00 2.000000e+01 1.835001e-01 2834 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 1.775842e-01 +total time (s) : 1.835001e-01 total solves : 2834 best bound : 2.000000e+01 simulation ci : 1.625000e+01 ± 4.766381e+00 numeric issues : 0 -------------------------------------------------------------------- +------------------------------------------------------------------- diff --git a/previews/PR810/examples/no_strong_duality/index.html 
b/previews/PR810/examples/no_strong_duality/index.html index 4d6bc5e7c..0475fab73 100644 --- a/previews/PR810/examples/no_strong_duality/index.html +++ b/previews/PR810/examples/no_strong_duality/index.html @@ -48,13 +48,13 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 1.000000e+00 1.500000e+00 1.579046e-03 3 1 - 40 4.000000e+00 2.000000e+00 4.310894e-02 578 1 + 1 1.000000e+00 1.500000e+00 1.657009e-03 3 1 + 40 4.000000e+00 2.000000e+00 4.375005e-02 578 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 4.310894e-02 +total time (s) : 4.375005e-02 total solves : 578 best bound : 2.000000e+00 simulation ci : 1.950000e+00 ± 5.568095e-01 numeric issues : 0 -------------------------------------------------------------------- +------------------------------------------------------------------- diff --git a/previews/PR810/examples/objective_state_newsvendor/index.html b/previews/PR810/examples/objective_state_newsvendor/index.html index 723f3dcc3..13b698282 100644 --- a/previews/PR810/examples/objective_state_newsvendor/index.html +++ b/previews/PR810/examples/objective_state_newsvendor/index.html @@ -93,138 +93,137 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 5.250000e+00 4.888859e+00 1.683509e-01 1350 1 - 20 4.350000e+00 4.105855e+00 2.543921e-01 2700 1 - 30 5.000000e+00 4.100490e+00 3.491671e-01 4050 1 - 40 3.500000e+00 4.097376e+00 4.512410e-01 5400 1 - 50 5.250000e+00 4.095859e+00 5.562651e-01 6750 1 - 60 3.643750e+00 4.093342e+00 6.656621e-01 8100 1 - 70 2.643750e+00 4.091818e+00 7.767341e-01 9450 1 - 80 5.087500e+00 4.091591e+00 8.888431e-01 10800 1 - 90 5.062500e+00 4.091309e+00 1.001833e+00 12150 1 - 100 4.843750e+00 4.087004e+00 1.123285e+00 13500 1 - 110 3.437500e+00 4.086094e+00 1.244990e+00 14850 1 - 120 3.375000e+00 4.085926e+00 1.408693e+00 16200 1 - 130 5.025000e+00 4.085866e+00 1.534998e+00 17550 1 - 140 5.000000e+00 4.085734e+00 1.663436e+00 18900 1 - 150 3.500000e+00 4.085655e+00 1.794033e+00 20250 1 - 160 4.281250e+00 4.085454e+00 1.920855e+00 21600 1 - 170 4.562500e+00 4.085425e+00 2.050150e+00 22950 1 - 180 5.768750e+00 4.085425e+00 2.179474e+00 24300 1 - 190 3.468750e+00 4.085359e+00 2.315671e+00 25650 1 - 200 4.131250e+00 4.085225e+00 2.451789e+00 27000 1 - 210 4.512500e+00 4.085157e+00 2.584829e+00 28350 1 - 220 4.900000e+00 4.085153e+00 2.718612e+00 29700 1 - 230 4.025000e+00 4.085134e+00 2.857343e+00 31050 1 - 240 4.468750e+00 4.085116e+00 2.997363e+00 32400 1 - 250 4.062500e+00 4.085075e+00 3.135554e+00 33750 1 - 260 4.875000e+00 4.085037e+00 3.276477e+00 35100 1 - 270 3.850000e+00 4.085011e+00 3.417285e+00 36450 1 - 280 4.912500e+00 4.084992e+00 3.559096e+00 37800 1 - 290 2.987500e+00 4.084986e+00 3.706907e+00 39150 1 - 300 3.825000e+00 4.084957e+00 3.857739e+00 40500 1 - 310 3.250000e+00 4.084911e+00 4.005768e+00 41850 1 - 320 3.600000e+00 4.084896e+00 4.189179e+00 43200 1 - 330 3.925000e+00 4.084896e+00 4.326860e+00 44550 1 - 340 4.500000e+00 4.084893e+00 4.471705e+00 45900 1 - 350 5.000000e+00 4.084891e+00 4.616697e+00 47250 1 - 360 3.075000e+00 4.084866e+00 4.760912e+00 48600 1 - 370 3.500000e+00 4.084861e+00 4.914282e+00 49950 1 - 380 3.356250e+00 4.084857e+00 5.065943e+00 51300 1 - 390 5.500000e+00 4.084846e+00 
5.224891e+00 52650 1 - 400 4.475000e+00 4.084846e+00 5.375746e+00 54000 1 - 410 3.750000e+00 4.084843e+00 5.528786e+00 55350 1 - 420 3.687500e+00 4.084843e+00 5.685172e+00 56700 1 - 430 4.337500e+00 4.084825e+00 5.842672e+00 58050 1 - 440 5.750000e+00 4.084825e+00 5.985536e+00 59400 1 - 450 4.925000e+00 4.084792e+00 6.144700e+00 60750 1 - 460 3.600000e+00 4.084792e+00 6.301010e+00 62100 1 - 470 4.387500e+00 4.084792e+00 6.451552e+00 63450 1 - 480 4.000000e+00 4.084792e+00 6.612888e+00 64800 1 - 490 2.975000e+00 4.084788e+00 6.766935e+00 66150 1 - 500 3.125000e+00 4.084788e+00 6.926830e+00 67500 1 - 510 4.250000e+00 4.084788e+00 7.090429e+00 68850 1 - 520 4.512500e+00 4.084786e+00 7.242002e+00 70200 1 - 530 3.875000e+00 4.084786e+00 7.432644e+00 71550 1 - 540 4.387500e+00 4.084781e+00 7.593983e+00 72900 1 - 550 5.281250e+00 4.084780e+00 7.758549e+00 74250 1 - 560 4.650000e+00 4.084780e+00 7.910437e+00 75600 1 - 570 3.062500e+00 4.084780e+00 8.066783e+00 76950 1 - 580 3.187500e+00 4.084780e+00 8.217201e+00 78300 1 - 590 3.812500e+00 4.084780e+00 8.365221e+00 79650 1 - 600 3.637500e+00 4.084774e+00 8.526644e+00 81000 1 - 610 3.950000e+00 4.084765e+00 8.685438e+00 82350 1 - 620 4.625000e+00 4.084760e+00 8.844647e+00 83700 1 - 630 4.218750e+00 4.084760e+00 9.010699e+00 85050 1 - 640 3.025000e+00 4.084755e+00 9.176248e+00 86400 1 - 650 2.993750e+00 4.084751e+00 9.331018e+00 87750 1 - 660 3.262500e+00 4.084746e+00 9.488959e+00 89100 1 - 670 3.625000e+00 4.084746e+00 9.650297e+00 90450 1 - 680 2.981250e+00 4.084746e+00 9.813815e+00 91800 1 - 690 4.187500e+00 4.084746e+00 9.973102e+00 93150 1 - 700 4.500000e+00 4.084746e+00 1.013052e+01 94500 1 - 710 3.225000e+00 4.084746e+00 1.031526e+01 95850 1 - 720 4.375000e+00 4.084746e+00 1.047825e+01 97200 1 - 730 2.650000e+00 4.084746e+00 1.064420e+01 98550 1 - 740 3.250000e+00 4.084746e+00 1.080394e+01 99900 1 - 750 4.725000e+00 4.084746e+00 1.098249e+01 101250 1 - 760 3.375000e+00 4.084746e+00 1.115689e+01 102600 1 - 770 5.375000e+00 4.084746e+00 1.132852e+01 103950 1 - 780 4.068750e+00 4.084746e+00 1.150713e+01 105300 1 - 790 4.412500e+00 4.084746e+00 1.168515e+01 106650 1 - 800 4.350000e+00 4.084746e+00 1.185926e+01 108000 1 - 810 5.887500e+00 4.084746e+00 1.203388e+01 109350 1 - 820 4.912500e+00 4.084746e+00 1.220431e+01 110700 1 - 830 4.387500e+00 4.084746e+00 1.236742e+01 112050 1 - 840 3.675000e+00 4.084746e+00 1.253843e+01 113400 1 - 850 5.375000e+00 4.084746e+00 1.270327e+01 114750 1 - 860 3.562500e+00 4.084746e+00 1.287816e+01 116100 1 - 870 3.075000e+00 4.084746e+00 1.305300e+01 117450 1 - 880 3.625000e+00 4.084746e+00 1.324599e+01 118800 1 - 890 2.937500e+00 4.084746e+00 1.341205e+01 120150 1 - 900 4.450000e+00 4.084746e+00 1.358534e+01 121500 1 - 910 4.200000e+00 4.084746e+00 1.375676e+01 122850 1 - 920 3.687500e+00 4.084746e+00 1.393507e+01 124200 1 - 930 4.725000e+00 4.084746e+00 1.411157e+01 125550 1 - 940 4.018750e+00 4.084746e+00 1.428147e+01 126900 1 - 950 4.675000e+00 4.084746e+00 1.444809e+01 128250 1 - 960 3.375000e+00 4.084746e+00 1.461212e+01 129600 1 - 970 3.812500e+00 4.084746e+00 1.477471e+01 130950 1 - 980 3.112500e+00 4.084746e+00 1.494355e+01 132300 1 - 990 3.600000e+00 4.084746e+00 1.511399e+01 133650 1 - 1000 5.500000e+00 4.084746e+00 1.529316e+01 135000 1 - 1010 3.187500e+00 4.084746e+00 1.546689e+01 136350 1 - 1020 4.900000e+00 4.084746e+00 1.565817e+01 137700 1 - 1030 3.637500e+00 4.084746e+00 1.584541e+01 139050 1 - 1040 3.975000e+00 4.084746e+00 1.602294e+01 140400 1 - 1050 4.750000e+00 4.084746e+00 1.620150e+01 141750 
1 - 1060 4.437500e+00 4.084746e+00 1.639548e+01 143100 1 - 1070 5.000000e+00 4.084746e+00 1.657688e+01 144450 1 - 1080 4.143750e+00 4.084746e+00 1.676057e+01 145800 1 - 1090 5.625000e+00 4.084746e+00 1.693456e+01 147150 1 - 1100 3.475000e+00 4.084746e+00 1.711473e+01 148500 1 - 1110 4.156250e+00 4.084746e+00 1.730374e+01 149850 1 - 1120 4.450000e+00 4.084746e+00 1.748936e+01 151200 1 - 1130 3.312500e+00 4.084741e+00 1.767268e+01 152550 1 - 1140 5.375000e+00 4.084741e+00 1.784687e+01 153900 1 - 1150 4.800000e+00 4.084737e+00 1.805750e+01 155250 1 - 1160 3.300000e+00 4.084737e+00 1.824013e+01 156600 1 - 1170 4.356250e+00 4.084737e+00 1.842075e+01 157950 1 - 1180 3.900000e+00 4.084737e+00 1.860576e+01 159300 1 - 1190 4.450000e+00 4.084737e+00 1.879230e+01 160650 1 - 1200 5.156250e+00 4.084737e+00 1.897893e+01 162000 1 - 1210 4.500000e+00 4.084737e+00 1.915242e+01 163350 1 - 1220 4.875000e+00 4.084737e+00 1.935177e+01 164700 1 - 1230 4.000000e+00 4.084737e+00 1.953384e+01 166050 1 - 1240 4.062500e+00 4.084737e+00 1.972081e+01 167400 1 - 1250 5.450000e+00 4.084737e+00 1.991335e+01 168750 1 - 1255 3.693750e+00 4.084737e+00 2.002731e+01 169425 1 + 10 5.250000e+00 4.888859e+00 1.705430e-01 1350 1 + 20 4.350000e+00 4.105855e+00 2.555871e-01 2700 1 + 30 5.000000e+00 4.100490e+00 3.504701e-01 4050 1 + 40 3.500000e+00 4.097376e+00 4.541450e-01 5400 1 + 50 5.250000e+00 4.095859e+00 5.634019e-01 6750 1 + 60 3.643750e+00 4.093342e+00 6.772101e-01 8100 1 + 70 2.643750e+00 4.091818e+00 7.898800e-01 9450 1 + 80 5.087500e+00 4.091591e+00 9.059670e-01 10800 1 + 90 5.062500e+00 4.091309e+00 1.022321e+00 12150 1 + 100 4.843750e+00 4.087004e+00 1.147200e+00 13500 1 + 110 3.437500e+00 4.086094e+00 1.273122e+00 14850 1 + 120 3.375000e+00 4.085926e+00 1.401038e+00 16200 1 + 130 5.025000e+00 4.085866e+00 1.528921e+00 17550 1 + 140 5.000000e+00 4.085734e+00 1.657126e+00 18900 1 + 150 3.500000e+00 4.085655e+00 1.786854e+00 20250 1 + 160 4.281250e+00 4.085454e+00 1.919327e+00 21600 1 + 170 4.562500e+00 4.085425e+00 2.049539e+00 22950 1 + 180 5.768750e+00 4.085425e+00 2.179516e+00 24300 1 + 190 3.468750e+00 4.085359e+00 2.315713e+00 25650 1 + 200 4.131250e+00 4.085225e+00 2.450225e+00 27000 1 + 210 4.512500e+00 4.085157e+00 2.620751e+00 28350 1 + 220 4.900000e+00 4.085153e+00 2.755297e+00 29700 1 + 230 4.025000e+00 4.085134e+00 2.892293e+00 31050 1 + 240 4.468750e+00 4.085116e+00 3.035621e+00 32400 1 + 250 4.062500e+00 4.085075e+00 3.175253e+00 33750 1 + 260 4.875000e+00 4.085037e+00 3.317470e+00 35100 1 + 270 3.850000e+00 4.085011e+00 3.459588e+00 36450 1 + 280 4.912500e+00 4.084992e+00 3.602599e+00 37800 1 + 290 2.987500e+00 4.084986e+00 3.751416e+00 39150 1 + 300 3.825000e+00 4.084957e+00 3.901049e+00 40500 1 + 310 3.250000e+00 4.084911e+00 4.051168e+00 41850 1 + 320 3.600000e+00 4.084896e+00 4.199420e+00 43200 1 + 330 3.925000e+00 4.084896e+00 4.338111e+00 44550 1 + 340 4.500000e+00 4.084893e+00 4.485896e+00 45900 1 + 350 5.000000e+00 4.084891e+00 4.637206e+00 47250 1 + 360 3.075000e+00 4.084866e+00 4.782855e+00 48600 1 + 370 3.500000e+00 4.084861e+00 4.940265e+00 49950 1 + 380 3.356250e+00 4.084857e+00 5.100719e+00 51300 1 + 390 5.500000e+00 4.084846e+00 5.264359e+00 52650 1 + 400 4.475000e+00 4.084846e+00 5.414810e+00 54000 1 + 410 3.750000e+00 4.084843e+00 5.566031e+00 55350 1 + 420 3.687500e+00 4.084843e+00 5.723553e+00 56700 1 + 430 4.337500e+00 4.084825e+00 5.882115e+00 58050 1 + 440 5.750000e+00 4.084825e+00 6.031330e+00 59400 1 + 450 4.925000e+00 4.084792e+00 6.232176e+00 60750 1 + 460 3.600000e+00 
4.084792e+00 6.388376e+00 62100 1 + 470 4.387500e+00 4.084792e+00 6.539225e+00 63450 1 + 480 4.000000e+00 4.084792e+00 6.701254e+00 64800 1 + 490 2.975000e+00 4.084788e+00 6.855894e+00 66150 1 + 500 3.125000e+00 4.084788e+00 7.010362e+00 67500 1 + 510 4.250000e+00 4.084788e+00 7.175062e+00 68850 1 + 520 4.512500e+00 4.084786e+00 7.327173e+00 70200 1 + 530 3.875000e+00 4.084786e+00 7.490970e+00 71550 1 + 540 4.387500e+00 4.084781e+00 7.651197e+00 72900 1 + 550 5.281250e+00 4.084780e+00 7.814944e+00 74250 1 + 560 4.650000e+00 4.084780e+00 7.966079e+00 75600 1 + 570 3.062500e+00 4.084780e+00 8.121943e+00 76950 1 + 580 3.187500e+00 4.084780e+00 8.274240e+00 78300 1 + 590 3.812500e+00 4.084780e+00 8.426245e+00 79650 1 + 600 3.637500e+00 4.084774e+00 8.585113e+00 81000 1 + 610 3.950000e+00 4.084765e+00 8.743541e+00 82350 1 + 620 4.625000e+00 4.084760e+00 8.899034e+00 83700 1 + 630 4.218750e+00 4.084760e+00 9.059865e+00 85050 1 + 640 3.025000e+00 4.084755e+00 9.229526e+00 86400 1 + 650 2.993750e+00 4.084751e+00 9.381509e+00 87750 1 + 660 3.262500e+00 4.084746e+00 9.537901e+00 89100 1 + 670 3.625000e+00 4.084746e+00 9.698592e+00 90450 1 + 680 2.981250e+00 4.084746e+00 9.886476e+00 91800 1 + 690 4.187500e+00 4.084746e+00 1.004398e+01 93150 1 + 700 4.500000e+00 4.084746e+00 1.020858e+01 94500 1 + 710 3.225000e+00 4.084746e+00 1.036861e+01 95850 1 + 720 4.375000e+00 4.084746e+00 1.053133e+01 97200 1 + 730 2.650000e+00 4.084746e+00 1.070257e+01 98550 1 + 740 3.250000e+00 4.084746e+00 1.086710e+01 99900 1 + 750 4.725000e+00 4.084746e+00 1.104115e+01 101250 1 + 760 3.375000e+00 4.084746e+00 1.122765e+01 102600 1 + 770 5.375000e+00 4.084746e+00 1.139841e+01 103950 1 + 780 4.068750e+00 4.084746e+00 1.157084e+01 105300 1 + 790 4.412500e+00 4.084746e+00 1.174784e+01 106650 1 + 800 4.350000e+00 4.084746e+00 1.192131e+01 108000 1 + 810 5.887500e+00 4.084746e+00 1.209914e+01 109350 1 + 820 4.912500e+00 4.084746e+00 1.227945e+01 110700 1 + 830 4.387500e+00 4.084746e+00 1.244912e+01 112050 1 + 840 3.675000e+00 4.084746e+00 1.262362e+01 113400 1 + 850 5.375000e+00 4.084746e+00 1.279167e+01 114750 1 + 860 3.562500e+00 4.084746e+00 1.296710e+01 116100 1 + 870 3.075000e+00 4.084746e+00 1.317081e+01 117450 1 + 880 3.625000e+00 4.084746e+00 1.334154e+01 118800 1 + 890 2.937500e+00 4.084746e+00 1.350708e+01 120150 1 + 900 4.450000e+00 4.084746e+00 1.368155e+01 121500 1 + 910 4.200000e+00 4.084746e+00 1.385327e+01 122850 1 + 920 3.687500e+00 4.084746e+00 1.403149e+01 124200 1 + 930 4.725000e+00 4.084746e+00 1.420675e+01 125550 1 + 940 4.018750e+00 4.084746e+00 1.438224e+01 126900 1 + 950 4.675000e+00 4.084746e+00 1.454988e+01 128250 1 + 960 3.375000e+00 4.084746e+00 1.471849e+01 129600 1 + 970 3.812500e+00 4.084746e+00 1.488782e+01 130950 1 + 980 3.112500e+00 4.084746e+00 1.506232e+01 132300 1 + 990 3.600000e+00 4.084746e+00 1.523514e+01 133650 1 + 1000 5.500000e+00 4.084746e+00 1.541459e+01 135000 1 + 1010 3.187500e+00 4.084746e+00 1.558135e+01 136350 1 + 1020 4.900000e+00 4.084746e+00 1.577494e+01 137700 1 + 1030 3.637500e+00 4.084746e+00 1.596098e+01 139050 1 + 1040 3.975000e+00 4.084746e+00 1.613504e+01 140400 1 + 1050 4.750000e+00 4.084746e+00 1.631285e+01 141750 1 + 1060 4.437500e+00 4.084746e+00 1.650971e+01 143100 1 + 1070 5.000000e+00 4.084746e+00 1.669296e+01 144450 1 + 1080 4.143750e+00 4.084746e+00 1.687549e+01 145800 1 + 1090 5.625000e+00 4.084746e+00 1.705005e+01 147150 1 + 1100 3.475000e+00 4.084746e+00 1.723784e+01 148500 1 + 1110 4.156250e+00 4.084746e+00 1.743519e+01 149850 1 + 1120 4.450000e+00 
4.084746e+00 1.762047e+01 151200 1 + 1130 3.312500e+00 4.084741e+00 1.780914e+01 152550 1 + 1140 5.375000e+00 4.084741e+00 1.798710e+01 153900 1 + 1150 4.800000e+00 4.084737e+00 1.817912e+01 155250 1 + 1160 3.300000e+00 4.084737e+00 1.838765e+01 156600 1 + 1170 4.356250e+00 4.084737e+00 1.857317e+01 157950 1 + 1180 3.900000e+00 4.084737e+00 1.877175e+01 159300 1 + 1190 4.450000e+00 4.084737e+00 1.896955e+01 160650 1 + 1200 5.156250e+00 4.084737e+00 1.916040e+01 162000 1 + 1210 4.500000e+00 4.084737e+00 1.933501e+01 163350 1 + 1220 4.875000e+00 4.084737e+00 1.953455e+01 164700 1 + 1230 4.000000e+00 4.084737e+00 1.971705e+01 166050 1 + 1240 4.062500e+00 4.084737e+00 1.990034e+01 167400 1 + 1246 3.000000e+00 4.084737e+00 2.001316e+01 168210 1 ------------------------------------------------------------------- status : time_limit -total time (s) : 2.002731e+01 -total solves : 169425 +total time (s) : 2.001316e+01 +total solves : 168210 best bound : 4.084737e+00 -simulation ci : 4.071739e+00 ± 4.036551e-02 +simulation ci : 4.071445e+00 ± 4.036229e-02 numeric issues : 0 ------------------------------------------------------------------- @@ -254,28 +253,28 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 5.237500e+00 4.355124e+00 2.079720e-01 1350 1 - 20 3.162500e+00 4.048915e+00 5.789781e-01 2700 1 - 30 4.125000e+00 4.043948e+00 1.084584e+00 4050 1 - 40 2.975000e+00 4.041052e+00 1.732931e+00 5400 1 - 50 4.781250e+00 4.040641e+00 2.441468e+00 6750 1 - 60 5.156250e+00 4.040393e+00 3.348012e+00 8100 1 - 70 2.750000e+00 4.039305e+00 4.342502e+00 9450 1 - 80 4.225000e+00 4.039111e+00 5.449113e+00 10800 1 - 90 2.737500e+00 4.039025e+00 6.629321e+00 12150 1 - 100 4.006250e+00 4.038936e+00 8.631956e+00 13500 1 - 110 4.662500e+00 4.038867e+00 1.004983e+01 14850 1 - 120 4.300000e+00 4.038845e+00 1.156762e+01 16200 1 - 130 4.875000e+00 4.038784e+00 1.325238e+01 17550 1 - 140 3.975000e+00 4.038782e+00 1.500369e+01 18900 1 - 150 3.525000e+00 4.038772e+00 1.684451e+01 20250 1 - 160 4.100000e+00 4.037588e+00 1.884064e+01 21600 1 - 166 4.875000e+00 4.037588e+00 2.001337e+01 22410 1 + 10 4.512500e+00 4.066874e+00 2.024860e-01 1350 1 + 20 5.062500e+00 4.040569e+00 5.491850e-01 2700 1 + 30 4.968750e+00 4.039400e+00 1.065618e+00 4050 1 + 40 4.125000e+00 4.039286e+00 1.716066e+00 5400 1 + 50 3.925000e+00 4.039078e+00 2.594418e+00 6750 1 + 60 3.875000e+00 4.039004e+00 3.512367e+00 8100 1 + 70 3.918750e+00 4.039008e+00 4.632966e+00 9450 1 + 80 3.600000e+00 4.038911e+00 5.784783e+00 10800 1 + 90 4.250000e+00 4.038874e+00 7.099883e+00 12150 1 + 100 5.400000e+00 4.038820e+00 8.481728e+00 13500 1 + 110 3.000000e+00 4.038795e+00 1.000387e+01 14850 1 + 120 3.000000e+00 4.038812e+00 1.159739e+01 16200 1 + 130 2.993750e+00 4.038782e+00 1.328464e+01 17550 1 + 140 4.406250e+00 4.038770e+00 1.515294e+01 18900 1 + 150 5.625000e+00 4.038777e+00 1.708383e+01 20250 1 + 160 3.081250e+00 4.038772e+00 1.906493e+01 21600 1 + 165 5.006250e+00 4.038772e+00 2.015715e+01 22275 1 ------------------------------------------------------------------- status : time_limit -total time (s) : 2.001337e+01 -total solves : 22410 -best bound : 4.037588e+00 -simulation ci : 4.057982e+00 ± 1.153783e-01 +total time (s) : 2.015715e+01 +total solves : 22275 +best bound : 4.038772e+00 +simulation ci : 4.070947e+00 ± 1.188614e-01 numeric issues : 0 -------------------------------------------------------------------- 
+------------------------------------------------------------------- diff --git a/previews/PR810/examples/sldp_example_one/index.html b/previews/PR810/examples/sldp_example_one/index.html index afa562193..41e360e45 100644 --- a/previews/PR810/examples/sldp_example_one/index.html +++ b/previews/PR810/examples/sldp_example_one/index.html @@ -65,20 +65,17 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 3.090635e+00 1.166665e+00 3.902190e-01 1680 1 - 20 2.964782e+00 1.166901e+00 4.868731e-01 2560 1 - 30 3.053110e+00 1.166901e+00 8.868811e-01 4240 1 - 40 3.055726e+00 1.166901e+00 9.855461e-01 5120 1 - 50 2.904107e+00 1.166901e+00 1.386374e+00 6800 1 - 60 2.903935e+00 1.167416e+00 1.491052e+00 7680 1 - 70 3.268068e+00 1.167416e+00 1.896434e+00 9360 1 - 80 3.556081e+00 1.167416e+00 2.002634e+00 10240 1 - 82 3.444568e+00 1.167416e+00 2.023760e+00 10416 1 + 10 3.426289e+00 1.163128e+00 3.805249e-01 1680 1 + 20 2.386729e+00 1.163467e+00 4.746521e-01 2560 1 + 30 3.405925e+00 1.165481e+00 8.518538e-01 4240 1 + 40 3.219206e+00 1.165481e+00 9.531829e-01 5120 1 + 50 3.074686e+00 1.165481e+00 1.339746e+00 6800 1 + 60 3.224080e+00 1.165481e+00 1.440270e+00 7680 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 2.023760e+00 -total solves : 10416 -best bound : 1.167416e+00 -simulation ci : 3.228310e+00 ± 9.616073e-02 +total time (s) : 1.440270e+00 +total solves : 7680 +best bound : 1.165481e+00 +simulation ci : 3.299213e+00 ± 1.277496e-01 numeric issues : 0 -------------------------------------------------------------------- +------------------------------------------------------------------- diff --git a/previews/PR810/examples/sldp_example_two/index.html b/previews/PR810/examples/sldp_example_two/index.html index 9e996d122..a41d76931 100644 --- a/previews/PR810/examples/sldp_example_two/index.html +++ b/previews/PR810/examples/sldp_example_two/index.html @@ -92,16 +92,16 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 -4.000000e+01 -5.809615e+01 3.294420e-02 78 1 - 20 -9.800000e+01 -5.809615e+01 6.572199e-02 148 1 - 30 -4.000000e+01 -5.809615e+01 1.048801e-01 226 1 - 40 -9.800000e+01 -5.809615e+01 1.386862e-01 296 1 + 10 -4.000000e+01 -5.809615e+01 3.150392e-02 78 1 + 20 -4.000000e+01 -5.809615e+01 6.383395e-02 148 1 + 30 -4.700000e+01 -5.809615e+01 1.036179e-01 226 1 + 40 -4.000000e+01 -5.809615e+01 1.382520e-01 296 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 1.386862e-01 +total time (s) : 1.382520e-01 total solves : 296 best bound : -5.809615e+01 -simulation ci : -5.086250e+01 ± 6.568136e+00 +simulation ci : -5.188750e+01 ± 7.419070e+00 numeric issues : 0 ------------------------------------------------------------------- @@ -133,16 +133,16 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 -9.800000e+01 -6.196125e+01 4.067993e-02 138 1 - 20 -8.200000e+01 -6.196125e+01 7.866907e-02 258 1 - 30 -9.800000e+01 -6.196125e+01 1.297381e-01 396 1 - 40 -8.200000e+01 -6.196125e+01 1.685491e-01 516 1 + 10 -4.700000e+01 -6.196125e+01 4.114795e-02 138 1 + 20 
-9.800000e+01 -6.196125e+01 7.786107e-02 258 1 + 30 -7.500000e+01 -6.196125e+01 1.281550e-01 396 1 + 40 -6.300000e+01 -6.196125e+01 1.660991e-01 516 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 1.685491e-01 +total time (s) : 1.660991e-01 total solves : 516 best bound : -6.196125e+01 -simulation ci : -5.836250e+01 ± 5.879370e+00 +simulation ci : -5.548750e+01 ± 5.312051e+00 numeric issues : 0 ------------------------------------------------------------------- @@ -174,15 +174,15 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 -4.000000e+01 -6.546793e+01 7.774401e-02 462 1 - 20 -4.000000e+01 -6.546793e+01 1.385040e-01 852 1 - 30 -5.400000e+01 -6.546793e+01 2.561040e-01 1314 1 - 40 -4.700000e+01 -6.546793e+01 3.174150e-01 1704 1 + 10 -8.200000e+01 -6.546793e+01 7.628012e-02 462 1 + 20 -7.000000e+01 -6.546793e+01 1.390240e-01 852 1 + 30 -6.300000e+01 -6.546793e+01 2.592950e-01 1314 1 + 40 -4.700000e+01 -6.546793e+01 3.213410e-01 1704 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 3.174150e-01 +total time (s) : 3.213410e-01 total solves : 1704 best bound : -6.546793e+01 -simulation ci : -6.113750e+01 ± 4.795224e+00 +simulation ci : -6.263750e+01 ± 5.346304e+00 numeric issues : 0 -------------------------------------------------------------------- +------------------------------------------------------------------- diff --git a/previews/PR810/examples/stochastic_all_blacks/index.html b/previews/PR810/examples/stochastic_all_blacks/index.html index 8209750ea..af8e6d88d 100644 --- a/previews/PR810/examples/stochastic_all_blacks/index.html +++ b/previews/PR810/examples/stochastic_all_blacks/index.html @@ -77,13 +77,13 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1L 6.000000e+00 1.200000e+01 4.242802e-02 11 1 - 40L 6.000000e+00 8.000000e+00 4.905179e-01 602 1 + 1L 3.000000e+00 1.422222e+01 4.334402e-02 11 1 + 40L 6.000000e+00 8.000000e+00 5.495031e-01 602 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 4.905179e-01 +total time (s) : 5.495031e-01 total solves : 602 best bound : 8.000000e+00 -simulation ci : 8.250000e+00 ± 9.356503e-01 +simulation ci : 7.125000e+00 ± 7.499254e-01 numeric issues : 0 -------------------------------------------------------------------- +------------------------------------------------------------------- diff --git a/previews/PR810/examples/the_farmers_problem/index.html b/previews/PR810/examples/the_farmers_problem/index.html index 68a672f83..e8b2f0293 100644 --- a/previews/PR810/examples/the_farmers_problem/index.html +++ b/previews/PR810/examples/the_farmers_problem/index.html @@ -125,13 +125,13 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 -9.800000e+04 4.922260e+05 8.815098e-02 6 1 - 40 1.670000e+05 1.083900e+05 1.176600e-01 240 1 + 1 -9.800000e+04 4.922260e+05 8.765697e-02 6 1 + 40 4.882000e+04 1.083900e+05 1.165640e-01 240 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 
1.176600e-01 +total time (s) : 1.165640e-01 total solves : 240 best bound : 1.083900e+05 -simulation ci : 9.274388e+04 ± 1.962777e+04 +simulation ci : 1.002754e+05 ± 2.174010e+04 numeric issues : 0 --------------------------------------------------------------------

Checking the policy

Birge and Louveaux report that the optimal objective value is $108,390. Check that we got the correct solution using SDDP.calculate_bound:

@assert isapprox(SDDP.calculate_bound(model), 108_390.0, atol = 0.1)
+-------------------------------------------------------------------

Checking the policy

Birge and Louveaux report that the optimal objective value is $108,390. Check that we got the correct solution using SDDP.calculate_bound:

@assert isapprox(SDDP.calculate_bound(model), 108_390.0, atol = 0.1)
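As an additional sanity check (a minimal sketch, not part of the original example: the 100 replications and the Statistics import are illustrative), we can also simulate the trained policy and compare the sample mean of the total objective against the bound:

using Statistics
simulations = SDDP.simulate(model, 100)
objectives = map(simulations) do simulation
    return sum(stage[:stage_objective] for stage in simulation)
end
println("Sample mean = ", mean(objectives))
println("Bound       = ", SDDP.calculate_bound(model))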
diff --git a/previews/PR810/examples/vehicle_location/index.html b/previews/PR810/examples/vehicle_location/index.html index 4f1d97671..e2459c6df 100644 --- a/previews/PR810/examples/vehicle_location/index.html +++ b/previews/PR810/examples/vehicle_location/index.html @@ -108,4 +108,4 @@ end # TODO(odow): find out why this fails -# vehicle_location_model(SDDP.ContinuousConicDuality())
vehicle_location_model (generic function with 1 method)
+# vehicle_location_model(SDDP.ContinuousConicDuality())
vehicle_location_model (generic function with 1 method)
diff --git a/previews/PR810/explanation/risk/index.html b/previews/PR810/explanation/risk/index.html index d06d25916..a40048d0e 100644 --- a/previews/PR810/explanation/risk/index.html +++ b/previews/PR810/explanation/risk/index.html @@ -512,18 +512,18 @@ | | Visiting node 2 | | | Z = [1.0, 2.0, 3.0, 4.0] | | | p = [0.3333333333333333, 0.3333333333333333, 0.3333333333333333] -| | | q = [0.8432391442060109, 0.07838042789699455, 0.07838042789699455] -| | | α = 0.5556945523744657 -| | | Adding cut : 126.48587163090163 volume_out + cost_to_go ≥ 18972.32505008287 +| | | q = [0.3903334174068186, 0.3048332912965907, 0.3048332912965907] +| | | α = 0.007126620949088093 +| | | Adding cut : 58.55001261102279 volume_out + cost_to_go ≥ 8782.49476503247 | | Visiting node 1 | | | Z = [1.0, 2.0, 3.0, 4.0] | | | p = [0.3333333333333333, 0.3333333333333333, 0.3333333333333333] | | | q = [1.0, 0.0, 0.0] | | | α = 1.0986122886681098 -| | | Adding cut : 100 volume_out + cost_to_go ≥ 29998.641399238186 +| | | Adding cut : 100 volume_out + cost_to_go ≥ 29998.59466753869 | Finished iteration -| | lower_bound = 14998.641399238184 -Upper bound = 9999.704695711036 ± 934.3555588818597

Finally, evaluate the decision rule:

evaluate_policy(
+| | lower_bound = 14998.594667538693
+Upper bound = 10399.47052774895 ± 860.6342743551556

Finally, evaluate the decision rule:

evaluate_policy(
     model;
     node = 1,
     incoming_state = Dict(:volume => 150.0),
@@ -536,4 +536,4 @@
   :volume_in          => 150.0
   :thermal_generation => 125.0
   :hydro_generation   => 25.0
-  :cost_to_go         => 9998.64
Info

For this trivial example, the risk-averse policy isn't very different from the policy obtained using the expectation risk-measure. If you try it on some bigger/more interesting problems, you should see the expected cost increase, and the upper tail of the policy decrease.

+ :cost_to_go => 9998.59
Info

For this trivial example, the risk-averse policy isn't very different from the policy obtained using the expectation risk-measure. If you try it on some bigger/more interesting problems, you should see the expected cost increase, and the upper tail of the policy decrease.

diff --git a/previews/PR810/explanation/theory_intro/index.html b/previews/PR810/explanation/theory_intro/index.html index 46a9d583b..a20c2179c 100644 --- a/previews/PR810/explanation/theory_intro/index.html +++ b/previews/PR810/explanation/theory_intro/index.html @@ -201,8 +201,8 @@ end
sample_uncertainty (generic function with 1 method)
Note

rand() samples a uniform random variable in [0, 1).

For example:

for i in 1:3
     println("ω = ", sample_uncertainty(model.nodes[1].uncertainty))
 end
ω = 100.0
-ω = 0.0
-ω = 100.0

It's also going to be useful to define a function that generates a random walk through the nodes of the graph:

function sample_next_node(model::PolicyGraph, current::Int)
+ω = 100.0
+ω = 50.0

It's also going to be useful to define a function that generates a random walk through the nodes of the graph:

function sample_next_node(model::PolicyGraph, current::Int)
     if length(model.arcs[current]) == 0
         # No outgoing arcs!
         return nothing
@@ -275,15 +275,15 @@
     return trajectory, simulation_cost
 end
forward_pass (generic function with 2 methods)

Let's take a look at one forward pass:

trajectory, simulation_cost = forward_pass(model);
| Forward Pass
 | | Visiting node 1
-| | | ω = 50.0
+| | | ω = 0.0
 | | | x = Dict(:volume => 200.0)
-| | | x′ = Dict(:volume => 0.0)
+| | | x′ = Dict(:volume => 50.0)
 | | | C(x, u, ω) = 0.0
 | | Visiting node 2
-| | | ω = 50.0
-| | | x = Dict(:volume => 0.0)
+| | | ω = 100.0
+| | | x = Dict(:volume => 50.0)
 | | | x′ = Dict(:volume => 0.0)
-| | | C(x, u, ω) = 10000.0
+| | | C(x, u, ω) = 0.0
 | | Visiting node 3
 | | | ω = 100.0
 | | | x = Dict(:volume => 0.0)
@@ -382,20 +382,20 @@
 end
train (generic function with 1 method)

Using our model we defined earlier, we can go:

train(model; iteration_limit = 3, replications = 100)
Starting iteration 1
 | Forward Pass
 | | Visiting node 1
-| | | ω = 100.0
+| | | ω = 50.0
 | | | x = Dict(:volume => 200.0)
-| | | x′ = Dict(:volume => 0.0)
+| | | x′ = Dict(:volume => 100.0)
 | | | C(x, u, ω) = 0.0
 | | Visiting node 2
-| | | ω = 0.0
-| | | x = Dict(:volume => 0.0)
+| | | ω = 50.0
+| | | x = Dict(:volume => 100.0)
 | | | x′ = Dict(:volume => 0.0)
-| | | C(x, u, ω) = 15000.0
+| | | C(x, u, ω) = 0.0
 | | Visiting node 3
-| | | ω = 0.0
+| | | ω = 50.0
 | | | x = Dict(:volume => 0.0)
 | | | x′ = Dict(:volume => 0.0)
-| | | C(x, u, ω) = 22500.0
+| | | C(x, u, ω) = 15000.0
 | Backward pass
 | | Visiting node 3
 | | | Skipping node because the cost-to-go is 0
@@ -412,33 +412,33 @@
 | | | Adding cut : 150 volume_out + cost_to_go ≥ 15000
 | | Visiting node 1
 | | | Solving φ = 0.0
-| | | | V = 30000.0
-| | | | dVdx′ = Dict(:volume => -150.0)
+| | | | V = 15000.0
+| | | | dVdx′ = Dict(:volume => -100.0)
 | | | Solving φ = 50.0
-| | | | V = 22500.0
-| | | | dVdx′ = Dict(:volume => -150.0)
+| | | | V = 10000.0
+| | | | dVdx′ = Dict(:volume => -100.0)
 | | | Solving φ = 100.0
-| | | | V = 15000.0
-| | | | dVdx′ = Dict(:volume => -150.0)
-| | | Adding cut : 150 volume_out + cost_to_go ≥ 22500
+| | | | V = 5000.0
+| | | | dVdx′ = Dict(:volume => -100.0)
+| | | Adding cut : 99.99999999999999 volume_out + cost_to_go ≥ 20000
 | Finished iteration
-| | lower_bound = 2500.0
+| | lower_bound = 5000.000000000002
 Starting iteration 2
 | Forward Pass
 | | Visiting node 1
-| | | ω = 50.0
+| | | ω = 100.0
 | | | x = Dict(:volume => 200.0)
-| | | x′ = Dict(:volume => 150.0)
-| | | C(x, u, ω) = 2500.0
+| | | x′ = Dict(:volume => 200.00000000000003)
+| | | C(x, u, ω) = 2500.0000000000014
 | | Visiting node 2
 | | | ω = 0.0
-| | | x = Dict(:volume => 150.0)
+| | | x = Dict(:volume => 200.00000000000003)
 | | | x′ = Dict(:volume => 100.0)
-| | | C(x, u, ω) = 10000.0
+| | | C(x, u, ω) = 4999.999999999997
 | | Visiting node 3
-| | | ω = 100.0
+| | | ω = 50.0
 | | | x = Dict(:volume => 100.0)
-| | | x′ = Dict(:volume => 50.0)
+| | | x′ = Dict(:volume => 0.0)
 | | | C(x, u, ω) = 0.0
 | Backward pass
 | | Visiting node 3
@@ -456,33 +456,33 @@
 | | | Adding cut : 100 volume_out + cost_to_go ≥ 12500
 | | Visiting node 1
 | | | Solving φ = 0.0
-| | | | V = 12499.999999999998
+| | | | V = 7499.999999999995
 | | | | dVdx′ = Dict(:volume => -100.0)
 | | | Solving φ = 50.0
-| | | | V = 7499.999999999998
+| | | | V = 2499.999999999996
 | | | | dVdx′ = Dict(:volume => -100.0)
 | | | Solving φ = 100.0
-| | | | V = 2499.9999999999986
-| | | | dVdx′ = Dict(:volume => -100.0)
-| | | Adding cut : 99.99999999999999 volume_out + cost_to_go ≥ 22499.999999999996
+| | | | V = 0.0
+| | | | dVdx′ = Dict(:volume => 0.0)
+| | | Adding cut : 66.66666666666666 volume_out + cost_to_go ≥ 16666.666666666664
 | Finished iteration
-| | lower_bound = 7499.999999999998
+| | lower_bound = 8333.333333333332
 Starting iteration 3
 | Forward Pass
 | | Visiting node 1
-| | | ω = 0.0
+| | | ω = 50.0
 | | | x = Dict(:volume => 200.0)
 | | | x′ = Dict(:volume => 200.0)
-| | | C(x, u, ω) = 7500.0
+| | | C(x, u, ω) = 4999.999999999998
 | | Visiting node 2
-| | | ω = 100.0
+| | | ω = 0.0
 | | | x = Dict(:volume => 200.0)
 | | | x′ = Dict(:volume => 124.99999999999997)
-| | | C(x, u, ω) = 0.0
+| | | C(x, u, ω) = 7500.0
 | | Visiting node 3
 | | | ω = 50.0
 | | | x = Dict(:volume => 124.99999999999997)
-| | | x′ = Dict(:volume => 0.0)
+| | | x′ = Dict(:volume => 24.99999999999997)
 | | | C(x, u, ω) = 0.0
 | Backward pass
 | | Visiting node 3
@@ -512,7 +512,7 @@
 | Finished iteration
 | | lower_bound = 8333.333333333332
 Termination status: iteration limit
-Upper bound = 8700.0 ± 951.0637107040632

Success! We trained a policy for a finite horizon multistage stochastic program using stochastic dual dynamic programming.

Implementation: evaluating the policy

A final step is the ability to evaluate the policy at a given point.

function evaluate_policy(
+Upper bound = 8375.0 ± 839.7274389450255

Success! We trained a policy for a finite horizon multistage stochastic program using stochastic dual dynamic programming.

Implementation: evaluating the policy

A final step is the ability to evaluate the policy at a given point.

function evaluate_policy(
     model::PolicyGraph;
     node::Int,
     incoming_state::Dict{Symbol,Float64},
@@ -556,7 +556,7 @@
 

Then, train a policy:

train(model; iteration_limit = 3, replications = 100)
Starting iteration 1
 | Forward Pass
 | | Visiting node 1
-| | | ω = 50.0
+| | | ω = 100.0
 | | | x = Dict(:volume => 200.0)
 | | | x′ = Dict(:volume => 0.0)
 | | | C(x, u, ω) = 0.0
@@ -609,399 +609,463 @@
 Starting iteration 2
 | Forward Pass
 | | Visiting node 1
-| | | ω = 100.0
+| | | ω = 50.0
 | | | x = Dict(:volume => 200.0)
 | | | x′ = Dict(:volume => 183.33333333333334)
-| | | C(x, u, ω) = 1666.6666666666672
+| | | C(x, u, ω) = 4166.666666666667
 | | Visiting node 2
 | | | ω = 50.0
 | | | x = Dict(:volume => 183.33333333333334)
 | | | x′ = Dict(:volume => 133.33333333333334)
 | | | C(x, u, ω) = 5000.0
 | | Visiting node 3
-| | | ω = 0.0
+| | | ω = 50.0
 | | | x = Dict(:volume => 133.33333333333334)
-| | | x′ = Dict(:volume => 0.0)
-| | | C(x, u, ω) = 2499.999999999999
-| | Visiting node 2
-| | | ω = 0.0
-| | | x = Dict(:volume => 0.0)
-| | | x′ = Dict(:volume => -0.0)
-| | | C(x, u, ω) = 15000.0
-| | Visiting node 3
-| | | ω = 0.0
-| | | x = Dict(:volume => -0.0)
-| | | x′ = Dict(:volume => 0.0)
-| | | C(x, u, ω) = 22500.0
-| | Visiting node 2
-| | | ω = 0.0
-| | | x = Dict(:volume => 0.0)
-| | | x′ = Dict(:volume => -0.0)
-| | | C(x, u, ω) = 15000.0
-| | Visiting node 3
-| | | ω = 100.0
-| | | x = Dict(:volume => -0.0)
-| | | x′ = Dict(:volume => 0.0)
-| | | C(x, u, ω) = 7500.000000000001
+| | | x′ = Dict(:volume => 33.33333333333334)
+| | | C(x, u, ω) = 0.0
 | Backward pass
 | | Visiting node 3
 | | | Solving φ = 0.0
-| | | | V = 35000.0
+| | | | V = 30000.0
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 50.0
-| | | | V = 27500.0
+| | | | V = 22500.0
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 100.0
-| | | | V = 20000.0
+| | | | V = 15000.0
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Adding cut : 75 volume_out + cost_to_go ≥ 13750
 | | Visiting node 2
 | | | Solving φ = 0.0
-| | | | V = 36250.0
-| | | | dVdx′ = Dict(:volume => -150.0)
-| | | Solving φ = 50.0
-| | | | V = 28750.0
-| | | | dVdx′ = Dict(:volume => -150.0)
-| | | Solving φ = 100.0
-| | | | V = 21250.0
-| | | | dVdx′ = Dict(:volume => -150.0)
-| | | Adding cut : 150 volume_out + cost_to_go ≥ 28749.999999999996
-| | Visiting node 3
-| | | Solving φ = 0.0
-| | | | V = 43750.0
-| | | | dVdx′ = Dict(:volume => -150.0)
-| | | Solving φ = 50.0
-| | | | V = 36250.0
-| | | | dVdx′ = Dict(:volume => -150.0)
-| | | Solving φ = 100.0
-| | | | V = 28749.999999999996
-| | | | dVdx′ = Dict(:volume => -150.0)
-| | | Adding cut : 75 volume_out + cost_to_go ≥ 18125
-| | Visiting node 2
-| | | Solving φ = 0.0
-| | | | V = 40625.0
-| | | | dVdx′ = Dict(:volume => -150.0)
-| | | Solving φ = 50.0
-| | | | V = 33125.0
-| | | | dVdx′ = Dict(:volume => -150.0)
-| | | Solving φ = 100.0
-| | | | V = 25625.0
-| | | | dVdx′ = Dict(:volume => -150.0)
-| | | Adding cut : 150 volume_out + cost_to_go ≥ 33125
-| | Visiting node 3
-| | | Solving φ = 0.0
-| | | | V = 48125.0
-| | | | dVdx′ = Dict(:volume => -150.0)
-| | | Solving φ = 50.0
-| | | | V = 40625.0
-| | | | dVdx′ = Dict(:volume => -150.0)
-| | | Solving φ = 100.0
-| | | | V = 33125.0
-| | | | dVdx′ = Dict(:volume => -150.0)
-| | | Adding cut : 75 volume_out + cost_to_go ≥ 20312.5
-| | Visiting node 2
-| | | Solving φ = 0.0
-| | | | V = 22812.5
+| | | | V = 16249.999999999998
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 50.0
-| | | | V = 17812.5
+| | | | V = 11250.0
 | | | | dVdx′ = Dict(:volume => -75.0)
 | | | Solving φ = 100.0
-| | | | V = 14062.5
+| | | | V = 7499.999999999999
 | | | | dVdx′ = Dict(:volume => -75.0)
-| | | Adding cut : 100 volume_out + cost_to_go ≥ 31562.499999999996
+| | | Adding cut : 100 volume_out + cost_to_go ≥ 24999.999999999996
 | | Visiting node 1
 | | | Solving φ = 0.0
-| | | | V = 28229.166666666657
+| | | | V = 21666.666666666657
 | | | | dVdx′ = Dict(:volume => -100.0)
 | | | Solving φ = 50.0
-| | | | V = 23229.166666666664
+| | | | V = 16666.666666666664
 | | | | dVdx′ = Dict(:volume => -100.0)
 | | | Solving φ = 100.0
-| | | | V = 18229.166666666664
+| | | | V = 11666.666666666666
 | | | | dVdx′ = Dict(:volume => -100.0)
-| | | Adding cut : 99.99999999999999 volume_out + cost_to_go ≥ 41562.5
+| | | Adding cut : 99.99999999999999 volume_out + cost_to_go ≥ 35000
 | Finished iteration
-| | lower_bound = 26562.499999999996
+| | lower_bound = 20000.0
 Starting iteration 3
 | Forward Pass
 | | Visiting node 1
 | | | ω = 0.0
 | | | x = Dict(:volume => 200.0)
 | | | x′ = Dict(:volume => 200.0)
-| | | C(x, u, ω) = 7500.0
+| | | C(x, u, ω) = 7499.999999999998
 | | Visiting node 2
 | | | ω = 50.0
 | | | x = Dict(:volume => 200.0)
 | | | x′ = Dict(:volume => 200.0)
 | | | C(x, u, ω) = 10000.0
 | | Visiting node 3
+| | | ω = 100.0
+| | | x = Dict(:volume => 200.0)
+| | | x′ = Dict(:volume => 150.0)
+| | | C(x, u, ω) = 0.0
+| | Visiting node 2
+| | | ω = 100.0
+| | | x = Dict(:volume => 150.0)
+| | | x′ = Dict(:volume => 200.0)
+| | | C(x, u, ω) = 10000.0
+| | Visiting node 3
 | | | ω = 0.0
 | | | x = Dict(:volume => 200.0)
 | | | x′ = Dict(:volume => 50.0)
 | | | C(x, u, ω) = 0.0
 | | Visiting node 2
-| | | ω = 0.0
+| | | ω = 100.0
 | | | x = Dict(:volume => 50.0)
-| | | x′ = Dict(:volume => 50.0)
+| | | x′ = Dict(:volume => 150.0)
 | | | C(x, u, ω) = 15000.0
 | | Visiting node 3
 | | | ω = 100.0
-| | | x = Dict(:volume => 50.0)
-| | | x′ = Dict(:volume => -0.0)
+| | | x = Dict(:volume => 150.0)
+| | | x′ = Dict(:volume => 100.0)
 | | | C(x, u, ω) = 0.0
 | | Visiting node 2
-| | | ω = 50.0
-| | | x = Dict(:volume => -0.0)
-| | | x′ = Dict(:volume => 50.0)
+| | | ω = 100.0
+| | | x = Dict(:volume => 100.0)
+| | | x′ = Dict(:volume => 200.0)
 | | | C(x, u, ω) = 15000.0
 | | Visiting node 3
 | | | ω = 0.0
-| | | x = Dict(:volume => 50.0)
-| | | x′ = Dict(:volume => 0.0)
-| | | C(x, u, ω) = 15000.0
+| | | x = Dict(:volume => 200.0)
+| | | x′ = Dict(:volume => 50.0)
+| | | C(x, u, ω) = 0.0
 | | Visiting node 2
-| | | ω = 50.0
-| | | x = Dict(:volume => 0.0)
+| | | ω = 0.0
+| | | x = Dict(:volume => 50.0)
 | | | x′ = Dict(:volume => 50.0)
-| | | C(x, u, ω) = 15000.0
+| | | C(x, u, ω) = 15000.000000000004
 | | Visiting node 3
 | | | ω = 50.0
 | | | x = Dict(:volume => 50.0)
 | | | x′ = Dict(:volume => 0.0)
 | | | C(x, u, ω) = 7500.0
 | | Visiting node 2
+| | | ω = 100.0
+| | | x = Dict(:volume => 0.0)
+| | | x′ = Dict(:volume => 100.0)
+| | | C(x, u, ω) = 15000.0
+| | Visiting node 3
 | | | ω = 50.0
+| | | x = Dict(:volume => 100.0)
+| | | x′ = Dict(:volume => 0.0)
+| | | C(x, u, ω) = 0.0
+| | Visiting node 2
+| | | ω = 100.0
 | | | x = Dict(:volume => 0.0)
-| | | x′ = Dict(:volume => 50.0)
+| | | x′ = Dict(:volume => 100.0)
 | | | C(x, u, ω) = 15000.0
 | | Visiting node 3
+| | | ω = 50.0
+| | | x = Dict(:volume => 100.0)
+| | | x′ = Dict(:volume => 0.0)
+| | | C(x, u, ω) = 0.0
+| | Visiting node 2
 | | | ω = 0.0
-| | | x = Dict(:volume => 50.0)
+| | | x = Dict(:volume => 0.0)
+| | | x′ = Dict(:volume => -0.0)
+| | | C(x, u, ω) = 15000.000000000004
+| | Visiting node 3
+| | | ω = 100.0
+| | | x = Dict(:volume => -0.0)
 | | | x′ = Dict(:volume => 0.0)
-| | | C(x, u, ω) = 15000.0
+| | | C(x, u, ω) = 7500.0
 | | Visiting node 2
-| | | ω = 50.0
+| | | ω = 100.0
 | | | x = Dict(:volume => 0.0)
-| | | x′ = Dict(:volume => 50.0)
+| | | x′ = Dict(:volume => 100.0)
 | | | C(x, u, ω) = 15000.0
 | | Visiting node 3
+| | | ω = 50.0
+| | | x = Dict(:volume => 100.0)
+| | | x′ = Dict(:volume => 0.0)
+| | | C(x, u, ω) = 0.0
+| | Visiting node 2
 | | | ω = 0.0
-| | | x = Dict(:volume => 50.0)
+| | | x = Dict(:volume => 0.0)
+| | | x′ = Dict(:volume => -0.0)
+| | | C(x, u, ω) = 15000.000000000004
+| | Visiting node 3
+| | | ω = 50.0
+| | | x = Dict(:volume => -0.0)
 | | | x′ = Dict(:volume => 0.0)
 | | | C(x, u, ω) = 15000.0
 | | Visiting node 2
-| | | ω = 50.0
+| | | ω = 0.0
 | | | x = Dict(:volume => 0.0)
-| | | x′ = Dict(:volume => 50.0)
-| | | C(x, u, ω) = 15000.0
+| | | x′ = Dict(:volume => -0.0)
+| | | C(x, u, ω) = 15000.000000000004
 | | Visiting node 3
 | | | ω = 50.0
-| | | x = Dict(:volume => 50.0)
+| | | x = Dict(:volume => -0.0)
 | | | x′ = Dict(:volume => 0.0)
-| | | C(x, u, ω) = 7500.0
+| | | C(x, u, ω) = 15000.0
 | | Visiting node 2
-| | | ω = 50.0
+| | | ω = 100.0
 | | | x = Dict(:volume => 0.0)
-| | | x′ = Dict(:volume => 50.0)
+| | | x′ = Dict(:volume => 100.0)
 | | | C(x, u, ω) = 15000.0
 | | Visiting node 3
 | | | ω = 0.0
-| | | x = Dict(:volume => 50.0)
+| | | x = Dict(:volume => 100.0)
 | | | x′ = Dict(:volume => 0.0)
-| | | C(x, u, ω) = 15000.0
+| | | C(x, u, ω) = 7500.0
 | Backward pass
 | | Visiting node 3
 | | | Solving φ = 0.0
-| | | | V = 48125.0
-| | | | dVdx′ = Dict(:volume => -150.0)
+| | | | V = 40000.0
+| | | | dVdx′ = Dict(:volume => -100.0)
 | | | Solving φ = 50.0
-| | | | V = 41562.5
+| | | | V = 35000.0
 | | | | dVdx′ = Dict(:volume => -100.0)
 | | | Solving φ = 100.0
-| | | | V = 36562.5
+| | | | V = 29999.999999999996
 | | | | dVdx′ = Dict(:volume => -100.0)
-| | | Adding cut : 58.33333333333333 volume_out + cost_to_go ≥ 21041.666666666664
+| | | Adding cut : 49.99999999999999 volume_out + cost_to_go ≥ 17500
 | | Visiting node 2
 | | | Solving φ = 0.0
-| | | | V = 36041.666666666664
+| | | | V = 25000.0
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 50.0
-| | | | V = 28541.666666666664
+| | | | V = 17500.0
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 100.0
-| | | | V = 21041.666666666664
-| | | | dVdx′ = Dict(:volume => -150.0)
-| | | Adding cut : 150 volume_out + cost_to_go ≥ 36041.66666666666
+| | | | V = 15000.0
+| | | | dVdx′ = Dict(:volume => -49.99999999999999)
+| | | Adding cut : 116.66666666666666 volume_out + cost_to_go ≥ 30833.33333333333
 | | Visiting node 3
 | | | Solving φ = 0.0
-| | | | V = 51041.66666666666
-| | | | dVdx′ = Dict(:volume => -150.0)
+| | | | V = 45833.33333333333
+| | | | dVdx′ = Dict(:volume => -116.66666666666666)
 | | | Solving φ = 50.0
-| | | | V = 43541.66666666666
-| | | | dVdx′ = Dict(:volume => -150.0)
+| | | | V = 40000.0
+| | | | dVdx′ = Dict(:volume => -116.66666666666666)
 | | | Solving φ = 100.0
-| | | | V = 36562.5
-| | | | dVdx′ = Dict(:volume => -100.0)
-| | | Adding cut : 66.66666666666666 volume_out + cost_to_go ≥ 21857.638888888883
+| | | | V = 34166.666666666664
+| | | | dVdx′ = Dict(:volume => -116.66666666666666)
+| | | Adding cut : 58.33333333333333 volume_out + cost_to_go ≥ 20000
 | | Visiting node 2
 | | | Solving φ = 0.0
-| | | | V = 36857.63888888888
+| | | | V = 42500.0
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 50.0
-| | | | V = 29357.638888888883
+| | | | V = 35000.0
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 100.0
-| | | | V = 21857.638888888883
+| | | | V = 27500.0
 | | | | dVdx′ = Dict(:volume => -150.0)
-| | | Adding cut : 150 volume_out + cost_to_go ≥ 36857.63888888888
+| | | Adding cut : 150 volume_out + cost_to_go ≥ 35000
 | | Visiting node 3
 | | | Solving φ = 0.0
-| | | | V = 51857.63888888888
+| | | | V = 50000.0
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 50.0
-| | | | V = 44357.63888888888
+| | | | V = 42500.0
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 100.0
-| | | | V = 36857.63888888888
+| | | | V = 35000.0
 | | | | dVdx′ = Dict(:volume => -150.0)
-| | | Adding cut : 75 volume_out + cost_to_go ≥ 22178.81944444444
+| | | Adding cut : 75 volume_out + cost_to_go ≥ 21249.999999999996
 | | Visiting node 2
 | | | Solving φ = 0.0
-| | | | V = 37178.81944444444
+| | | | V = 43750.0
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 50.0
-| | | | V = 29678.81944444444
+| | | | V = 36250.0
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 100.0
-| | | | V = 22178.81944444444
+| | | | V = 28749.999999999996
 | | | | dVdx′ = Dict(:volume => -150.0)
-| | | Adding cut : 150 volume_out + cost_to_go ≥ 37178.81944444444
+| | | Adding cut : 150 volume_out + cost_to_go ≥ 36250
 | | Visiting node 3
 | | | Solving φ = 0.0
-| | | | V = 52178.81944444444
+| | | | V = 51250.0
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 50.0
-| | | | V = 44678.81944444444
+| | | | V = 43750.0
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 100.0
-| | | | V = 37178.81944444444
+| | | | V = 36250.0
 | | | | dVdx′ = Dict(:volume => -150.0)
-| | | Adding cut : 75 volume_out + cost_to_go ≥ 22339.409722222215
+| | | Adding cut : 75 volume_out + cost_to_go ≥ 21875
 | | Visiting node 2
 | | | Solving φ = 0.0
-| | | | V = 37339.40972222222
+| | | | V = 29375.0
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 50.0
-| | | | V = 29839.409722222215
+| | | | V = 21875.0
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 100.0
-| | | | V = 22339.409722222215
-| | | | dVdx′ = Dict(:volume => -150.0)
-| | | Adding cut : 150 volume_out + cost_to_go ≥ 37339.40972222222
+| | | | V = 18125.0
+| | | | dVdx′ = Dict(:volume => -75.0)
+| | | Adding cut : 125 volume_out + cost_to_go ≥ 35625
 | | Visiting node 3
 | | | Solving φ = 0.0
-| | | | V = 52339.40972222222
+| | | | V = 51250.0
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 50.0
-| | | | V = 44839.40972222222
-| | | | dVdx′ = Dict(:volume => -150.0)
+| | | | V = 44375.0
+| | | | dVdx′ = Dict(:volume => -125.0)
 | | | Solving φ = 100.0
-| | | | V = 37339.40972222222
-| | | | dVdx′ = Dict(:volume => -150.0)
-| | | Adding cut : 75 volume_out + cost_to_go ≥ 22419.70486111111
+| | | | V = 38125.0
+| | | | dVdx′ = Dict(:volume => -125.0)
+| | | Adding cut : 66.66666666666666 volume_out + cost_to_go ≥ 22291.666666666664
 | | Visiting node 2
 | | | Solving φ = 0.0
-| | | | V = 37419.70486111111
+| | | | V = 44791.666666666664
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 50.0
-| | | | V = 29919.70486111111
+| | | | V = 37291.666666666664
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 100.0
-| | | | V = 22419.70486111111
+| | | | V = 29791.666666666664
 | | | | dVdx′ = Dict(:volume => -150.0)
-| | | Adding cut : 150 volume_out + cost_to_go ≥ 37419.70486111111
+| | | Adding cut : 150 volume_out + cost_to_go ≥ 37291.666666666664
 | | Visiting node 3
 | | | Solving φ = 0.0
-| | | | V = 52419.70486111111
+| | | | V = 52291.666666666664
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 50.0
-| | | | V = 44919.70486111111
+| | | | V = 44791.666666666664
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 100.0
-| | | | V = 37419.70486111111
-| | | | dVdx′ = Dict(:volume => -150.0)
-| | | Adding cut : 75 volume_out + cost_to_go ≥ 22459.85243055555
+| | | | V = 38125.0
+| | | | dVdx′ = Dict(:volume => -125.0)
+| | | Adding cut : 70.83333333333333 volume_out + cost_to_go ≥ 22534.72222222222
 | | Visiting node 2
 | | | Solving φ = 0.0
-| | | | V = 37459.85243055555
+| | | | V = 30034.72222222222
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 50.0
-| | | | V = 29959.85243055555
+| | | | V = 22534.72222222222
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 100.0
-| | | | V = 22459.85243055555
-| | | | dVdx′ = Dict(:volume => -150.0)
-| | | Adding cut : 150 volume_out + cost_to_go ≥ 37459.85243055555
+| | | | V = 18993.055555555555
+| | | | dVdx′ = Dict(:volume => -70.83333333333333)
+| | | Adding cut : 123.61111111111111 volume_out + cost_to_go ≥ 36215.277777777774
 | | Visiting node 3
 | | | Solving φ = 0.0
-| | | | V = 52459.85243055555
+| | | | V = 52291.666666666664
+| | | | dVdx′ = Dict(:volume => -150.0)
+| | | Solving φ = 50.0
+| | | | V = 45034.72222222222
+| | | | dVdx′ = Dict(:volume => -123.61111111111111)
+| | | Solving φ = 100.0
+| | | | V = 38854.166666666664
+| | | | dVdx′ = Dict(:volume => -123.61111111111111)
+| | | Adding cut : 66.2037037037037 volume_out + cost_to_go ≥ 22696.759259259255
+| | Visiting node 2
+| | | Solving φ = 0.0
+| | | | V = 30196.759259259255
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 50.0
-| | | | V = 44959.85243055555
+| | | | V = 22696.759259259255
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 100.0
-| | | | V = 37459.85243055555
+| | | | V = 19386.57407407407
+| | | | dVdx′ = Dict(:volume => -66.2037037037037)
+| | | Adding cut : 122.0679012345679 volume_out + cost_to_go ≥ 36300.15432098765
+| | Visiting node 3
+| | | Solving φ = 0.0
+| | | | V = 52291.666666666664
 | | | | dVdx′ = Dict(:volume => -150.0)
-| | | Adding cut : 75 volume_out + cost_to_go ≥ 22479.92621527777
+| | | Solving φ = 50.0
+| | | | V = 45196.759259259255
+| | | | dVdx′ = Dict(:volume => -122.0679012345679)
+| | | Solving φ = 100.0
+| | | | V = 39093.36419753086
+| | | | dVdx′ = Dict(:volume => -122.0679012345679)
+| | | Adding cut : 65.68930041152262 volume_out + cost_to_go ≥ 22763.631687242792
 | | Visiting node 2
 | | | Solving φ = 0.0
-| | | | V = 37479.92621527777
+| | | | V = 37763.63168724279
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 50.0
-| | | | V = 29979.92621527777
+| | | | V = 30263.631687242792
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 100.0
-| | | | V = 22479.92621527777
+| | | | V = 22763.631687242792
 | | | | dVdx′ = Dict(:volume => -150.0)
-| | | Adding cut : 150 volume_out + cost_to_go ≥ 37479.92621527777
+| | | Adding cut : 150 volume_out + cost_to_go ≥ 37763.63168724279
 | | Visiting node 3
 | | | Solving φ = 0.0
-| | | | V = 44979.92621527777
+| | | | V = 45263.631687242785
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 50.0
-| | | | V = 37479.92621527777
+| | | | V = 39093.36419753086
+| | | | dVdx′ = Dict(:volume => -122.0679012345679)
+| | | Solving φ = 100.0
+| | | | V = 32989.96913580246
+| | | | dVdx′ = Dict(:volume => -122.0679012345679)
+| | | Adding cut : 65.68930041152262 volume_out + cost_to_go ≥ 22842.292524005483
+| | Visiting node 2
+| | | Solving φ = 0.0
+| | | | V = 19557.82750342935
+| | | | dVdx′ = Dict(:volume => -65.68930041152262)
+| | | Solving φ = 50.0
+| | | | V = 16273.36248285322
+| | | | dVdx′ = Dict(:volume => -65.68930041152262)
+| | | Solving φ = 100.0
+| | | | V = 12988.89746227709
+| | | | dVdx′ = Dict(:volume => -65.68930041152262)
+| | | Adding cut : 65.68930041152262 volume_out + cost_to_go ≥ 29411.222565157746
+| | Visiting node 3
+| | | Solving φ = 0.0
+| | | | V = 39093.36419753086
+| | | | dVdx′ = Dict(:volume => -122.0679012345679)
+| | | Solving φ = 50.0
+| | | | V = 33603.665522400945
+| | | | dVdx′ = Dict(:volume => -100.00000000000001)
+| | | Solving φ = 100.0
+| | | | V = 28603.665522400945
+| | | | dVdx′ = Dict(:volume => -100.00000000000001)
+| | | Adding cut : 53.677983539094654 volume_out + cost_to_go ≥ 22251.247560964923
+| | Visiting node 2
+| | | Solving φ = 0.0
+| | | | V = 22842.292524005483
+| | | | dVdx′ = Dict(:volume => -65.68930041152262)
+| | | Solving φ = 50.0
+| | | | V = 19567.34838401019
+| | | | dVdx′ = Dict(:volume => -53.677983539094654)
+| | | Solving φ = 100.0
+| | | | V = 16883.449207055455
+| | | | dVdx′ = Dict(:volume => -53.677983539094654)
+| | | Adding cut : 57.68175582990397 volume_out + cost_to_go ≥ 28416.62674617597
+| | Visiting node 3
+| | | Solving φ = 0.0
+| | | | V = 45263.631687242785
 | | | | dVdx′ = Dict(:volume => -150.0)
+| | | Solving φ = 50.0
+| | | | V = 39093.36419753085
+| | | | dVdx′ = Dict(:volume => -122.0679012345679)
 | | | Solving φ = 100.0
-| | | | V = 31562.5
+| | | | V = 33603.665522400945
 | | | | dVdx′ = Dict(:volume => -100.0)
-| | | Adding cut : 66.66666666666666 volume_out + cost_to_go ≥ 22337.05873842592
+| | | Adding cut : 62.011316872427976 volume_out + cost_to_go ≥ 22760.676078150493
 | | Visiting node 2
 | | | Solving φ = 0.0
-| | | | V = 19003.725405092588
-| | | | dVdx′ = Dict(:volume => -66.66666666666666)
+| | | | V = 19660.110234529096
+| | | | dVdx′ = Dict(:volume => -62.011316872427976)
 | | | Solving φ = 50.0
-| | | | V = 15670.392071759255
-| | | | dVdx′ = Dict(:volume => -66.66666666666666)
+| | | | V = 16883.449207055455
+| | | | dVdx′ = Dict(:volume => -53.677983539094654)
 | | | Solving φ = 100.0
-| | | | V = 12337.05873842592
-| | | | dVdx′ = Dict(:volume => -66.66666666666666)
-| | | Adding cut : 66.66666666666666 volume_out + cost_to_go ≥ 29003.725405092584
+| | | | V = 14199.550030100723
+| | | | dVdx′ = Dict(:volume => -53.677983539094654)
+| | | Adding cut : 56.45576131687242 volume_out + cost_to_go ≥ 28205.522087269575
+| | Visiting node 3
+| | | Solving φ = 0.0
+| | | | V = 33603.665522400945
+| | | | dVdx′ = Dict(:volume => -100.0)
+| | | Solving φ = 50.0
+| | | | V = 28603.665522400945
+| | | | dVdx′ = Dict(:volume => -100.0)
+| | | Solving φ = 100.0
+| | | | V = 23603.665522400945
+| | | | dVdx′ = Dict(:volume => -100.0)
+| | | Adding cut : 49.99999999999999 volume_out + cost_to_go ≥ 21801.832761200472
+| | Visiting node 2
+| | | Solving φ = 0.0
+| | | | V = 19660.110234529096
+| | | | dVdx′ = Dict(:volume => -62.011316872427976)
+| | | Solving φ = 50.0
+| | | | V = 16883.449207055455
+| | | | dVdx′ = Dict(:volume => -53.677983539094654)
+| | | Solving φ = 100.0
+| | | | V = 14301.832761200472
+| | | | dVdx′ = Dict(:volume => -49.99999999999999)
+| | | Adding cut : 55.22976680384087 volume_out + cost_to_go ≥ 27994.41742836318
 | | Visiting node 1
 | | | Solving φ = 0.0
-| | | | V = 27394.20572916665
-| | | | dVdx′ = Dict(:volume => -99.99999999999999)
+| | | | V = 28603.665522400945
+| | | | dVdx′ = Dict(:volume => -100.0)
 | | | Solving φ = 50.0
-| | | | V = 22394.20572916665
-| | | | dVdx′ = Dict(:volume => -99.99999999999999)
+| | | | V = 23603.665522400945
+| | | | dVdx′ = Dict(:volume => -100.0)
 | | | Solving φ = 100.0
-| | | | V = 19003.72540509258
-| | | | dVdx′ = Dict(:volume => -66.66666666666666)
-| | | Adding cut : 88.88888888888887 volume_out + cost_to_go ≥ 40708.4900655864
+| | | | V = 19764.363371690368
+| | | | dVdx′ = Dict(:volume => -57.68175582990398)
+| | | Adding cut : 85.89391860996798 volume_out + cost_to_go ≥ 41169.34852749102
 | Finished iteration
-| | lower_bound = 27930.712287808623
+| | lower_bound = 28990.56480549742
 Termination status: iteration limit
-Upper bound = 34702.57161458332 ± 7273.390632691517

Success! We trained a policy for an infinite horizon multistage stochastic program using stochastic dual dynamic programming. Note how some of the forward passes are different lengths!

evaluate_policy(
+Upper bound = 33033.000684306564 ± 7619.076388304794

Success! We trained a policy for an infinite horizon multistage stochastic program using stochastic dual dynamic programming. Note how some of the forward passes are different lengths!

evaluate_policy(
     model;
     node = 3,
     incoming_state = Dict(:volume => 100.0),
@@ -1014,4 +1078,4 @@
   :volume_in          => 100.0
   :thermal_generation => 40.0
   :hydro_generation   => 110.0
-  :cost_to_go         => 22479.9
+ :cost_to_go => 22842.3 diff --git a/previews/PR810/guides/access_previous_variables/index.html b/previews/PR810/guides/access_previous_variables/index.html index 90ad1bd56..f13922889 100644 --- a/previews/PR810/guides/access_previous_variables/index.html +++ b/previews/PR810/guides/access_previous_variables/index.html @@ -98,4 +98,4 @@ end end endA policy graph with 20 nodes. - Node indices: 1, ..., 20 + Node indices: 1, ..., 20 diff --git a/previews/PR810/guides/add_a_multidimensional_state_variable/index.html b/previews/PR810/guides/add_a_multidimensional_state_variable/index.html index e52ec34a4..6bdea6cba 100644 --- a/previews/PR810/guides/add_a_multidimensional_state_variable/index.html +++ b/previews/PR810/guides/add_a_multidimensional_state_variable/index.html @@ -19,4 +19,4 @@ end; Lower bound of outgoing x is: 0.0 Lower bound of outgoing y[1] is: 1.0 -Lower bound of outgoing z[3, :B] is: 3.0 +Lower bound of outgoing z[3, :B] is: 3.0 diff --git a/previews/PR810/guides/add_a_risk_measure/index.html b/previews/PR810/guides/add_a_risk_measure/index.html index 638c87aed..92ee103d8 100644 --- a/previews/PR810/guides/add_a_risk_measure/index.html +++ b/previews/PR810/guides/add_a_risk_measure/index.html @@ -40,7 +40,7 @@ 0.0 0.0 0.0 - 0.0

Expectation

SDDP.ExpectationType
Expectation()

The Expectation risk measure. Identical to taking the expectation with respect to the nominal distribution.

source
julia> using SDDP
julia> SDDP.adjust_probability( + 0.0

Expectation

SDDP.ExpectationType
Expectation()

The Expectation risk measure. Identical to taking the expectation with respect to the nominal distribution.

source
julia> using SDDP
julia> SDDP.adjust_probability( SDDP.Expectation(), risk_adjusted_probability, nominal_probability, @@ -51,7 +51,7 @@ 0.1 0.2 0.3 - 0.4

SDDP.Expectation is the default risk measure in SDDP.jl.

Worst-case

SDDP.WorstCaseType
WorstCase()

The worst-case risk measure. Places all of the probability weight on the worst outcome.

source
julia> SDDP.adjust_probability(
+ 0.4

SDDP.Expectation is the default risk measure in SDDP.jl.

Worst-case

SDDP.WorstCaseType
WorstCase()

The worst-case risk measure. Places all of the probability weight on the worst outcome.

source
julia> SDDP.adjust_probability(
            SDDP.WorstCase(),
            risk_adjusted_probability,
            nominal_probability,
@@ -62,7 +62,7 @@
  0.0
  0.0
  1.0
- 0.0

Average value at risk (AV@R)

SDDP.AVaRType
AVaR(β)

The average value at risk (AV@R) risk measure.

Computes the expectation of the β fraction of worst outcomes. β must be in [0, 1]. When β=1, this is equivalent to the Expectation risk measure. When β=0, this is equivalent to the WorstCase risk measure.

AV@R is also known as the conditional value at risk (CV@R) or expected shortfall.

source
julia> SDDP.adjust_probability(
+ 0.0

Average value at risk (AV@R)

SDDP.AVaRType
AVaR(β)

The average value at risk (AV@R) risk measure.

Computes the expectation of the β fraction of worst outcomes. β must be in [0, 1]. When β=1, this is equivalent to the Expectation risk measure. When β=0, this is equivalent to the WorstCase risk measure.

AV@R is also known as the conditional value at risk (CV@R) or expected shortfall.

source
julia> SDDP.adjust_probability(
            SDDP.AVaR(0.5),
            risk_adjusted_probability,
            nominal_probability,
@@ -84,10 +84,10 @@
  0.05
  0.1
  0.65
- 0.2

As a special case, the SDDP.EAVaR risk-measure is a convex combination of SDDP.Expectation and SDDP.AVaR:

julia> SDDP.EAVaR(beta=0.25, lambda=0.4)
A convex combination of 0.4 * SDDP.Expectation() + 0.6 * SDDP.AVaR(0.25)
SDDP.EAVaRFunction
EAVaR(;lambda=1.0, beta=1.0)

A risk measure that is a convex combination of Expectation and Average Value @ Risk (also called Conditional Value @ Risk).

    λ * E[x] + (1 - λ) * AV@R(β)[x]

Keyword Arguments

  • lambda: Convex weight on the expectation ((1-lambda) weight is put on the AV@R component). Increasing values of lambda are less risk averse (more weight on expectation).

  • beta: The quantile at which to calculate the Average Value @ Risk. Increasing values of beta are less risk averse. If beta=0, then the AV@R component is the worst case risk measure.

source

Distributionally robust

SDDP.jl supports two types of distributionally robust risk measures: the modified Χ² method of Philpott et al. (2018), and a method based on the Wasserstein distance metric.

Modified Chi-squared

SDDP.ModifiedChiSquaredType
ModifiedChiSquared(radius::Float64; minimum_std=1e-5)

The distributionally robust SDDP risk measure of Philpott, A., de Matos, V., Kapelevich, L. Distributionally robust SDDP. Computational Management Science (2018) 165:431-454.

Explanation

In a Distributionally Robust Optimization (DRO) approach, we modify the probabilities we associate with all future scenarios so that the resulting probability distribution is the "worst case" probability distribution, in some sense.

In each backward pass we will compute a worst case probability distribution vector p. We compute p so that:

p ∈ argmax p'z
+ 0.2

As a special case, the SDDP.EAVaR risk-measure is a convex combination of SDDP.Expectation and SDDP.AVaR:

julia> SDDP.EAVaR(beta=0.25, lambda=0.4)
A convex combination of 0.4 * SDDP.Expectation() + 0.6 * SDDP.AVaR(0.25)
SDDP.EAVaRFunction
EAVaR(;lambda=1.0, beta=1.0)

A risk measure that is a convex combination of Expectation and Average Value @ Risk (also called Conditional Value @ Risk).

    λ * E[x] + (1 - λ) * AV@R(β)[x]

Keyword Arguments

  • lambda: Convex weight on the expectation ((1-lambda) weight is put on the AV@R component). Increasing values of lambda are less risk averse (more weight on expectation).

  • beta: The quantile at which to calculate the Average Value @ Risk. Increasing values of beta are less risk averse. If beta=0, then the AV@R component is the worst case risk measure.

source
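The adjust_probability calls above only show how each measure reweights a probability vector. To apply a risk measure while training a policy, pass it to the risk_measure keyword of SDDP.train. A minimal sketch (model is assumed to be an existing SDDP.PolicyGraph; the iteration limit is illustrative):

SDDP.train(
    model;
    risk_measure = SDDP.EAVaR(lambda = 0.4, beta = 0.25),
    iteration_limit = 50,
)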

Distributionally robust

SDDP.jl supports two types of distributionally robust risk measures: the modified Χ² method of Philpott et al. (2018), and a method based on the Wasserstein distance metric.

Modified Chi-squared

SDDP.ModifiedChiSquaredType
ModifiedChiSquared(radius::Float64; minimum_std=1e-5)

The distributionally robust SDDP risk measure of Philpott, A., de Matos, V., Kapelevich, L. Distributionally robust SDDP. Computational Management Science (2018) 165:431-454.

Explanation

In a Distributionally Robust Optimization (DRO) approach, we modify the probabilities we associate with all future scenarios so that the resulting probability distribution is the "worst case" probability distribution, in some sense.

In each backward pass we will compute a worst case probability distribution vector p. We compute p so that:

p ∈ argmax p'z
       s.t. [r; p - a] in SecondOrderCone()
            sum(p) == 1
-           p >= 0

where

  1. z is a vector of future costs. We assume that our aim is to minimize future cost p'z. If we maximize reward, we would have p ∈ argmin{p'z}.
  2. a is the uniform distribution
  3. r is a user-specified radius - the larger the radius, the more conservative the policy.

Notes

The largest radius that will work with S scenarios is sqrt((S-1)/S).

If the uncorrected standard deviation of the objective realizations is less than minimum_std, then the risk-measure will default to Expectation().

This code was contributed by Lea Kapelevich.

source
julia> SDDP.adjust_probability(
+           p >= 0

where

  1. z is a vector of future costs. We assume that our aim is to minimize future cost p'z. If we maximize reward, we would have p ∈ argmin{p'z}.
  2. a is the uniform distribution
  3. r is a user-specified radius - the larger the radius, the more conservative the policy.

Notes

The largest radius that will work with S scenarios is sqrt((S-1)/S).

If the uncorrected standard deviation of the objective realizations is less than minimum_std, then the risk-measure will default to Expectation().

This code was contributed by Lea Kapelevich.

source
julia> SDDP.adjust_probability(
            SDDP.ModifiedChiSquared(0.5),
            risk_adjusted_probability,
            [0.25, 0.25, 0.25, 0.25],
@@ -98,7 +98,7 @@
  0.3333333333333333
  0.044658198738520394
  0.6220084679281462
- 0.0

Wasserstein

SDDP.WassersteinType
Wasserstein(norm::Function, solver_factory; alpha::Float64)

A distributionally-robust risk measure based on the Wasserstein distance.

As alpha increases, the measure becomes more risk-averse: when alpha=0 it is equivalent to the expectation operator, and as alpha grows it approaches the Worst-case risk measure.

source
julia> import HiGHS
julia> SDDP.adjust_probability( + 0.0

Wasserstein

SDDP.WassersteinType
Wasserstein(norm::Function, solver_factory; alpha::Float64)

A distributionally-robust risk measure based on the Wasserstein distance.

As alpha increases, the measure becomes more risk-averse: when alpha=0 it is equivalent to the expectation operator, and as alpha grows it approaches the Worst-case risk measure.

source
julia> import HiGHS
julia> SDDP.adjust_probability( SDDP.Wasserstein(HiGHS.Optimizer; alpha=0.5) do x, y return abs(x - y) end, @@ -113,7 +113,7 @@ 0.7999999999999999 -0.0

Entropic

SDDP.EntropicType
Entropic(γ::Float64)

The entropic risk measure as described by:

Dowson, O., Morton, D.P. & Pagnoncelli, B.K. Incorporating convex risk
 measures into multistage stochastic programming algorithms. Annals of
-Operations Research (2022). [doi](https://doi.org/10.1007/s10479-022-04977-w).

As γ increases, the measure becomes more risk-averse.

source
julia> SDDP.adjust_probability(
+Operations Research (2022). [doi](https://doi.org/10.1007/s10479-022-04977-w).

As γ increases, the measure becomes more risk-averse.

source
julia> SDDP.adjust_probability(
            SDDP.Entropic(0.1),
            risk_adjusted_probability,
            nominal_probability,
@@ -124,4 +124,4 @@
  0.1100296362588547
  0.19911786395979578
  0.3648046623591841
- 0.3260478374221655
+ 0.3260478374221655 diff --git a/previews/PR810/guides/add_integrality/index.html b/previews/PR810/guides/add_integrality/index.html index 3d90d7b1f..4d064abaa 100644 --- a/previews/PR810/guides/add_integrality/index.html +++ b/previews/PR810/guides/add_integrality/index.html @@ -25,4 +25,4 @@ \max\limits_{\lambda}\min\limits_{\bar{x}, x^\prime, u} \;\; & C_i(\bar{x}, u, \omega) + \mathbb{E}_{j \in i^+, \varphi \in \Omega_j}[V_j(x^\prime, \varphi)] - \lambda^\top(\bar{x} - x)\\ & x^\prime = T_i(\bar{x}, u, \omega) \\ & u \in U_i(\bar{x}, \omega) -\end{aligned}\]

You can use Lagrangian duality in SDDP.jl by passing SDDP.LagrangianDuality to the duality_handler argument of SDDP.train.

Compared with linear programming duality, the Lagrangian problem is difficult to solve because it requires the solution of many mixed-integer programs instead of a single linear program. This is one reason why "SDDiP" has poor performance.

Convergence

The second part to SDDiP is a very tightly scoped claim: if all of the state variables are binary and the algorithm uses Lagrangian duality to compute a subgradient, then it will converge to an optimal policy.

In many cases, papers claim to "do SDDiP," but they have state variables which are not binary. In these cases, the algorithm is not guaranteed to converge to a globally optimal policy.

One work-around that has been suggested is to discretize the state variables into a set of binary state variables. However, this leads to a large number of binary state variables, which is another reason why "SDDiP" has poor performance.

In general, we recommend that you introduce integer variables into your model without fear of the consequences, and that you treat the resulting policy as a good heuristic, rather than an attempt to find a globally optimal policy.

+\end{aligned}\]

You can use Lagrangian duality in SDDP.jl by passing SDDP.LagrangianDuality to the duality_handler argument of SDDP.train.

Compared with linear programming duality, the Lagrangian problem is difficult to solve because it requires the solution of many mixed-integer programs instead of a single linear program. This is one reason why "SDDiP" has poor performance.

Convergence

The second part to SDDiP is a very tightly scoped claim: if all of the state variables are binary and the algorithm uses Lagrangian duality to compute a subgradient, then it will converge to an optimal policy.

In many cases, papers claim to "do SDDiP," but they have state variables which are not binary. In these cases, the algorithm is not guaranteed to converge to a globally optimal policy.

One work-around that has been suggested is to discretize the state variables into a set of binary state variables. However, this leads to a large number of binary state variables, which is another reason why "SDDiP" has poor performance.

In general, we recommend that you introduce integer variables into your model without fear of the consequences, and that you treat the resulting policy as a good heuristic, rather than an attempt to find a globally optimal policy.
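As a concrete illustration of the above, a minimal sketch of enabling Lagrangian duality during training (model is assumed to be an existing SDDP.PolicyGraph with integer or binary variables; the iteration limit is illustrative):

SDDP.train(
    model;
    duality_handler = SDDP.LagrangianDuality(),
    iteration_limit = 10,
)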

diff --git a/previews/PR810/guides/add_multidimensional_noise/index.html b/previews/PR810/guides/add_multidimensional_noise/index.html index cabb13c96..3ecf44caf 100644 --- a/previews/PR810/guides/add_multidimensional_noise/index.html +++ b/previews/PR810/guides/add_multidimensional_noise/index.html @@ -81,4 +81,4 @@ julia> SDDP.simulate(model, 1); ω is: [54, 38, 19] ω is: [43, 3, 13] -ω is: [43, 4, 17] +ω is: [43, 4, 17] diff --git a/previews/PR810/guides/add_noise_in_the_constraint_matrix/index.html b/previews/PR810/guides/add_noise_in_the_constraint_matrix/index.html index d14b0d9f6..91497b313 100644 --- a/previews/PR810/guides/add_noise_in_the_constraint_matrix/index.html +++ b/previews/PR810/guides/add_noise_in_the_constraint_matrix/index.html @@ -20,4 +20,4 @@ julia> SDDP.simulate(model, 1); emissions : x_out <= 1 emissions : 0.2 x_out <= 1 -emissions : 0.5 x_out <= 1
Note

JuMP will normalize constraints by moving all variables to the left-hand side. Thus, @constraint(model, 0 <= 1 - x.out) becomes x.out <= 1. JuMP.set_normalized_coefficient sets the coefficient on the normalized constraint.

+emissions : 0.5 x_out <= 1
Note

JuMP will normalize constraints by moving all variables to the left-hand side. Thus, @constraint(model, 0 <= 1 - x.out) becomes x.out <= 1. JuMP.set_normalized_coefficient sets the coefficient on the normalized constraint.
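A minimal sketch of the pattern behind the emissions output above (the constraint name and coefficients are taken from the snippet; subproblem and the state variable x are assumed to be defined in the usual subproblem-building function):

@constraint(subproblem, emissions, 1 * x.out <= 1)
SDDP.parameterize(subproblem, [0.2, 0.5, 1.0]) do ω
    return JuMP.set_normalized_coefficient(emissions, x.out, ω)
end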

diff --git a/previews/PR810/guides/choose_a_stopping_rule/index.html b/previews/PR810/guides/choose_a_stopping_rule/index.html index 0004458ab..2865d1aaf 100644 --- a/previews/PR810/guides/choose_a_stopping_rule/index.html +++ b/previews/PR810/guides/choose_a_stopping_rule/index.html @@ -21,4 +21,4 @@ stopping_rules = [ SDDP.StoppingChain(SDDP.BoundStalling(10, 1e-4), SDDP.TimeLimit(100.0)), ], -)

See Stopping rules for a list of stopping rules supported by SDDP.jl.

+)

See Stopping rules for a list of stopping rules supported by SDDP.jl.
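For reference, the truncated snippet above corresponds to a call like the following (a minimal sketch; model is assumed to be an existing SDDP.PolicyGraph):

SDDP.train(
    model;
    stopping_rules = [
        SDDP.StoppingChain(SDDP.BoundStalling(10, 1e-4), SDDP.TimeLimit(100.0)),
    ],
)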

diff --git a/previews/PR810/guides/create_a_belief_state/index.html b/previews/PR810/guides/create_a_belief_state/index.html index 637f780b2..fdff410f8 100644 --- a/previews/PR810/guides/create_a_belief_state/index.html +++ b/previews/PR810/guides/create_a_belief_state/index.html @@ -34,4 +34,4 @@ (1, 2) => (2, 2) w.p. 0.2 Partitions {(1, 1), (1, 2)} - {(2, 1), (2, 2)} + {(2, 1), (2, 2)} diff --git a/previews/PR810/guides/create_a_general_policy_graph/index.html b/previews/PR810/guides/create_a_general_policy_graph/index.html index 7ee0ffd34..271683c50 100644 --- a/previews/PR810/guides/create_a_general_policy_graph/index.html +++ b/previews/PR810/guides/create_a_general_policy_graph/index.html @@ -110,4 +110,4 @@ @variable(subproblem, x >= 0, SDDP.State, initial_value = 1) @constraint(subproblem, x.out <= x.in) @stageobjective(subproblem, price * x.out) -end +end diff --git a/previews/PR810/guides/debug_a_model/index.html b/previews/PR810/guides/debug_a_model/index.html index 27c376a71..eb36d4254 100644 --- a/previews/PR810/guides/debug_a_model/index.html +++ b/previews/PR810/guides/debug_a_model/index.html @@ -68,4 +68,4 @@ julia> optimize!(det_equiv) julia> objective_value(det_equiv) --5.472500000000001
Warning

The deterministic equivalent scales poorly with problem size. Only use this on small problems!

+-5.472500000000001
Warning

The deterministic equivalent scales poorly with problem size. Only use this on small problems!
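For context, det_equiv in the snippet above is a JuMP model built with SDDP.deterministic_equivalent. A minimal sketch, assuming model is a small SDDP.PolicyGraph (the choice of HiGHS as the optimizer is illustrative):

julia> import HiGHS

julia> det_equiv = SDDP.deterministic_equivalent(model, HiGHS.Optimizer);

julia> optimize!(det_equiv)

julia> objective_value(det_equiv)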

diff --git a/previews/PR810/guides/improve_computational_performance/index.html b/previews/PR810/guides/improve_computational_performance/index.html index 7d5db4c45..cf39611c3 100644 --- a/previews/PR810/guides/improve_computational_performance/index.html +++ b/previews/PR810/guides/improve_computational_performance/index.html @@ -45,4 +45,4 @@ env = Gurobi.Env() set_optimizer(m, () -> Gurobi.Optimizer(env)) end, -) +) diff --git a/previews/PR810/guides/simulate_using_a_different_sampling_scheme/index.html b/previews/PR810/guides/simulate_using_a_different_sampling_scheme/index.html index 2087cc24b..ccd84371d 100644 --- a/previews/PR810/guides/simulate_using_a_different_sampling_scheme/index.html +++ b/previews/PR810/guides/simulate_using_a_different_sampling_scheme/index.html @@ -165,4 +165,4 @@ ], [0.3, 0.7], ) -A Historical sampler with 2 scenarios sampled probabilistically.
Tip

Your sample space doesn't have to be a NamedTuple. It can be any Julia type! Use a Vector if that is easier, or define your own struct.

+A Historical sampler with 2 scenarios sampled probabilistically.
Tip

Your sample space doesn't have to be a NamedTuple. It can be any Julia type! Use a Vector if that is easier, or define your own struct.
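For example, here is a hedged sketch in which each noise term is a Vector{Float64} instead of a NamedTuple; the node indices, noise values, and probabilities are purely illustrative, and model is assumed to be a compatible two-stage policy graph:

sampling_scheme = SDDP.Historical(
    [
        # Each scenario is a vector of (node_index, noise) tuples.
        [(1, [0.5, 1.0]), (2, [0.3, 2.0])],
        [(1, [1.5, 1.0]), (2, [0.7, 2.0])],
    ],
    [0.3, 0.7],  # probability of sampling each scenario
)
simulations = SDDP.simulate(model, 10; sampling_scheme = sampling_scheme)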

diff --git a/previews/PR810/index.html b/previews/PR810/index.html index 10b9eca36..cc9a18a32 100644 --- a/previews/PR810/index.html +++ b/previews/PR810/index.html @@ -47,4 +47,4 @@ journal = {Annals of Operations Research}, author = {Dowson, O. and Morton, D.P. and Pagnoncelli, B.K.}, year = {2022}, -}

Here is an earlier preprint.

+}

Here is an earlier preprint.

diff --git a/previews/PR810/release_notes/index.html b/previews/PR810/release_notes/index.html index 96f91629b..e42f40c85 100644 --- a/previews/PR810/release_notes/index.html +++ b/previews/PR810/release_notes/index.html @@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Release notes

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

v1.10.1 (November 28, 2024)

Fixed

Other

  • Documentation updates (#801)

v1.10.0 (November 19, 2024)

Added

  • Added root_node_risk_measure keyword to train (#804)

Fixed

  • Fixed a bug with cut sharing in a graph with zero-probability arcs (#797)

Other

v1.9.0 (October 17, 2024)

Added

Fixed

  • Fixed the tests to skip threading tests if running in serial (#770)
  • Fixed BanditDuality to handle the case where the standard deviation is NaN (#779)
  • Fixed an error when lagged state variables are encountered in MSPFormat (#786)
  • Fixed publication_plot with replications of different lengths (#788)
  • Fixed CTRL+C interrupting the code at unsafe points (#789)

Other

  • Documentation improvements (#771) (#772)
  • Updated printing because of changes in JuMP (#773)

v1.8.1 (August 5, 2024)

Fixed

  • Fixed various issues with SDDP.Threaded() (#761)
  • Fixed a deprecation warning for sorting a dictionary (#763)

Other

  • Updated copyright notices (#762)
  • Updated .JuliaFormatter.toml (#764)

v1.8.0 (July 24, 2024)

Added

  • Added SDDP.Threaded(), which is an experimental parallel scheme that supports solving problems using multiple threads. Some parts of SDDP.jl may not be thread-safe, and this can cause incorrect results, segfaults, or other errors. Please use with care and report any issues by opening a GitHub issue. (#758)

Other

  • Documentation improvements and fixes (#747) (#759)

v1.7.0 (June 4, 2024)

Added

  • Added sample_backward_noise_terms_with_state for creating backward pass sampling schemes that depend on the current primal state. (#742) (Thanks @arthur-brigatto)

Fixed

  • Fixed error message when publication_plot has non-finite data (#738)

Other

  • Updated the logo constructor (#730)

v1.6.7 (February 1, 2024)

Fixed

  • Fixed non-constant state dimension in the MSPFormat reader (#695)
  • Fixed SimulatorSamplingScheme for deterministic nodes (#710)
  • Fixed line search in BFGS (#711)
  • Fixed handling of NEARLY_FEASIBLE_POINT status (#726)

Other

  • Documentation improvements (#692) (#694) (#706) (#716) (#727)
  • Updated to StochOptFormat v1.0 (#705)
  • Added an experimental OuterApproximation algorithm (#709)
  • Updated .gitignore (#717)
  • Added code for MDP paper (#720) (#721)
  • Added Google analytics (#723)

v1.6.6 (September 29, 2023)

Other

v1.6.5 (September 25, 2023)

Fixed

Other

v1.6.4 (September 23, 2023)

Fixed

Other

  • Documentation updates (#658) (#666) (#671)
  • Switch to GitHub action for deploying docs (#668) (#670)
  • Update to Documenter@1 (#669)

v1.6.3 (September 8, 2023)

Fixed

  • Fixed default stopping rule with iteration_limit or time_limit set (#662)

Other

v1.6.2 (August 24, 2023)

Fixed

  • MSPFormat now detects and exploits stagewise independent lattices (#653)
  • Fixed set_optimizer for models read from file (#654)

Other

  • Fixed typo in pglib_opf.jl (#647)
  • Fixed documentation build and added color (#652)

v1.6.1 (July 20, 2023)

Fixed

  • Fixed bugs in MSPFormat reader (#638) (#639)

Other

  • Clarified OutOfSampleMonteCarlo docstring (#643)

v1.6.0 (July 3, 2023)

Added

Other

v1.5.1 (June 30, 2023)

This release contains a number of minor code changes, but it has a large impact on the content that is printed to the screen. In particular, we now log periodically, instead of at each iteration, and a "good" stopping rule is used as the default if none are specified. Try using SDDP.train(model) to see the difference.

Other

  • Fixed various typos in the documentation (#617)
  • Fixed printing test after changes in JuMP (#618)
  • Set SimulationStoppingRule as the default stopping rule (#619)
  • Changed the default logging frequency. Pass log_every_seconds = 0.0 to train to revert to the old behavior. (#620)
  • Added example usage with Distributions.jl (@slwu89) (#622)
  • Removed the numerical issue @warn (#627)
  • Improved the quality of docstrings (#630)

v1.5.0 (May 14, 2023)

Added

  • Added the ability to use a different model for the forward pass. This is a novel feature that lets you train better policies when the model is non-convex or does not have a well-defined dual. See the Alternative forward models tutorial in which we train convex and non-convex formulations of the optimal power flow problem. (#611)

Other

  • Updated missing changelog entries (#608)
  • Removed global variables (#610)
  • Converted the Options struct to keyword arguments. This struct was a private implementation detail, but the change is breaking if you developed an extension to SDDP that touched these internals. (#612)
  • Fixed some typos (#613)

v1.4.0 (May 8, 2023)

Added

Fixed

  • Fixed parsing of some MSPFormat files (#602) (#604)
  • Fixed printing in header (#605)

v1.3.0 (May 3, 2023)

Added

  • Added experimental support for SDDP.MSPFormat.read_from_file (#593)

Other

  • Updated to StochOptFormat v0.3 (#600)

v1.2.1 (May 1, 2023)

Fixed

  • Fixed log_every_seconds (#597)

v1.2.0 (May 1, 2023)

Added

Other

  • Tweaked how the log is printed (#588)
  • Updated to StochOptFormat v0.2 (#592)

v1.1.4 (April 10, 2023)

Fixed

  • Logs are now flushed every iteration (#584)

Other

  • Added docstrings to various functions (#581)
  • Minor documentation updates (#580)
  • Clarified integrality documentation (#582)
  • Updated the README (#585)
  • Number of numerical issues is now printed to the log (#586)

v1.1.3 (April 2, 2023)

Other

v1.1.2 (March 18, 2023)

Other

v1.1.1 (March 16, 2023)

Other

  • Fixed email in Project.toml
  • Added notebook to documentation tutorials (#571)

v1.1.0 (January 12, 2023)

Added

v1.0.0 (January 3, 2023)

Although we're bumping the MAJOR version, this is a non-breaking release. Going forward:

  • New features will bump the MINOR version
  • Bug fixes, maintenance, and documentation updates will bump the PATCH version
  • We will support only the Long Term Support (currently v1.6.7) and the latest patch (currently v1.8.4) releases of Julia. Updates to the LTS version will bump the MINOR version
  • Updates to the compat bounds of package dependencies will bump the PATCH version.

We do not intend any breaking changes to the public API, which would require a new MAJOR release. The public API is everything defined in the documentation. Anything not in the documentation is considered private and may change in any PATCH release.

Added

Other

v0.4.9 (January 3, 2023)

Added

Other

  • Added tutorial on Markov Decision Processes (#556)
  • Added two-stage newsvendor tutorial (#557)
  • Refactored the layout of the documentation (#554) (#555)
  • Updated copyright to 2023 (#558)
  • Fixed errors in the documentation (#561)

v0.4.8 (December 19, 2022)

Added

Fixed

  • Reverted then fixed (#531) because it failed to account for problems with integer variables (#546) (#551)

v0.4.7 (December 17, 2022)

Added

  • Added initial_node support to InSampleMonteCarlo and OutOfSampleMonteCarlo (#535)

Fixed

  • Rethrow InterruptException when solver is interrupted (#534)
  • Fixed numerical recovery when we need dual solutions (#531) (Thanks @bfpc)
  • Fixed re-using the dashboard = true option between solves (#538)
  • Fixed bug when no @stageobjective is set (now defaults to 0.0) (#539)
  • Fixed errors thrown when invalid inputs are provided to add_objective_state (#540)

Other

  • Drop support for Julia versions prior to 1.6 (#533)
  • Updated versions of dependencies (#522) (#533)
  • Switched to HiGHS in the documentation and tests (#533)
  • Added license headers (#519)
  • Fixed link in air conditioning example (#521) (Thanks @conema)
  • Clarified variable naming in deterministic equivalent (#525) (Thanks @lucasprocessi)
  • Added this change log (#536)
  • Cuts are now written to model.cuts.json when numerical instability is discovered. This can aid debugging because it allows you to reload the cuts as of the iteration that caused the numerical issue (#537)

v0.4.6 (March 25, 2022)

Other

  • Updated to JuMP v1.0 (#517)

v0.4.5 (March 9, 2022)

Fixed

  • Fixed issue with set_silent in a subproblem (#510)

Other

v0.4.4 (December 11, 2021)

Added

  • Added BanditDuality (#471)
  • Added benchmark scripts (#475) (#476) (#490)
  • write_cuts_to_file now saves visited states (#468)

Fixed

  • Fixed BoundStalling in a deterministic policy (#470) (#474)
  • Fixed magnitude warning with zero coefficients (#483)

Other

  • Improvements to LagrangianDuality (#481) (#482) (#487)
  • Improvements to StrengthenedConicDuality (#486)
  • Switch to functional form for the tests (#478)
  • Fixed typos (#472) (Thanks @vfdev-5)
  • Update to JuMP v0.22 (#498)

v0.4.3 (August 31, 2021)

Added

  • Added biobjective solver (#462)
  • Added forward_pass_callback (#466)

Other

  • Update tutorials and documentation (#459) (#465)
  • Organize how paper materials are stored (#464)

v0.4.2 (August 24, 2021)

Fixed

  • Fixed a bug in Lagrangian duality (#457)

v0.4.1 (August 23, 2021)

Other

  • Minor changes to our implementation of LagrangianDuality (#454) (#455)

v0.4.0 (August 17, 2021)

Breaking

Other

v0.3.17 (July 6, 2021)

Added

Other

  • Display more model attributes (#438)
  • Documentation improvements (#433) (#437) (#439)

v0.3.16 (June 17, 2021)

Added

Other

  • Update risk measure docstrings (#418)

v0.3.15 (June 1, 2021)

Added

Fixed

Other

  • Add JuliaFormatter (#412)
  • Documentation improvements (#406) (#408)

v0.3.14 (March 30, 2021)

Fixed

  • Fixed O(N^2) behavior in get_same_children (#393)

v0.3.13 (March 27, 2021)

Fixed

  • Fixed bug in print.jl
  • Fixed compat of Reexport (#388)

v0.3.12 (March 22, 2021)

Added

  • Added problem statistics to header (#385) (#386)

Fixed

  • Fixed subtypes in visualization (#384)

v0.3.11 (March 22, 2021)

Fixed

  • Fixed constructor in direct mode (#383)

Other

  • Fix documentation (#379)

v0.3.10 (February 23, 2021)

Fixed

  • Fixed seriescolor in publication plot (#376)

v0.3.9 (February 20, 2021)

Added

  • Add option to simulate with different incoming state (#372)
  • Added warning for cuts with high dynamic range (#373)

Fixed

  • Fixed seriesalpha in publication plot (#375)

v0.3.8 (January 19, 2021)

Other

v0.3.7 (January 8, 2021)

Other

v0.3.6 (December 17, 2020)

Other

  • Fix typos (#358)
  • Collapse navigation bar in docs (#359)
  • Update TagBot.yml (#361)

v0.3.5 (November 18, 2020)

Other

  • Update citations (#348)
  • Switch to GitHub actions (#355)

v0.3.4 (August 25, 2020)

Added

  • Added non-uniform distributionally robust risk measure (#328)
  • Added numerical recovery functions (#330)
  • Added experimental StochOptFormat (#332) (#336) (#337) (#341) (#343) (#344)
  • Added entropic risk measure (#347)

Other

v0.3.3 (June 19, 2020)

Added

  • Added asynchronous support for price and belief states (#325)
  • Added ForwardPass plug-in system (#320)

Fixed

  • Fix check for probabilities in Markovian graph (#322)

v0.3.2 (April 6, 2020)

Added

Other

  • Improve error message in deterministic equivalent (#312)
  • Update to RecipesBase 1.0 (#313)

v0.3.1 (February 26, 2020)

Fixed

  • Fixed filename in integrality_handlers.jl (#304)

v0.3.0 (February 20, 2020)

Breaking

  • Breaking changes to update to JuMP v0.21 (#300).

v0.2.4 (February 7, 2020)

Added

  • Added a counter for the number of total subproblem solves (#301)

Other

  • Update formatter (#298)
  • Added tests (#299)

v0.2.3 (January 24, 2020)

Added

  • Added support for convex risk measures (#294)

Fixed

  • Fixed bug when subproblem is infeasible (#296)
  • Fixed bug in deterministic equivalent (#297)

Other

  • Added example from IJOC paper (#293)

v0.2.2 (January 10, 2020)

Fixed

  • Fixed flakey time limit in tests (#291)

Other

  • Removed MathOptFormat.jl (#289)
  • Update copyright (#290)

v0.2.1 (December 19, 2019)

Added

  • Added support for approximating a Markov lattice (#282) (#285)
  • Add tools for visualizing the value function (#272) (#286)
  • Write .mof.json files on error (#284)

Other

  • Improve documentation (#281) (#283)
  • Update tests for Julia 1.3 (#287)

v0.2.0 (December 16, 2019)

This version added the asynchronous parallel implementation with a few minor breaking changes in how we iterated internally. It didn't break basic user-facing models, only implementations that implemented some of the extension features. It probably could have been a v1.1 release.

Added

  • Added asynchronous parallel implementation (#277)
  • Added roll-out algorithm for cyclic graphs (#279)

Other

  • Improved error messages in PolicyGraph (#271)
  • Added JuliaFormatter (#273) (#276)
  • Fixed compat bounds (#274) (#278)
  • Added documentation for simulating non-standard graphs (#280)

v0.1.0 (October 17, 2019)

A complete rewrite of SDDP.jl based on the policy graph framework. This was essentially a new package. It has minimal code in common with the previous implementation.

Development started on September 28, 2018 in Kokako.jl, and the code was merged into SDDP.jl on March 14, 2019.

The pull request SDDP.jl#180 lists the 29 issues that the rewrite closed.

v0.0.1 (April 18, 2018)

Initial release. Development had been underway since January 22, 2016 in the StochDualDynamicProgram.jl repository. The last development commit there was April 5, 2017. Work then continued in this repository for a year before the first tagged release.

+

Release notes

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

v1.10.1 (November 28, 2024)

Fixed

Other

  • Documentation updates (#801)

v1.10.0 (November 19, 2024)

Added

  • Added root_node_risk_measure keyword to train (#804)

Fixed

  • Fixed a bug with cut sharing in a graph with zero-probability arcs (#797)

Other

v1.9.0 (October 17, 2024)

Added

Fixed

  • Fixed the tests to skip threading tests if running in serial (#770)
  • Fixed BanditDuality to handle the case where the standard deviation is NaN (#779)
  • Fixed an error when lagged state variables are encountered in MSPFormat (#786)
  • Fixed publication_plot with replications of different lengths (#788)
  • Fixed CTRL+C interrupting the code at unsafe points (#789)

Other

  • Documentation improvements (#771) (#772)
  • Updated printing because of changes in JuMP (#773)

v1.8.1 (August 5, 2024)

Fixed

  • Fixed various issues with SDDP.Threaded() (#761)
  • Fixed a deprecation warning for sorting a dictionary (#763)

Other

  • Updated copyright notices (#762)
  • Updated .JuliaFormatter.toml (#764)

v1.8.0 (July 24, 2024)

Added

  • Added SDDP.Threaded(), which is an experimental parallel scheme that supports solving problems using multiple threads. Some parts of SDDP.jl may not be thread-safe, and this can cause incorrect results, segfaults, or other errors. Please use with care and report any issues by opening a GitHub issue. (#758)

Other

  • Documentation improvements and fixes (#747) (#759)

v1.7.0 (June 4, 2024)

Added

  • Added sample_backward_noise_terms_with_state for creating backward pass sampling schemes that depend on the current primal state. (#742) (Thanks @arthur-brigatto)

Fixed

  • Fixed error message when publication_plot has non-finite data (#738)

Other

  • Updated the logo constructor (#730)

v1.6.7 (February 1, 2024)

Fixed

  • Fixed non-constant state dimension in the MSPFormat reader (#695)
  • Fixed SimulatorSamplingScheme for deterministic nodes (#710)
  • Fixed line search in BFGS (#711)
  • Fixed handling of NEARLY_FEASIBLE_POINT status (#726)

Other

  • Documentation improvements (#692) (#694) (#706) (#716) (#727)
  • Updated to StochOptFormat v1.0 (#705)
  • Added an experimental OuterApproximation algorithm (#709)
  • Updated .gitignore (#717)
  • Added code for MDP paper (#720) (#721)
  • Added Google analytics (#723)

v1.6.6 (September 29, 2023)

Other

v1.6.5 (September 25, 2023)

Fixed

Other

v1.6.4 (September 23, 2023)

Fixed

Other

  • Documentation updates (#658) (#666) (#671)
  • Switch to GitHub action for deploying docs (#668) (#670)
  • Update to Documenter@1 (#669)

v1.6.3 (September 8, 2023)

Fixed

  • Fixed default stopping rule with iteration_limit or time_limit set (#662)

Other

v1.6.2 (August 24, 2023)

Fixed

  • MSPFormat now detects and exploits stagewise independent lattices (#653)
  • Fixed set_optimizer for models read from file (#654)

Other

  • Fixed typo in pglib_opf.jl (#647)
  • Fixed documentation build and added color (#652)

v1.6.1 (July 20, 2023)

Fixed

  • Fixed bugs in MSPFormat reader (#638) (#639)

Other

  • Clarified OutOfSampleMonteCarlo docstring (#643)

v1.6.0 (July 3, 2023)

Added

Other

v1.5.1 (June 30, 2023)

This release contains a number of minor code changes, but it has a large impact on the content that is printed to the screen. In particular, we now log periodically, instead of at each iteration, and a "good" stopping rule is used as the default if none are specified. Try using SDDP.train(model) to see the difference.

Other

  • Fixed various typos in the documentation (#617)
  • Fixed printing test after changes in JuMP (#618)
  • Set SimulationStoppingRule as the default stopping rule (#619)
  • Changed the default logging frequency. Pass log_every_seconds = 0.0 to train to revert to the old behavior. (#620)
  • Added example usage with Distributions.jl (@slwu89) (#622)
  • Removed the numerical issue @warn (#627)
  • Improved the quality of docstrings (#630)

v1.5.0 (May 14, 2023)

Added

  • Added the ability to use a different model for the forward pass. This is a novel feature that lets you train better policies when the model is non-convex or does not have a well-defined dual. See the Alternative forward models tutorial in which we train convex and non-convex formulations of the optimal power flow problem. (#611)

Other

  • Updated missing changelog entries (#608)
  • Removed global variables (#610)
  • Converted the Options struct to keyword arguments. This struct was a private implementation detail, but the change is breaking if you developed an extension to SDDP that touched these internals. (#612)
  • Fixed some typos (#613)

v1.4.0 (May 8, 2023)

Added

Fixed

  • Fixed parsing of some MSPFormat files (#602) (#604)
  • Fixed printing in header (#605)

v1.3.0 (May 3, 2023)

Added

  • Added experimental support for SDDP.MSPFormat.read_from_file (#593)

Other

  • Updated to StochOptFormat v0.3 (#600)

v1.2.1 (May 1, 2023)

Fixed

  • Fixed log_every_seconds (#597)

v1.2.0 (May 1, 2023)

Added

Other

  • Tweaked how the log is printed (#588)
  • Updated to StochOptFormat v0.2 (#592)

v1.1.4 (April 10, 2023)

Fixed

  • Logs are now flushed every iteration (#584)

Other

  • Added docstrings to various functions (#581)
  • Minor documentation updates (#580)
  • Clarified integrality documentation (#582)
  • Updated the README (#585)
  • Number of numerical issues is now printed to the log (#586)

v1.1.3 (April 2, 2023)

Other

v1.1.2 (March 18, 2023)

Other

v1.1.1 (March 16, 2023)

Other

  • Fixed email in Project.toml
  • Added notebook to documentation tutorials (#571)

v1.1.0 (January 12, 2023)

Added

v1.0.0 (January 3, 2023)

Although we're bumping the MAJOR version, this is a non-breaking release. Going forward:

  • New features will bump the MINOR version
  • Bug fixes, maintenance, and documentation updates will bump the PATCH version
  • We will support only the Long Term Support (currently v1.6.7) and the latest patch (currently v1.8.4) releases of Julia. Updates to the LTS version will bump the MINOR version
  • Updates to the compat bounds of package dependencies will bump the PATCH version.

We do not intend any breaking changes to the public API, which would require a new MAJOR release. The public API is everything defined in the documentation. Anything not in the documentation is considered private and may change in any PATCH release.

Added

Other

v0.4.9 (January 3, 2023)

Added

Other

  • Added tutorial on Markov Decision Processes (#556)
  • Added two-stage newsvendor tutorial (#557)
  • Refactored the layout of the documentation (#554) (#555)
  • Updated copyright to 2023 (#558)
  • Fixed errors in the documentation (#561)

v0.4.8 (December 19, 2022)

Added

Fixed

  • Reverted then fixed (#531) because it failed to account for problems with integer variables (#546) (#551)

v0.4.7 (December 17, 2022)

Added

  • Added initial_node support to InSampleMonteCarlo and OutOfSampleMonteCarlo (#535)

Fixed

  • Rethrow InterruptException when solver is interrupted (#534)
  • Fixed numerical recovery when we need dual solutions (#531) (Thanks @bfpc)
  • Fixed re-using the dashboard = true option between solves (#538)
  • Fixed bug when no @stageobjective is set (now defaults to 0.0) (#539)
  • Fixed errors thrown when invalid inputs are provided to add_objective_state (#540)

Other

  • Drop support for Julia versions prior to 1.6 (#533)
  • Updated versions of dependencies (#522) (#533)
  • Switched to HiGHS in the documentation and tests (#533)
  • Added license headers (#519)
  • Fixed link in air conditioning example (#521) (Thanks @conema)
  • Clarified variable naming in deterministic equivalent (#525) (Thanks @lucasprocessi)
  • Added this change log (#536)
  • Cuts are now written to model.cuts.json when numerical instability is discovered. This can aid debugging because it allows you to reload the cuts as of the iteration that caused the numerical issue (#537)

v0.4.6 (March 25, 2022)

Other

  • Updated to JuMP v1.0 (#517)

v0.4.5 (March 9, 2022)

Fixed

  • Fixed issue with set_silent in a subproblem (#510)

Other

v0.4.4 (December 11, 2021)

Added

  • Added BanditDuality (#471)
  • Added benchmark scripts (#475) (#476) (#490)
  • write_cuts_to_file now saves visited states (#468)

Fixed

  • Fixed BoundStalling in a deterministic policy (#470) (#474)
  • Fixed magnitude warning with zero coefficients (#483)

Other

  • Improvements to LagrangianDuality (#481) (#482) (#487)
  • Improvements to StrengthenedConicDuality (#486)
  • Switch to functional form for the tests (#478)
  • Fixed typos (#472) (Thanks @vfdev-5)
  • Update to JuMP v0.22 (#498)

v0.4.3 (August 31, 2021)

Added

  • Added biobjective solver (#462)
  • Added forward_pass_callback (#466)

Other

  • Update tutorials and documentation (#459) (#465)
  • Organize how paper materials are stored (#464)

v0.4.2 (August 24, 2021)

Fixed

  • Fixed a bug in Lagrangian duality (#457)

v0.4.1 (August 23, 2021)

Other

  • Minor changes to our implementation of LagrangianDuality (#454) (#455)

v0.4.0 (August 17, 2021)

Breaking

Other

v0.3.17 (July 6, 2021)

Added

Other

  • Display more model attributes (#438)
  • Documentation improvements (#433) (#437) (#439)

v0.3.16 (June 17, 2021)

Added

Other

  • Update risk measure docstrings (#418)

v0.3.15 (June 1, 2021)

Added

Fixed

Other

  • Add JuliaFormatter (#412)
  • Documentation improvements (#406) (#408)

v0.3.14 (March 30, 2021)

Fixed

  • Fixed O(N^2) behavior in get_same_children (#393)

v0.3.13 (March 27, 2021)

Fixed

  • Fixed bug in print.jl
  • Fixed compat of Reexport (#388)

v0.3.12 (March 22, 2021)

Added

  • Added problem statistics to header (#385) (#386)

Fixed

  • Fixed subtypes in visualization (#384)

v0.3.11 (March 22, 2021)

Fixed

  • Fixed constructor in direct mode (#383)

Other

  • Fix documentation (#379)

v0.3.10 (February 23, 2021)

Fixed

  • Fixed seriescolor in publication plot (#376)

v0.3.9 (February 20, 2021)

Added

  • Add option to simulate with different incoming state (#372)
  • Added warning for cuts with high dynamic range (#373)

Fixed

  • Fixed seriesalpha in publication plot (#375)

v0.3.8 (January 19, 2021)

Other

v0.3.7 (January 8, 2021)

Other

v0.3.6 (December 17, 2020)

Other

  • Fix typos (#358)
  • Collapse navigation bar in docs (#359)
  • Update TagBot.yml (#361)

v0.3.5 (November 18, 2020)

Other

  • Update citations (#348)
  • Switch to GitHub actions (#355)

v0.3.4 (August 25, 2020)

Added

  • Added non-uniform distributionally robust risk measure (#328)
  • Added numerical recovery functions (#330)
  • Added experimental StochOptFormat (#332) (#336) (#337) (#341) (#343) (#344)
  • Added entropic risk measure (#347)

Other

v0.3.3 (June 19, 2020)

Added

  • Added asynchronous support for price and belief states (#325)
  • Added ForwardPass plug-in system (#320)

Fixed

  • Fix check for probabilities in Markovian graph (#322)

v0.3.2 (April 6, 2020)

Added

Other

  • Improve error message in deterministic equivalent (#312)
  • Update to RecipesBase 1.0 (#313)

v0.3.1 (February 26, 2020)

Fixed

  • Fixed filename in integrality_handlers.jl (#304)

v0.3.0 (February 20, 2020)

Breaking

  • Breaking changes to update to JuMP v0.21 (#300).

v0.2.4 (February 7, 2020)

Added

  • Added a counter for the number of total subproblem solves (#301)

Other

  • Update formatter (#298)
  • Added tests (#299)

v0.2.3 (January 24, 2020)

Added

  • Added support for convex risk measures (#294)

Fixed

  • Fixed bug when subproblem is infeasible (#296)
  • Fixed bug in deterministic equivalent (#297)

Other

  • Added example from IJOC paper (#293)

v0.2.2 (January 10, 2020)

Fixed

  • Fixed flakey time limit in tests (#291)

Other

  • Removed MathOptFormat.jl (#289)
  • Update copyright (#290)

v0.2.1 (December 19, 2019)

Added

  • Added support for approximating a Markov lattice (#282) (#285)
  • Add tools for visualizing the value function (#272) (#286)
  • Write .mof.json files on error (#284)

Other

  • Improve documentation (#281) (#283)
  • Update tests for Julia 1.3 (#287)

v0.2.0 (December 16, 2019)

This version added the asynchronous parallel implementation with a few minor breaking changes in how we iterated internally. It didn't break basic user-facing models, only implementations that implemented some of the extension features. It probably could have been a v1.1 release.

Added

  • Added asynchronous parallel implementation (#277)
  • Added roll-out algorithm for cyclic graphs (#279)

Other

  • Improved error messages in PolicyGraph (#271)
  • Added JuliaFormatter (#273) (#276)
  • Fixed compat bounds (#274) (#278)
  • Added documentation for simulating non-standard graphs (#280)

v0.1.0 (October 17, 2019)

A complete rewrite of SDDP.jl based on the policy graph framework. This was essentially a new package. It has minimal code in common with the previous implementation.

Development started on September 28, 2018 in Kokako.jl, and the code was merged into SDDP.jl on March 14, 2019.

The pull request SDDP.jl#180 lists the 29 issues that the rewrite closed.

v0.0.1 (April 18, 2018)

Initial release. Development had been underway since January 22, 2016 in the StochDualDynamicProgram.jl repository. The last development commit there was April 5, 2017. Work then continued in this repository for a year before the first tagged release.

diff --git a/previews/PR810/tutorial/SDDP.log b/previews/PR810/tutorial/SDDP.log index f1905ff51..949251381 100644 --- a/previews/PR810/tutorial/SDDP.log +++ b/previews/PR810/tutorial/SDDP.log @@ -4,7 +4,7 @@ problem nodes : 30 state variables : 5 - scenarios : 8.04688e+11 + scenarios : 1.00781e+12 existing cuts : false options solver : serial mode @@ -23,24 +23,24 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 -2.446694e+01 6.206461e+01 1.255160e+00 162 1 - 58 1.282673e+01 7.838999e+00 2.271558e+00 9396 1 - 108 8.447310e+00 7.830674e+00 3.283115e+00 17496 1 - 150 9.088225e+00 7.828798e+00 4.290896e+00 24300 1 - 190 1.057515e+01 7.827192e+00 5.306258e+00 30780 1 - 227 8.600332e+00 7.826181e+00 6.324924e+00 36774 1 - 261 9.488365e+00 7.825666e+00 7.331886e+00 42282 1 - 294 8.636328e+00 7.824883e+00 8.337678e+00 47628 1 - 326 8.814296e+00 7.824712e+00 9.359044e+00 52812 1 - 465 8.803878e+00 7.824560e+00 1.437870e+01 75330 1 - 589 8.273399e+00 7.824390e+00 1.937982e+01 95418 1 - 604 9.951259e+00 7.824390e+00 2.002007e+01 97848 1 + 1 -4.199992e+01 5.821554e+01 1.260870e+00 162 1 + 60 1.033168e+01 7.916707e+00 2.262791e+00 9720 1 + 107 9.111379e+00 7.910481e+00 3.275479e+00 17334 1 + 152 7.959466e+00 7.904751e+00 4.281190e+00 24624 1 + 194 6.940984e+00 7.904578e+00 5.281531e+00 31428 1 + 230 8.432877e+00 7.904218e+00 6.304367e+00 37260 1 + 266 9.657901e+00 7.903732e+00 7.321733e+00 43092 1 + 299 9.353248e+00 7.903401e+00 8.336339e+00 48438 1 + 332 8.753305e+00 7.903401e+00 9.357229e+00 53784 1 + 469 9.297445e+00 7.902829e+00 1.439343e+01 75978 1 + 589 8.913457e+00 7.902545e+00 1.939896e+01 95418 1 + 602 1.036420e+01 7.902495e+00 2.001406e+01 97524 1 ------------------------------------------------------------------- status : time_limit -total time (s) : 2.002007e+01 -total solves : 97848 -best bound : 7.824390e+00 -simulation ci : 8.882771e+00 ± 2.950991e-01 +total time (s) : 2.001406e+01 +total solves : 97524 +best bound : 7.902495e+00 +simulation ci : 8.773997e+00 ± 3.705325e-01 numeric issues : 0 ------------------------------------------------------------------- @@ -70,52 +70,52 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 0.000000e+00 7.288656e+02 6.317139e-03 103 1 - 2 6.329607e+02 6.037296e+02 2.274394e-02 406 1 - 3 5.685322e+02 5.678895e+02 2.728415e-02 509 1 - 4 5.056097e+02 5.631990e+02 3.185511e-02 612 1 - 5 6.236593e+02 5.614769e+02 3.642607e-02 715 1 - 6 6.121789e+02 5.610169e+02 4.096293e-02 818 1 - 7 4.662857e+02 5.609613e+02 4.544497e-02 921 1 - 8 5.999526e+02 5.609230e+02 5.004311e-02 1024 1 - 9 6.143762e+02 5.609219e+02 5.454993e-02 1127 1 - 10 5.006445e+02 5.609219e+02 5.924010e-02 1230 1 - 11 4.564595e+02 5.609219e+02 6.395102e-02 1333 1 - 12 5.025865e+02 5.609219e+02 6.868196e-02 1436 1 - 13 5.290270e+02 5.609219e+02 7.337999e-02 1539 1 - 14 6.147415e+02 5.609219e+02 7.820606e-02 1642 1 - 15 6.147415e+02 5.609219e+02 8.290410e-02 1745 1 - 16 6.147415e+02 5.609219e+02 8.758092e-02 1848 1 - 17 6.147415e+02 5.609219e+02 9.228015e-02 1951 1 - 18 6.147415e+02 5.609219e+02 9.696007e-02 2054 1 - 19 6.147415e+02 5.609219e+02 1.017129e-01 2157 1 - 20 3.940125e+02 5.609219e+02 1.064010e-01 2260 1 - 21 4.621623e+02 5.609219e+02 1.256771e-01 2563 1 - 
22 4.906642e+02 5.609219e+02 1.304090e-01 2666 1 - 23 6.134997e+02 5.609219e+02 1.352000e-01 2769 1 - 24 3.803713e+02 5.609219e+02 1.398981e-01 2872 1 - 25 4.621623e+02 5.609219e+02 1.446111e-01 2975 1 - 26 6.147415e+02 5.609219e+02 1.493549e-01 3078 1 - 27 5.871833e+02 5.609219e+02 1.540511e-01 3181 1 - 28 6.147415e+02 5.609219e+02 1.588180e-01 3284 1 - 29 6.147415e+02 5.609219e+02 1.635649e-01 3387 1 - 30 6.147415e+02 5.609219e+02 1.683011e-01 3490 1 - 31 6.147415e+02 5.609219e+02 1.730430e-01 3593 1 - 32 6.147415e+02 5.609219e+02 1.778190e-01 3696 1 - 33 5.346051e+02 5.609219e+02 1.825991e-01 3799 1 - 34 6.134997e+02 5.609219e+02 1.874511e-01 3902 1 - 35 6.147415e+02 5.609219e+02 1.922271e-01 4005 1 - 36 6.147415e+02 5.609219e+02 1.971161e-01 4108 1 - 37 6.049568e+02 5.609219e+02 2.018850e-01 4211 1 - 38 3.957895e+02 5.609219e+02 2.067280e-01 4314 1 - 39 4.592685e+02 5.609219e+02 2.115819e-01 4417 1 - 40 6.147415e+02 5.609219e+02 2.163730e-01 4520 1 + 1 0.000000e+00 7.100460e+02 6.299019e-03 103 1 + 2 5.646406e+02 5.948874e+02 2.312112e-02 406 1 + 3 4.476627e+02 5.634178e+02 2.772212e-02 509 1 + 4 6.398440e+02 5.588246e+02 3.233790e-02 612 1 + 5 5.204798e+02 5.565456e+02 3.694201e-02 715 1 + 6 5.263013e+02 5.564357e+02 4.145813e-02 818 1 + 7 6.111492e+02 5.563948e+02 4.591513e-02 921 1 + 8 6.130155e+02 5.563770e+02 5.047011e-02 1024 1 + 9 6.140858e+02 5.563722e+02 5.497694e-02 1127 1 + 10 5.927906e+02 5.563722e+02 5.946493e-02 1230 1 + 11 5.102792e+02 5.563722e+02 6.396890e-02 1333 1 + 12 6.137444e+02 5.563722e+02 6.880713e-02 1436 1 + 13 4.954641e+02 5.563722e+02 7.348204e-02 1539 1 + 14 6.137444e+02 5.563722e+02 7.818413e-02 1642 1 + 15 5.150817e+02 5.563722e+02 8.286810e-02 1745 1 + 16 4.964032e+02 5.563722e+02 8.755493e-02 1848 1 + 17 6.011079e+02 5.563722e+02 9.227395e-02 1951 1 + 18 4.753277e+02 5.563722e+02 9.699798e-02 2054 1 + 19 6.137444e+02 5.563722e+02 1.017599e-01 2157 1 + 20 6.137444e+02 5.563722e+02 1.064529e-01 2260 1 + 21 5.861319e+02 5.563722e+02 1.258860e-01 2563 1 + 22 6.137444e+02 5.563722e+02 1.306260e-01 2666 1 + 23 6.137444e+02 5.563722e+02 1.353610e-01 2769 1 + 24 6.137444e+02 5.563722e+02 1.400399e-01 2872 1 + 25 6.137444e+02 5.563722e+02 2.465379e-01 2975 1 + 26 6.137444e+02 5.563722e+02 2.515249e-01 3078 1 + 27 5.508483e+02 5.563722e+02 2.564220e-01 3181 1 + 28 4.036025e+02 5.563722e+02 2.613389e-01 3284 1 + 29 5.180160e+02 5.563722e+02 2.662160e-01 3387 1 + 30 5.872052e+02 5.563722e+02 2.712021e-01 3490 1 + 31 5.036519e+02 5.563722e+02 2.761960e-01 3593 1 + 32 6.137444e+02 5.563722e+02 2.812769e-01 3696 1 + 33 6.137444e+02 5.563722e+02 2.862799e-01 3799 1 + 34 4.753277e+02 5.563722e+02 2.913539e-01 3902 1 + 35 6.137444e+02 5.563722e+02 2.964051e-01 4005 1 + 36 5.227535e+02 5.563722e+02 3.013721e-01 4108 1 + 37 4.626982e+02 5.563722e+02 3.063509e-01 4211 1 + 38 6.137444e+02 5.563722e+02 3.113050e-01 4314 1 + 39 6.137444e+02 5.563722e+02 3.162961e-01 4417 1 + 40 6.137444e+02 5.563722e+02 3.213129e-01 4520 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 2.163730e-01 +total time (s) : 3.213129e-01 total solves : 4520 -best bound : 5.609219e+02 -simulation ci : 5.457892e+02 ± 3.603988e+01 +best bound : 5.563722e+02 +simulation ci : 5.510009e+02 ± 3.348500e+01 numeric issues : 0 ------------------------------------------------------------------- @@ -145,11 +145,11 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) 
solves pid ------------------------------------------------------------------- - 1 1.079600e+03 3.157700e+02 4.199100e-02 104 1 - 10 6.829100e+02 6.829100e+02 1.441059e-01 1040 1 + 1 1.079600e+03 3.157700e+02 4.440188e-02 104 1 + 10 6.829100e+02 6.829100e+02 1.417639e-01 1040 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 1.441059e-01 +total time (s) : 1.417639e-01 total solves : 1040 best bound : 6.829100e+02 simulation ci : 7.289889e+02 ± 7.726064e+01 @@ -181,16 +181,16 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 4.759100e+02 1.200258e+02 4.662895e-02 208 1 - 46 7.062877e+01 2.475505e+02 1.058435e+00 9568 1 - 84 1.106338e+02 2.631204e+02 2.063247e+00 17472 1 - 100 3.928304e+02 2.672125e+02 2.518251e+00 20800 1 + 1 0.000000e+00 0.000000e+00 4.445004e-02 208 1 + 47 1.393051e+02 2.492638e+02 1.057663e+00 9776 1 + 86 1.564354e+02 2.659338e+02 2.084841e+00 17888 1 + 100 3.631193e+02 2.683719e+02 2.479496e+00 20800 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 2.518251e+00 +total time (s) : 2.479496e+00 total solves : 20800 -best bound : 2.672125e+02 -simulation ci : 3.123973e+02 ± 4.765300e+01 +best bound : 2.683719e+02 +simulation ci : 2.733888e+02 ± 3.837418e+01 numeric issues : 0 ------------------------------------------------------------------- @@ -219,33 +219,36 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 6.294630e+04 4.219840e+04 2.404258e-01 2707 1 - 5 1.552348e+05 9.082613e+04 1.774935e+00 19567 1 - 12 2.659207e+05 9.302003e+04 4.033805e+00 40596 1 - 15 1.484517e+05 9.319342e+04 5.628028e+00 52877 1 - 18 9.571809e+03 9.323922e+04 6.634310e+00 59958 1 - 20 3.876251e+05 9.326436e+04 9.501914e+00 78476 1 - 25 1.837013e+05 9.334504e+04 1.454321e+01 108443 1 - 29 5.378171e+04 9.335848e+04 1.979666e+01 135495 1 - 37 2.330216e+05 9.337001e+04 2.659127e+01 166303 1 - 43 7.152926e+04 9.337441e+04 3.235910e+01 189825 1 - 51 1.426944e+05 9.337775e+04 3.750609e+01 209401 1 - 61 1.951056e+04 9.337986e+04 4.271820e+01 228151 1 - 68 1.357792e+05 9.338090e+04 4.798858e+01 243980 1 - 75 8.543836e+04 9.338263e+04 5.417524e+01 263761 1 - 79 1.440154e+05 9.338364e+04 6.035429e+01 282285 1 - 81 1.599147e+05 9.338378e+04 6.688132e+01 301011 1 - 85 6.785331e+04 9.338609e+04 7.196572e+01 315167 1 - 88 1.421650e+05 9.338729e+04 7.715207e+01 329320 1 - 92 7.093559e+04 9.338862e+04 8.297624e+01 344516 1 - 94 3.166021e+05 9.338889e+04 9.057555e+01 363450 1 - 100 1.084466e+05 9.339011e+04 9.695549e+01 379068 1 + 1 3.129477e+04 2.410097e+04 1.429400e-01 1459 1 + 7 3.912259e+04 8.832886e+04 1.330084e+00 15205 1 + 10 1.083430e+05 9.250045e+04 2.357529e+00 26238 1 + 13 2.588539e+05 9.329172e+04 5.359828e+00 45799 1 + 14 2.504203e+05 9.334514e+04 6.836051e+00 56618 1 + 16 1.205895e+05 9.334634e+04 7.912949e+00 63904 1 + 21 1.145414e+05 9.335654e+04 1.300891e+01 94079 1 + 30 1.712406e+05 9.337112e+04 1.924543e+01 126762 1 + 36 3.406886e+05 9.337325e+04 2.506985e+01 153612 1 + 47 4.332582e+04 9.337875e+04 3.027704e+01 175901 1 + 51 1.634254e+05 9.337981e+04 3.577433e+01 197545 1 + 53 3.974429e+05 9.338067e+04 4.172650e+01 218559 1 + 54 4.038175e+05 
9.338101e+04 4.688607e+01 236034 1 + 61 1.649721e+05 9.338615e+04 5.283478e+01 254983 1 + 64 3.177687e+05 9.338634e+04 5.924092e+01 274544 1 + 66 1.436600e+05 9.338666e+04 6.546633e+01 292854 1 + 68 3.437550e+05 9.338708e+04 7.089886e+01 308252 1 + 71 2.662122e+05 9.338883e+04 7.811391e+01 327813 1 + 74 2.533959e+05 9.339006e+04 8.535554e+01 346542 1 + 79 1.620139e+05 9.339146e+04 9.251919e+01 364237 1 + 85 1.495574e+05 9.339233e+04 1.002059e+02 382559 1 + 91 1.701819e+05 9.339296e+04 1.052799e+02 394433 1 + 95 1.221699e+05 9.339330e+04 1.125305e+02 410461 1 + 100 3.531429e+04 9.339343e+04 1.179270e+02 422124 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 9.695549e+01 -total solves : 379068 -best bound : 9.339011e+04 -simulation ci : 8.466519e+04 ± 1.533736e+04 +total time (s) : 1.179270e+02 +total solves : 422124 +best bound : 9.339343e+04 +simulation ci : 9.498564e+04 ± 1.929349e+04 numeric issues : 0 ------------------------------------------------------------------- @@ -274,14 +277,14 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 1.750000e+04 2.500000e+03 3.592968e-03 12 1 - 10 1.000000e+04 8.333333e+03 1.350904e-02 120 1 + 1 2.750000e+04 3.437500e+03 3.911018e-03 12 1 + 10 5.000000e+03 8.333333e+03 1.412487e-02 120 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 1.350904e-02 +total time (s) : 1.412487e-02 total solves : 120 best bound : 8.333333e+03 -simulation ci : 8.000000e+03 ± 2.400500e+03 +simulation ci : 8.031250e+03 ± 4.822873e+03 numeric issues : 0 ------------------------------------------------------------------- @@ -311,23 +314,19 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 4.555632e+05 4.573582e+04 1.892209e-02 212 1 - 56 1.154817e+05 1.443323e+05 1.029831e+00 15172 1 - 107 1.529994e+05 1.443373e+05 2.034419e+00 28184 1 - 169 1.632711e+05 1.443373e+05 3.050451e+00 41328 1 - 216 1.761974e+05 1.443373e+05 4.057784e+00 52392 1 - 265 1.039868e+05 1.443373e+05 5.074113e+00 62780 1 - 305 6.508158e+04 1.443373e+05 6.094956e+00 72360 1 - 347 1.746395e+05 1.443373e+05 7.099320e+00 81264 1 - 386 1.116079e+05 1.443373e+05 8.183207e+00 90632 1 - 429 2.027237e+05 1.443374e+05 9.197365e+00 99748 1 - 485 1.255026e+05 1.443374e+05 1.050865e+01 111620 1 + 1 3.555632e+05 4.573582e+04 1.939392e-02 212 1 + 55 1.726543e+05 1.443370e+05 1.076247e+00 14960 1 + 110 1.879026e+05 1.443374e+05 2.077597e+00 28820 1 + 174 1.325763e+05 1.443374e+05 3.090309e+00 42388 1 + 225 1.785132e+05 1.443374e+05 4.096940e+00 54300 1 + 279 1.046605e+05 1.443374e+05 5.102572e+00 65748 1 + 288 1.135447e+05 1.443374e+05 5.356144e+00 67656 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 1.050865e+01 -total solves : 111620 +total time (s) : 5.356144e+00 +total solves : 67656 best bound : 1.443374e+05 -simulation ci : 1.444482e+05 ± 2.751330e+03 +simulation ci : 1.441118e+05 ± 3.704570e+03 numeric issues : 0 ------------------------------------------------------------------- @@ -356,29 +355,29 @@ numerical stability report 
------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 1.269150e+06 4.703021e+04 2.517104e-02 421 1 - 30 2.976537e+05 2.926072e+05 1.056402e+00 14604 1 - 56 9.668994e+04 3.115604e+05 2.064710e+00 26054 1 - 76 2.051197e+05 3.124173e+05 3.079618e+00 36679 1 - 97 5.345856e+05 3.126419e+05 4.127804e+00 46780 1 - 115 9.010830e+05 3.126594e+05 5.237755e+00 56164 1 - 135 4.574187e+05 3.126642e+05 6.242874e+00 64416 1 - 153 8.147263e+05 3.126649e+05 7.352225e+00 72729 1 - 170 4.383869e+05 3.126650e+05 8.413833e+00 80243 1 - 186 3.350605e+05 3.126650e+05 9.436581e+00 86958 1 - 248 7.271447e+05 3.126650e+05 1.454413e+01 114005 1 - 291 8.546395e+05 3.126650e+05 1.988850e+01 134334 1 - 330 4.186816e+05 3.126650e+05 2.502303e+01 147981 1 - 351 8.329132e+05 3.126650e+05 3.037550e+01 159069 1 - 372 7.599868e+05 3.126650e+05 3.576521e+01 168519 1 - 392 4.926184e+05 3.126650e+05 4.093949e+01 176876 1 - 400 1.508921e+05 3.126650e+05 4.210835e+01 179089 1 + 1 1.207737e+06 4.704379e+04 2.554393e-02 442 1 + 25 3.929922e+05 3.063274e+05 1.061369e+00 14662 1 + 45 4.918127e+04 3.122041e+05 2.580965e+00 23796 1 + 78 1.253479e+05 3.126516e+05 3.590325e+00 34434 1 + 102 8.652224e+05 3.126637e+05 4.656458e+00 44496 1 + 125 1.801968e+05 3.126649e+05 5.671608e+00 53066 1 + 147 5.729555e+05 3.126650e+05 6.730793e+00 60921 1 + 166 8.034395e+05 3.126650e+05 7.818564e+00 68563 1 + 182 4.155658e+05 3.126650e+05 8.862454e+00 75740 1 + 195 4.576289e+05 3.126650e+05 9.894990e+00 82221 1 + 252 3.248711e+05 3.126650e+05 1.499934e+01 106869 1 + 287 1.328155e+06 3.126650e+05 2.066777e+01 126938 1 + 317 5.672289e+05 3.126650e+05 2.578931e+01 140345 1 + 348 4.242395e+05 3.126650e+05 3.101493e+01 152136 1 + 370 7.337974e+05 3.126650e+05 3.615803e+01 162931 1 + 391 3.119868e+05 3.126650e+05 4.120424e+01 173599 1 + 400 2.543868e+05 3.126650e+05 4.317341e+01 177430 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 4.210835e+01 -total solves : 179089 +total time (s) : 4.317341e+01 +total solves : 177430 best bound : 3.126650e+05 -simulation ci : 3.267826e+05 ± 2.594249e+04 +simulation ci : 3.219091e+05 ± 2.988930e+04 numeric issues : 0 ------------------------------------------------------------------- @@ -407,14 +406,14 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 1.875000e+04 1.991887e+03 5.095005e-03 18 1 - 40 5.000000e+03 8.072917e+03 1.326931e-01 1320 1 + 1 9.375000e+03 1.991887e+03 5.294085e-03 18 1 + 40 1.875000e+03 8.072917e+03 1.307061e-01 1320 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 1.326931e-01 +total time (s) : 1.307061e-01 total solves : 1320 best bound : 8.072917e+03 -simulation ci : 5.763897e+03 ± 1.456483e+03 +simulation ci : 5.893516e+03 ± 1.634605e+03 numeric issues : 0 ------------------------------------------------------------------- @@ -445,11 +444,11 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 2.499895e+01 1.562631e+00 1.627302e-02 6 1 - 40 8.333333e+00 8.333333e+00 6.881430e-01 246 1 + 1 2.499895e+01 
1.562631e+00 1.644802e-02 6 1 + 40 8.333333e+00 8.333333e+00 6.874740e-01 246 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 6.881430e-01 +total time (s) : 6.874740e-01 total solves : 246 best bound : 8.333333e+00 simulation ci : 8.810723e+00 ± 8.167195e-01 @@ -482,14 +481,14 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 0.000000e+00 8.100000e+00 2.646923e-03 5 1 - 40 4.000000e+00 6.561000e+00 6.804099e-01 2790 1 + 1 0.000000e+00 1.000000e+01 6.258965e-03 17 1 + 40 0.000000e+00 6.561000e+00 7.429550e-01 2778 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 6.804099e-01 -total solves : 2790 +total time (s) : 7.429550e-01 +total solves : 2778 best bound : 6.561000e+00 -simulation ci : 5.875000e+00 ± 2.488335e+00 +simulation ci : 8.575000e+00 ± 3.244899e+00 numeric issues : 0 ------------------------------------------------------------------- @@ -514,16 +513,14 @@ subproblem structure ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 8.505000e+03 3.261069e+03 2.157211e-02 39 1 - 245 7.200000e+03 5.092593e+03 1.023995e+00 11355 1 - 456 2.812500e+03 5.092593e+03 2.026049e+00 20184 1 - 494 9.453125e+03 5.092593e+03 2.184438e+00 21666 1 + 1 1.757812e+03 3.181818e+03 2.242303e-02 39 1 + 66 3.918750e+03 5.085973e+03 3.104100e-01 3474 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 2.184438e+00 -total solves : 21666 -best bound : 5.092593e+03 -simulation ci : 5.137930e+03 ± 3.372036e+02 +total time (s) : 3.104100e-01 +total solves : 3474 +best bound : 5.085973e+03 +simulation ci : 4.799623e+03 ± 8.870322e+02 numeric issues : 0 ------------------------------------------------------------------- @@ -548,14 +545,14 @@ subproblem structure ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 4.250000e+03 2.663219e+03 2.413511e-02 39 1 - 78 1.000000e+04 5.135984e+03 4.012141e-01 3942 1 + 1 8.687500e+03 2.123386e+03 2.354884e-02 39 1 + 52 1.562500e+03 5.135984e+03 2.527518e-01 2628 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 4.012141e-01 -total solves : 3942 +total time (s) : 2.527518e-01 +total solves : 2628 best bound : 5.135984e+03 -simulation ci : 5.347281e+03 ± 8.138305e+02 +simulation ci : 4.408869e+03 ± 1.067981e+03 numeric issues : 0 ------------------------------------------------------------------- @@ -584,14 +581,14 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 3.375000e+04 5.735677e+03 3.618002e-03 12 1 - 40 1.125000e+04 1.062500e+04 6.784797e-02 642 1 + 1 1.562500e+04 3.958333e+03 4.122019e-03 12 1 + 40 2.437500e+04 1.062500e+04 6.924701e-02 642 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 6.784797e-02 +total time (s) : 6.924701e-02 total solves 
: 642 best bound : 1.062500e+04 -simulation ci : 1.148327e+04 ± 2.624878e+03 +simulation ci : 1.076202e+04 ± 2.592898e+03 numeric issues : 0 ------------------------------------------------------------------- @@ -621,16 +618,17 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 1.613400e+06 8.488492e+04 1.760440e-01 43 1 - 3 1.433283e+06 3.495291e+05 2.528774e+00 433 1 - 8 3.192907e+05 4.044829e+05 3.745779e+00 660 1 - 10 2.681262e+05 4.142032e+05 4.672973e+00 794 1 + 1 7.393997e+04 4.830356e+04 8.193421e-02 19 1 + 4 2.709192e+06 3.546782e+05 1.305999e+00 224 1 + 5 1.044890e+06 3.860311e+05 2.347426e+00 407 1 + 8 5.564860e+05 4.165025e+05 3.568584e+00 592 1 + 10 8.739944e+04 4.209037e+05 3.781372e+00 626 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 4.672973e+00 -total solves : 794 -best bound : 4.142032e+05 -simulation ci : 1.208045e+06 ± 9.580998e+05 +total time (s) : 3.781372e+00 +total solves : 626 +best bound : 4.209037e+05 +simulation ci : 5.168449e+05 ± 5.175064e+05 numeric issues : 0 ------------------------------------------------------------------- @@ -662,17 +660,19 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 1.304777e+06 3.307234e+04 1.137197e+00 95 1 - 3 1.210637e+06 3.612899e+05 4.331871e+00 325 1 - 5 1.581004e+05 3.869996e+05 5.572880e+00 423 1 - 9 2.023526e+05 3.961325e+05 6.980048e+00 515 1 - 10 1.762718e+04 3.986865e+05 7.069830e+00 522 1 + 1 1.031538e+05 5.088391e+04 2.109609e-01 23 1 + 2 4.798288e+05 7.222559e+04 1.275940e+00 114 1 + 3 5.412067e+05 3.098500e+05 2.665144e+00 209 1 + 4 1.114535e+06 3.736372e+05 5.722338e+00 420 1 + 5 6.900717e+06 3.745743e+05 8.119072e+00 543 1 + 8 9.521998e+05 3.899629e+05 1.341809e+01 872 1 + 10 2.143338e+05 4.194698e+05 1.507519e+01 994 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 7.069830e+00 -total solves : 522 -best bound : 3.986865e+05 -simulation ci : 4.217323e+05 ± 3.226241e+05 +total time (s) : 1.507519e+01 +total solves : 994 +best bound : 4.194698e+05 +simulation ci : 1.159828e+06 ± 1.265207e+06 numeric issues : 0 ------------------------------------------------------------------- @@ -702,16 +702,16 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 7.378006e+04 6.623885e+04 1.027219e-01 15 1 - 5 1.866414e+05 1.813190e+05 1.201364e+00 132 1 - 9 1.347272e+06 3.470092e+05 3.516584e+00 369 1 - 10 2.020707e+05 3.574171e+05 3.800729e+00 399 1 + 1 8.713702e+05 4.874882e+04 1.606112e-01 27 1 + 4 1.214819e+06 3.956505e+05 2.514511e+00 231 1 + 8 3.098552e+06 4.127304e+05 4.020038e+00 387 1 + 10 9.173249e+05 4.228124e+05 5.520834e+00 534 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 3.800729e+00 -total solves : 399 -best bound : 3.574171e+05 -simulation ci : 3.047251e+05 ± 2.345430e+05 +total time (s) : 5.520834e+00 +total solves : 534 +best bound : 4.228124e+05 +simulation ci : 7.800745e+05 ± 6.001875e+05 numeric issues : 0 
------------------------------------------------------------------- @@ -735,14 +735,14 @@ subproblem structure ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 1.875000e+04 1.991887e+03 1.427794e-02 18 1 - 20 1.875000e+03 8.072917e+03 5.452800e-02 360 1 + 1 5.625000e+04 1.991887e+03 1.461601e-02 18 1 + 20 1.875000e+03 8.072917e+03 4.938006e-02 360 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 5.452800e-02 +total time (s) : 4.938006e-02 total solves : 360 best bound : 8.072917e+03 -simulation ci : 1.042034e+04 ± 3.235302e+03 +simulation ci : 8.927233e+03 ± 5.372277e+03 numeric issues : 0 ------------------------------------------------------------------- @@ -766,11 +766,11 @@ subproblem structure ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 6.500000e+00 3.000000e+00 3.223896e-03 6 1 - 5 3.500000e+00 3.500000e+00 6.193876e-03 30 1 + 1 6.500000e+00 3.000000e+00 3.069878e-03 6 1 + 5 3.500000e+00 3.500000e+00 5.754948e-03 30 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 6.193876e-03 +total time (s) : 5.754948e-03 total solves : 30 best bound : 3.500000e+00 simulation ci : 4.100000e+00 ± 1.176000e+00 @@ -797,11 +797,11 @@ subproblem structure ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 6.500000e+00 1.100000e+01 3.274918e-03 6 1 - 5 5.500000e+00 1.100000e+01 5.815029e-03 30 1 + 1 6.500000e+00 1.100000e+01 3.180027e-03 6 1 + 5 5.500000e+00 1.100000e+01 5.640030e-03 30 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 5.815029e-03 +total time (s) : 5.640030e-03 total solves : 30 best bound : 1.100000e+01 simulation ci : 5.700000e+00 ± 3.920000e-01 diff --git a/previews/PR810/tutorial/arma/index.html b/previews/PR810/tutorial/arma/index.html index 2279dce71..d399baf13 100644 --- a/previews/PR810/tutorial/arma/index.html +++ b/previews/PR810/tutorial/arma/index.html @@ -44,36 +44,36 @@ end return inflow end
simulator (generic function with 1 method)

When called with no arguments, it produces a vector of inflows:

simulator()
3-element Vector{Float64}:
- 50.1
- 50.2
- 59.800000000000004
Warning

The simulator must return a Vector{Float64}, so it is limited to a uni-variate random variable. It is possible to do something similar for multi-variate random variables, but you'll have to manually construct the Markov transition matrix, and solution times scale poorly, even in the two-dimensional case.
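To make that requirement concrete, here is a hedged sketch of a simulator with the right return type; the name my_simulator, the AR(1)-style dynamics, and the noise support are illustrative and not the tutorial's exact code:

function my_simulator()
    inflow = zeros(3)          # one entry per stage
    current = 50.0
    Ω = [-10.0, 0.1, 9.6]      # illustrative noise support
    for t in 1:3
        current += rand(Ω)     # rand picks a uniform element of Ω
        inflow[t] = current
    end
    return inflow              # a Vector{Float64}, as required
end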

The next step is to call SDDP.MarkovianGraph with our simulator. This function will attempt to fit a Markov chain to the stochastic process produced by your simulator. There are two key arguments:

graph = SDDP.MarkovianGraph(simulator; budget = 8, scenarios = 30)
Root
+ 40.0
+ 30.0
+ 30.1
Warning

The simulator must return a Vector{Float64}, so it is limited to a uni-variate random variable. It is possible to do something similar for multi-variate random variables, but you'll have to manually construct the Markov transition matrix, and solution times scale poorly, even in the two-dimensional case.

The next step is to call SDDP.MarkovianGraph with our simulator. This function will attempt to fit a Markov chain to the stochastic process produced by your simulator. There are two key arguments:

graph = SDDP.MarkovianGraph(simulator; budget = 8, scenarios = 30)
Root
  (0, 0.0)
 Nodes
- (1, 47.91850113808617)
- (2, 30.0)
- (2, 48.42875743645697)
- (2, 68.0147432335601)
- (3, 21.139230466372076)
- (3, 49.240974198872166)
- (3, 52.974739628861826)
- (3, 78.16582282799145)
+ (1, 46.92132893571418)
+ (2, 46.32243499322815)
+ (2, 68.37159320526222)
+ (2, 69.2)
+ (3, 44.00132036450157)
+ (3, 59.202641512018864)
+ (3, 60.62874467567708)
+ (3, 78.8)
 Arcs
- (0, 0.0) => (1, 47.91850113808617) w.p. 1.0
- (1, 47.91850113808617) => (2, 30.0) w.p. 0.13333333333333333
- (1, 47.91850113808617) => (2, 48.42875743645697) w.p. 0.5333333333333333
- (1, 47.91850113808617) => (2, 68.0147432335601) w.p. 0.3333333333333333
- (2, 30.0) => (3, 21.139230466372076) w.p. 1.0
- (2, 30.0) => (3, 49.240974198872166) w.p. 0.0
- (2, 30.0) => (3, 52.974739628861826) w.p. 0.0
- (2, 30.0) => (3, 78.16582282799145) w.p. 0.0
- (2, 48.42875743645697) => (3, 21.139230466372076) w.p. 0.1875
- (2, 48.42875743645697) => (3, 49.240974198872166) w.p. 0.625
- (2, 48.42875743645697) => (3, 52.974739628861826) w.p. 0.1875
- (2, 48.42875743645697) => (3, 78.16582282799145) w.p. 0.0
- (2, 68.0147432335601) => (3, 21.139230466372076) w.p. 0.0
- (2, 68.0147432335601) => (3, 49.240974198872166) w.p. 0.3
- (2, 68.0147432335601) => (3, 52.974739628861826) w.p. 0.2
- (2, 68.0147432335601) => (3, 78.16582282799145) w.p. 0.5

Here we can see that we have created a MarkovianGraph with nodes like (2, 59.7). The first element of each node is the stage, and the second element is the inflow.

Create a SDDP.PolicyGraph using graph as follows:

model = SDDP.PolicyGraph(
+ (0, 0.0) => (1, 46.92132893571418) w.p. 1.0
+ (1, 46.92132893571418) => (2, 68.37159320526222) w.p. 0.23333333333333334
+ (1, 46.92132893571418) => (2, 46.32243499322815) w.p. 0.6333333333333333
+ (1, 46.92132893571418) => (2, 69.2) w.p. 0.13333333333333333
+ (2, 46.32243499322815) => (3, 59.202641512018864) w.p. 0.21052631578947367
+ (2, 46.32243499322815) => (3, 44.00132036450157) w.p. 0.7368421052631579
+ (2, 46.32243499322815) => (3, 60.62874467567708) w.p. 0.05263157894736842
+ (2, 46.32243499322815) => (3, 78.8) w.p. 0.0
+ (2, 68.37159320526222) => (3, 59.202641512018864) w.p. 0.42857142857142855
+ (2, 68.37159320526222) => (3, 44.00132036450157) w.p. 0.14285714285714285
+ (2, 68.37159320526222) => (3, 60.62874467567708) w.p. 0.2857142857142857
+ (2, 68.37159320526222) => (3, 78.8) w.p. 0.14285714285714285
+ (2, 69.2) => (3, 59.202641512018864) w.p. 0.0
+ (2, 69.2) => (3, 44.00132036450157) w.p. 0.0
+ (2, 69.2) => (3, 60.62874467567708) w.p. 0.5
+ (2, 69.2) => (3, 78.8) w.p. 0.5

Here we can see that we have created a MarkovianGraph with nodes like (2, 59.7). The first element of each node is the stage, and the second element is the inflow.

Create a SDDP.PolicyGraph using graph as follows:

model = SDDP.PolicyGraph(
     graph;  # <--- New stuff
     sense = :Min,
     lower_bound = 0.0,
@@ -90,7 +90,7 @@
     # The new water balance constraint using the node:
     @constraint(sp, x.out == x.in - g_h - s + inflow)
 end
A policy graph with 8 nodes.
- Node indices: (1, 47.91850113808617), (2, 30.0), (2, 48.42875743645697), (2, 68.0147432335601), (3, 21.139230466372076), (3, 49.240974198872166), (3, 52.974739628861826), (3, 78.16582282799145)
+ Node indices: (1, 46.92132893571418), (2, 46.32243499322815), (2, 68.37159320526222), (2, 69.2), (3, 44.00132036450157), (3, 59.202641512018864), (3, 60.62874467567708), (3, 78.8)
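Most of the subproblem body is elided by the diff markers above. Purely as an illustration of the pattern, a policy graph built over graph might be sketched as below; the reservoir capacity, demand, and thermal cost are placeholder numbers, not the tutorial's data:

using SDDP, HiGHS

model = SDDP.PolicyGraph(
    graph;                        # the fitted Markovian graph
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, node
    t, inflow = node              # each node carries (stage, inflow)
    @variable(sp, 0 <= x <= 200, SDDP.State, initial_value = 200)  # placeholder reservoir
    @variable(sp, g_t >= 0)       # thermal generation
    @variable(sp, g_h >= 0)       # hydro generation
    @variable(sp, s >= 0)         # spill
    @constraint(sp, g_h + g_t == 150)                 # placeholder demand
    # The water balance constraint using the node's inflow:
    @constraint(sp, x.out == x.in - g_h - s + inflow)
    @stageobjective(sp, 50 * g_t)                     # placeholder thermal cost
end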
 

When can this trick be used?

The Markov chain approach should be used when:

Vector auto-regressive models

The state-space expansion section assumed that the random variable was uni-variate. However, the approach naturally extends to vector auto-regressive models. For example, if inflow is a 2-dimensional vector, then we can fit a vector auto-regressive model to it as follows:

\[inflow_{t} = A \times inflow_{t-1} + b + \varepsilon\]

Here $A$ is a 2-by-2 matrix, and $b$ and $\varepsilon$ are 2-by-1 vectors.

model = SDDP.LinearPolicyGraph(;
     stages = 3,
     sense = :Min,
@@ -130,4 +130,4 @@
     end
 end
A policy graph with 3 nodes.
  Node indices: 1, 2, 3
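The body of this subproblem is also elided by the diff. One way to encode the vector auto-regression above is to carry the lagged inflow vector as a state and fix the noise inside SDDP.parameterize; the sketch below follows that pattern, but the matrix A, vector b, noise support, bounds, and costs are all assumptions for illustration, not the tutorial's values:

using SDDP, HiGHS, JuMP

A = [0.8 0.1; 0.2 0.7]                       # assumed AR coefficient matrix
b = [5.0, 3.0]                               # assumed intercept vector
Ω = [[0.0, 0.0], [2.0, -1.0], [-1.0, 3.0]]   # assumed noise support

model = SDDP.LinearPolicyGraph(;
    stages = 3,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variable(sp, 0 <= x <= 200, SDDP.State, initial_value = 200)   # placeholder storage
    @variable(sp, inflow[1:2], SDDP.State, initial_value = 50.0)    # lagged inflow as state
    @variable(sp, ε[1:2])                                           # realized noise
    @variable(sp, g_h >= 0)
    @variable(sp, g_t >= 0)
    @variable(sp, s >= 0)
    SDDP.parameterize(sp, Ω) do ω
        JuMP.fix.(ε, ω)                      # fix the noise to the sampled realization
        return
    end
    # The VAR(1) recursion inflow_t = A * inflow_{t-1} + b + ε as state dynamics:
    @constraint(sp, [i in 1:2],
        inflow[i].out == sum(A[i, j] * inflow[j].in for j in 1:2) + b[i] + ε[i])
    # Water balance on the first catchment only, for illustration:
    @constraint(sp, x.out == x.in - g_h - s + inflow[1].out)
    @constraint(sp, g_h + g_t == 150)
    @stageobjective(sp, 50 * g_t)
end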
-
+ diff --git a/previews/PR810/tutorial/convex.cuts.json b/previews/PR810/tutorial/convex.cuts.json index 4fa2636b8..dec895e28 100644 --- a/previews/PR810/tutorial/convex.cuts.json +++ b/previews/PR810/tutorial/convex.cuts.json @@ -1 +1 @@ -[{"risk_set_cuts":[],"node":"1","single_cuts":[{"state":{"x":0.0},"intercept":243326.5932873207,"coefficients":{"x":-317616.6666712048}},{"state":{"x":0.0},"intercept":321380.875475521,"coefficients":{"x":-318249.99997748446}},{"state":{"x":0.0},"intercept":346483.98068044573,"coefficients":{"x":-318250.0000028512}},{"state":{"x":0.0},"intercept":354558.1883998782,"coefficients":{"x":-318250.0000165849}},{"state":{"x":0.0},"intercept":357155.19103196025,"coefficients":{"x":-318250.00002266077}},{"state":{"x":2.1491341273379417e-9},"intercept":357990.4949258979,"coefficients":{"x":-318250.0000249221}},{"state":{"x":2.339784569251895},"intercept":20179.679590501575,"coefficients":{"x":-1583.3333594441465}},{"state":{"x":2.0048370032484706},"intercept":41058.473036351555,"coefficients":{"x":-2084.7222959338355}},{"state":{"x":4.669890967482002},"intercept":54793.11561146089,"coefficients":{"x":-1980.4861982101913}},{"state":{"x":7.33494488161516},"intercept":68032.19399882176,"coefficients":{"x":-1881.4619003869643}},{"state":{"x":0.8728993428869813},"intercept":168186.57458887197,"coefficients":{"x":-102324.96301563102}},{"state":{"x":1.1777451506436714},"intercept":138158.75088564894,"coefficients":{"x":-33986.23885335506}},{"state":{"x":0.8427965773953587},"intercept":180512.1868575961,"coefficients":{"x":-112174.80901169186}},{"state":{"x":1.542796488674545},"intercept":130724.70284498815,"coefficients":{"x":-46917.665115821765}},{"state":{"x":1.207847915494837},"intercept":146439.7078488976,"coefficients":{"x":-46917.6652402858}},{"state":{"x":0.8728993432799144},"intercept":177135.41483783035,"coefficients":{"x":-112174.80901980447}},{"state":{"x":0.872899343279914},"intercept":177135.41483708614,"coefficients":{"x":-112174.80899305634}},{"state":{"x":0.8728994298405242},"intercept":177135.4051264201,"coefficients":{"x":-112174.80896661564}},{"state":{"x":0.17289943154177148},"intercept":371480.3778019629,"coefficients":{"x":-332473.92735413014}},{"state":{"x":0.872899342821537},"intercept":198831.87154252554,"coefficients":{"x":-116679.05269773738}},{"state":{"x":1.2078480020554363},"intercept":159750.37930819116,"coefficients":{"x":-116679.05252697827}},{"state":{"x":0.8728994298405238},"intercept":198831.86139343018,"coefficients":{"x":-116679.0526909371}},{"state":{"x":0.17289943250126852},"intercept":393955.61534688843,"coefficients":{"x":-332473.92746272276}},{"state":{"x":0.8728993437810341},"intercept":205949.0300916694,"coefficients":{"x":-116679.05273685853}},{"state":{"x":0.0},"intercept":462990.39998079184,"coefficients":{"x":-354565.03328869643}},{"state":{"x":0.17289948688966558},"intercept":405343.86094259284,"coefficients":{"x":-354565.03296133486}},{"state":{"x":0.872899398169431},"intercept":209555.30724264815,"coefficients":{"x":-123674.56948325901}},{"state":{"x":1.877744844407134},"intercept":125898.07499625174,"coefficients":{"x":-40746.94700590536}},{"state":{"x":1.5427964886745456},"intercept":142689.42111115973,"coefficients":{"x":-52700.14690281602}},{"state":{"x":1.2078479154948374},"intercept":170436.5768467136,"coefficients":{"x":-125815.46026125137}},{"state":{"x":0.8728993432799143},"intercept":212578.2856132213,"coefficients":{"x":-125815.46041362411}},{"state":{"x":0.872899342821537},"intercept":212578.28567090165,"coef
ficients":{"x":-125815.4604066796}},{"state":{"x":1.207847915494837},"intercept":170436.57684716248,"coefficients":{"x":-125815.4601964341}},{"state":{"x":0.8728993432799144},"intercept":212578.28561324032,"coefficients":{"x":-125815.46039973486}},{"state":{"x":0.8728993432799144},"intercept":212578.28561324027,"coefficients":{"x":-125815.46039973486}},{"state":{"x":0.8728993432799144},"intercept":212578.28561325016,"coefficients":{"x":-125815.46039278999}},{"state":{"x":0.8728995331449476},"intercept":212578.2617252837,"coefficients":{"x":-125815.46039973464}},{"state":{"x":0.0},"intercept":468705.6380365549,"coefficients":{"x":-357458.2288412342}},{"state":{"x":0.0},"intercept":469357.2319081951,"coefficients":{"x":-357458.22884185245}},{"state":{"x":0.0},"intercept":469563.56996761076,"coefficients":{"x":-357458.2288413445}},{"state":{"x":0.5078480047669497},"intercept":298797.4392903468,"coefficients":{"x":-330519.8699444624}},{"state":{"x":1.2078479160466338},"intercept":174304.5368139242,"coefficients":{"x":-118201.15859099312}},{"state":{"x":0.8728993438317114},"intercept":213895.8461217894,"coefficients":{"x":-118201.15892605369}},{"state":{"x":2.669897010783385},"intercept":108941.82577796208,"coefficients":{"x":-14486.533030036375}},{"state":{"x":5.334948546476229},"intercept":92373.35241924491,"coefficients":{"x":-6133.198423120689}},{"state":{"x":19.696152569240727},"intercept":25275.208199603367,"coefficients":{"x":-3525.512814929824}},{"state":{"x":20.396152502960078},"intercept":62058.70761418831,"coefficients":{"x":-1787.3888224504446}},{"state":{"x":19.696152563310562},"intercept":80709.58098899585,"coefficients":{"x":-1698.0193956457613}},{"state":{"x":20.396152497029913},"intercept":95912.14932728553,"coefficients":{"x":-1613.118437374026}},{"state":{"x":20.63109879997459},"intercept":110935.65877805717,"coefficients":{"x":-1532.4625253056406}},{"state":{"x":20.296152696853152},"intercept":125877.00337202901,"coefficients":{"x":-1455.839407560279}},{"state":{"x":18.996152629515183},"intercept":141211.8844173505,"coefficients":{"x":-1383.0474448620007}},{"state":{"x":19.69615256329972},"intercept":152901.089353621,"coefficients":{"x":-1313.8950799490517}},{"state":{"x":20.39615249701907},"intercept":163898.63872072197,"coefficients":{"x":-1248.2003329618778}},{"state":{"x":20.39615250340151},"intercept":175074.52819797007,"coefficients":{"x":-1185.7903219119573}},{"state":{"x":18.631098795322284},"intercept":187541.71119884035,"coefficients":{"x":-1126.5008120522423}},{"state":{"x":19.33109872911985},"intercept":196516.74354455707,"coefficients":{"x":-1070.1757773993913}},{"state":{"x":20.031098662884435},"intercept":204955.71301113482,"coefficients":{"x":-1016.6669942188599}},{"state":{"x":20.731098596444596},"intercept":212889.78830520104,"coefficients":{"x":-965.8336486152687}},{"state":{"x":20.396152497062793},"intercept":221297.96783601728,"coefficients":{"x":-917.5419703661668}},{"state":{"x":20.366044762367956},"intercept":228897.68268329476,"coefficients":{"x":-871.6648760281735}},{"state":{"x":21.066044693818288},"intercept":235409.8870118522,"coefficients":{"x":-828.0816362387438}},{"state":{"x":20.73109859630498},"intercept":242343.09014191155,"coefficients":{"x":-786.6775585080782}},{"state":{"x":20.39615249692318},"intercept":248824.742877228,"coefficients":{"x":-747.3436847343021}},{"state":{"x":20.73109859669887},"intercept":254407.05952881897,"coefficients":{"x":-709.9765045648231}},{"state":{"x":20.396152497317072},"intercept":260091.20462573506,"coefficie
nts":{"x":-674.4776834734957}},{"state":{"x":21.400992265941074},"intercept":264542.73931320896,"coefficients":{"x":-640.7538032057735}},{"state":{"x":21.066044699855183},"intercept":269544.737288063,"coefficients":{"x":-608.7161170164252}},{"state":{"x":20.73109860234188},"intercept":274215.47237087914,"coefficients":{"x":-578.280314235441}},{"state":{"x":20.396152502960078},"intercept":278575.5667660003,"coefficients":{"x":-549.3663026399048}},{"state":{"x":19.69615256329972},"intercept":282834.92854005436,"coefficients":{"x":-521.8979917894967}},{"state":{"x":20.39615249701907},"intercept":286108.0855901117,"coefficients":{"x":-495.8030963024927}},{"state":{"x":20.39615249701907},"intercept":289506.84327253187,"coefficients":{"x":-471.0129455807456}},{"state":{"x":20.39615249692318},"intercept":292680.7496088133,"coefficients":{"x":-447.46230238892963}},{"state":{"x":20.73109860149386},"intercept":295501.41086492955,"coefficients":{"x":-425.08919127243973}},{"state":{"x":20.39615250211206},"intercept":298409.12449527514,"coefficients":{"x":-403.83473577973064}},{"state":{"x":19.931098866338363},"intercept":301167.52271123097,"coefficients":{"x":-383.64300316383054}},{"state":{"x":20.63109879997459},"intercept":303309.7365081265,"coefficients":{"x":-364.4608570055104}},{"state":{"x":20.296152696853152},"intercept":305673.4422004137,"coefficients":{"x":-346.2378182158352}},{"state":{"x":18.996152629515183},"intercept":308190.2288849311,"coefficients":{"x":-328.92593168247475}},{"state":{"x":19.69615256329972},"intercept":309896.48867601145,"coefficients":{"x":-312.479639494898}},{"state":{"x":20.39615249701907},"intercept":311491.9415442754,"coefficients":{"x":-296.85566157865543}},{"state":{"x":20.39615250340151},"intercept":313180.8115291217,"coefficients":{"x":-282.0128825544036}},{"state":{"x":18.63109880121957},"intercept":315225.23878207256,"coefficients":{"x":-267.91224292301774}},{"state":{"x":19.331098735017136},"intercept":316485.1687779313,"coefficients":{"x":-254.5166350849833}},{"state":{"x":20.03109866878172},"intercept":317661.3373077713,"coefficients":{"x":-241.79080745547373}},{"state":{"x":20.73109860234188},"intercept":318758.97069243237,"coefficients":{"x":-229.70127104521092}},{"state":{"x":20.396152502960078},"intercept":320008.8239901494,"coefficients":{"x":-218.21621055240067}},{"state":{"x":19.696152569240727},"intercept":321242.7668268227,"coefficients":{"x":-207.30540422329966}},{"state":{"x":20.396152502960078},"intercept":322107.8717951217,"coefficients":{"x":-196.94013803672055}},{"state":{"x":19.69615256329972},"intercept":323175.5843364728,"coefficients":{"x":-187.09313532034855}},{"state":{"x":20.39615249701907},"intercept":323912.7167354594,"coefficients":{"x":-177.73848257589756}},{"state":{"x":20.39615249692318},"intercept":324716.68764761527,"coefficients":{"x":-168.85156246947327}},{"state":{"x":20.731098603038923},"intercept":325407.04594185244,"coefficients":{"x":-160.40898828810276}},{"state":{"x":20.39615250365712},"intercept":326148.95521513966,"coefficients":{"x":-152.38854289786738}},{"state":{"x":18.296152695786002},"intercept":327088.97584794246,"coefficients":{"x":-144.76912031916635}},{"state":{"x":18.996152629515183},"intercept":327564.8308485157,"coefficients":{"x":-137.5306686669601}},{"state":{"x":19.69615256329972},"intercept":328005.67254045623,"coefficients":{"x":-130.65413940607326}},{"state":{"x":20.39615249701907},"intercept":328413.8126174867,"coefficients":{"x":-124.12143644234831}},{"state":{"x":20.39615249701907},"intercept":32
8873.95988360624,"coefficients":{"x":-117.91536862037646}},{"state":{"x":20.396152497379546},"intercept":329297.3525210336,"coefficients":{"x":-112.01960418009386}},{"state":{"x":16.50000033088374},"intercept":330101.1387850309,"coefficients":{"x":-106.41862912849244}},{"state":{"x":17.200000264702506},"intercept":330366.93731465784,"coefficients":{"x":-101.0977031284007}},{"state":{"x":17.900000198523973},"intercept":330611.1977741841,"coefficients":{"x":-96.0428230969224}},{"state":{"x":13.600000132336687},"intercept":331291.6128772135,"coefficients":{"x":-91.2406885180719}},{"state":{"x":14.30000006617005},"intercept":331474.35987223365,"coefficients":{"x":-86.6786602446932}},{"state":{"x":0.0},"intercept":603147.9067535622,"coefficients":{"x":-318249.99983112584}},{"state":{"x":0.0},"intercept":646129.2267041506,"coefficients":{"x":-318249.999985093}},{"state":{"x":0.4053389217246476},"intercept":530947.3457752822,"coefficients":{"x":-318249.9999387091}},{"state":{"x":0.0},"intercept":664388.2911812328,"coefficients":{"x":-318250.00000572397}},{"state":{"x":0.4053389229948096},"intercept":536817.0978676627,"coefficients":{"x":-318249.99994696246}},{"state":{"x":0.0},"intercept":666275.244066221,"coefficients":{"x":-318250.00000876724}},{"state":{"x":0.40533892299503654},"intercept":537423.6977524015,"coefficients":{"x":-318249.9999478684}},{"state":{"x":0.4053389229948096},"intercept":537471.1358545169,"coefficients":{"x":-318249.9999479379}},{"state":{"x":0.0},"intercept":666485.498053405,"coefficients":{"x":-318250.00000910304}},{"state":{"x":0.0},"intercept":666490.4004723884,"coefficients":{"x":-318250.0000091109}},{"state":{"x":0.0},"intercept":666491.9764559825,"coefficients":{"x":-318250.00000911346}},{"state":{"x":0.0},"intercept":666492.4830883816,"coefficients":{"x":-318250.0000091143}},{"state":{"x":0.4053389229948096},"intercept":537493.5337122726,"coefficients":{"x":-318249.9999479706}},{"state":{"x":0.0},"intercept":666492.6983062833,"coefficients":{"x":-318250.0000091146}},{"state":{"x":0.405338920680883},"intercept":537493.6036349086,"coefficients":{"x":-318249.99994797073}},{"state":{"x":1.0096879299886052},"intercept":412196.3842357022,"coefficients":{"x":-101756.61496145831}},{"state":{"x":3.674740091455679},"intercept":334992.5114678251,"coefficients":{"x":-1610.7815690449136}},{"state":{"x":3.339792153580428},"intercept":339785.6456341749,"coefficients":{"x":-2093.4141659444613}},{"state":{"x":8.004844219607834},"intercept":335500.6381519516,"coefficients":{"x":-1323.6961021582697}},{"state":{"x":7.669896262808283},"intercept":338684.86162578245,"coefficients":{"x":-1257.5113141013485}},{"state":{"x":10.334948037338638},"intercept":337958.2988996054,"coefficients":{"x":-1194.635754418941}},{"state":{"x":8.004845131837408},"intercept":342956.99577532423,"coefficients":{"x":-1134.903983623529}},{"state":{"x":7.669896754509488},"intercept":345290.12853432144,"coefficients":{"x":-1078.1588043652641}},{"state":{"x":7.33494837724298},"intercept":347362.85020265414,"coefficients":{"x":-1024.250887928279}},{"state":{"x":1.5889941158755545},"intercept":379693.1065704016,"coefficients":{"x":-33497.274259895916}},{"state":{"x":1.5889941159383802},"intercept":380350.7526417967,"coefficients":{"x":-33806.2614179786}},{"state":{"x":1.2239425656803269},"intercept":404775.88891764655,"coefficients":{"x":-102362.49997856867}},{"state":{"x":1.9239424776521648},"intercept":373593.0648357987,"coefficients":{"x":-33998.124896641406}},{"state":{"x":1.5889941158755547},"intercept":385107.
41978866985,"coefficients":{"x":-33998.12501963757}},{"state":{"x":1.5889941159761194},"intercept":385111.0121778791,"coefficients":{"x":-33998.12501998669}},{"state":{"x":2.6698967273157264},"intercept":362353.26988628996,"coefficients":{"x":-12040.419067296903}},{"state":{"x":5.334948363765107},"intercept":352311.0029152378,"coefficients":{"x":-1907.67945103962}},{"state":{"x":1.759221903366944},"intercept":383228.19220408564,"coefficients":{"x":-36860.92423963604}},{"state":{"x":2.4592218174226512},"intercept":369670.75945150905,"coefficients":{"x":-13255.95930558219}},{"state":{"x":3.159221738005561},"intercept":364399.1518339331,"coefficients":{"x":-5781.053093930732}},{"state":{"x":3.1592217380055603},"intercept":365697.8427969636,"coefficients":{"x":-6661.720564663645}},{"state":{"x":3.159221987920335},"intercept":365965.1432201881,"coefficients":{"x":-4852.423074844738}},{"state":{"x":7.669897000526259},"intercept":351883.3421672906,"coefficients":{"x":-2187.431840117574}},{"state":{"x":10.334948363594984},"intercept":349899.0043170771,"coefficients":{"x":-2018.7068679114032}},{"state":{"x":3.7169527877050212},"intercept":367605.6769466021,"coefficients":{"x":-3125.8592934456638}},{"state":{"x":4.216937064905246},"intercept":368964.6812751569,"coefficients":{"x":-2929.7109109503053}},{"state":{"x":4.916937494253532},"intercept":369928.1946990668,"coefficients":{"x":-2783.225474392341}},{"state":{"x":5.616942270483904},"intercept":370889.9080782569,"coefficients":{"x":-2644.064230956694}},{"state":{"x":8.316946209350835},"intercept":366823.8842176815,"coefficients":{"x":-2511.8610328762416}},{"state":{"x":6.016949102370903},"intercept":375185.5548773162,"coefficients":{"x":-2386.2680082056895}},{"state":{"x":3.716952791707474},"intercept":382943.2934807496,"coefficients":{"x":-2461.303086207088}},{"state":{"x":3.016952847041831},"intercept":386853.29006344127,"coefficients":{"x":-2508.8253286498607}},{"state":{"x":3.7169527917150726},"intercept":387140.40564490284,"coefficients":{"x":-2538.9227162602133}},{"state":{"x":3.499992901698201},"intercept":389609.0060932242,"coefficients":{"x":-2557.9843897937944}},{"state":{"x":4.199992845358814},"intercept":389658.01802343415,"coefficients":{"x":-2430.085242992715}},{"state":{"x":6.899994132334475},"intercept":385127.91569020506,"coefficients":{"x":-2308.580995253654}},{"state":{"x":9.599997425990136},"intercept":381093.6486430251,"coefficients":{"x":-2193.1519546689374}},{"state":{"x":12.299998712991213},"intercept":377516.96720460855,"coefficients":{"x":-2083.494363815369}},{"state":{"x":10.699996079490234},"intercept":382873.2765396028,"coefficients":{"x":-1979.319653596925}},{"state":{"x":11.399996260719991},"intercept":383403.7122691258,"coefficients":{"x":-1880.3536815034972}},{"state":{"x":12.099996486828195},"intercept":383754.2157979705,"coefficients":{"x":-1786.3360066309297}},{"state":{"x":14.799996715861365},"intercept":380547.4159251307,"coefficients":{"x":-1697.019212809997}},{"state":{"x":15.4999969597364},"intercept":380756.5416006068,"coefficients":{"x":-1612.1682583095644}},{"state":{"x":13.199997202996762},"intercept":385418.36033957044,"coefficients":{"x":-1531.559852953383}},{"state":{"x":13.89999744622253},"intercept":385127.45492712624,"coefficients":{"x":-1454.9818673403624}},{"state":{"x":14.599997689423606},"intercept":384732.38878965774,"coefficients":{"x":-1382.2327835543497}},{"state":{"x":12.299997919832919},"intercept":388183.6687169903,"coefficients":{"x":-1313.121167267236}},{"state":{"x":9.999998116988975
},"intercept":391263.04457364324,"coefficients":{"x":-1314.27301885384}},{"state":{"x":7.699998408406434},"intercept":394344.3683223736,"coefficients":{"x":-1361.9246903224341}},{"state":{"x":5.399998652110496},"intercept":397622.37498891377,"coefficients":{"x":-1549.6034411533474}},{"state":{"x":6.0999987159766835},"intercept":396764.6472973443,"coefficients":{"x":-1412.6917629780485}},{"state":{"x":8.799998747783624},"intercept":393051.36605373136,"coefficients":{"x":-1385.4125113663565}},{"state":{"x":6.499999040813737},"intercept":396279.1390451053,"coefficients":{"x":-1420.1294809940964}},{"state":{"x":9.199999084395053},"intercept":392594.4963568697,"coefficients":{"x":-1338.1293156223805}},{"state":{"x":11.899999327945494},"intercept":389106.1142850268,"coefficients":{"x":-1297.1894503122}},{"state":{"x":9.599999511017852},"intercept":392115.029638123,"coefficients":{"x":-1310.1920000640353}},{"state":{"x":12.299999802844466},"intercept":388661.235838346,"coefficients":{"x":-1240.565006308389}},{"state":{"x":1.565658808059855},"intercept":416725.06361852214,"coefficients":{"x":-33855.49937047141}},{"state":{"x":1.2307103978561627},"intercept":435385.4226296052,"coefficients":{"x":-102362.4997760972}},{"state":{"x":1.5656588080553226},"intercept":426988.74060442916,"coefficients":{"x":-33998.1247160056}},{"state":{"x":1.2307103978516303},"intercept":438565.67118297354,"coefficients":{"x":-33998.12511046178}},{"state":{"x":0.5307104865728393},"intercept":548321.2030207545,"coefficients":{"x":-328066.07286973123}},{"state":{"x":1.2307103978522496},"intercept":451641.8287761368,"coefficients":{"x":-105470.92299902735}},{"state":{"x":1.2307104269003755},"intercept":451641.82571248413,"coefficients":{"x":-105470.92296491361}},{"state":{"x":0.5307105175654825},"intercept":566034.4473526883,"coefficients":{"x":-328066.0728961492}},{"state":{"x":1.230710428844893},"intercept":457251.0227642181,"coefficients":{"x":-105470.92297602787}},{"state":{"x":3.3397936263317387},"intercept":402083.2155468225,"coefficients":{"x":-2074.040902420916}},{"state":{"x":3.0048452098439062},"intercept":404267.35810934345,"coefficients":{"x":-2263.5592334818657}},{"state":{"x":5.66989679127422},"intercept":399566.162279098,"coefficients":{"x":-2066.9208450817678}},{"state":{"x":5.3349483952757835},"intercept":401305.0183457929,"coefficients":{"x":-1963.5748162828409}}],"multi_cuts":[]}] \ No newline at end of file 
+[{"risk_set_cuts":[],"node":"1","single_cuts":[{"state":{"x":0.3349461006592544},"intercept":136942.1292824911,"coefficients":{"x":-317616.66663406935}},{"state":{"x":0.0},"intercept":321380.8754720096,"coefficients":{"x":-318249.99997748446}},{"state":{"x":2.6698936309934798},"intercept":19474.869495437088,"coefficients":{"x":-1583.3333351751157}},{"state":{"x":5.3349460650066005},"intercept":34641.27228177437,"coefficients":{"x":-1636.1110825307264}},{"state":{"x":0.8780759603012149},"intercept":131868.37288604362,"coefficients":{"x":-102247.26856962155}},{"state":{"x":2.6698968518611608},"intercept":58489.97777815811,"coefficients":{"x":-2101.4352621647336}},{"state":{"x":5.334948426053756},"intercept":71316.55766177658,"coefficients":{"x":-1996.3635187919353}},{"state":{"x":3.6747675526452874},"intercept":92063.17002039579,"coefficients":{"x":-2215.5150729205866}},{"state":{"x":3.339816664166303},"intercept":109418.87776726973,"coefficients":{"x":-2353.1595503740155}},{"state":{"x":3.0048614057669236},"intercept":126007.4688314642,"coefficients":{"x":-2440.334393857805}},{"state":{"x":2.669906147213017},"intercept":141835.312201918,"coefficients":{"x":-2495.545135462563}},{"state":{"x":5.334950888432223},"intercept":149634.11617119063,"coefficients":{"x":-2370.7678912979463}},{"state":{"x":3.000000135041293},"intercept":168617.1458614656,"coefficients":{"x":-2451.4862939390778}},{"state":{"x":3.700000115732334},"intercept":179749.47511526998,"coefficients":{"x":-2328.9120083857383}},{"state":{"x":4.40000011042157},"intercept":190364.00306792068,"coefficients":{"x":-2212.4664250795668}},{"state":{"x":2.1000001047581245},"intercept":207035.54351329504,"coefficients":{"x":-2351.2287486407067}},{"state":{"x":4.800000093773008},"intercept":211811.3664675948,"coefficients":{"x":-2233.667325837807}},{"state":{"x":7.500000087934583},"intercept":216608.99275518634,"coefficients":{"x":-2121.9839671177815}},{"state":{"x":5.200000071435476},"intercept":231493.7263717324,"coefficients":{"x":-2015.8847804412837}},{"state":{"x":2.90000006532977},"intercept":245381.36146667087,"coefficients":{"x":-2226.7267650633403}},{"state":{"x":3.600000060610418},"intercept":252746.65378319926,"coefficients":{"x":-2115.3904456765963}},{"state":{"x":4.300000055234724},"intercept":259778.93797005236,"coefficients":{"x":-2009.62094109478}},{"state":{"x":2.000000050045745},"intercept":272492.577026523,"coefficients":{"x":-2222.75993324934}},{"state":{"x":4.700000040134498},"intercept":274280.32084274775,"coefficients":{"x":-2111.621951192303}},{"state":{"x":7.400000034365804},"intercept":276225.0333314467,"coefficients":{"x":-2006.040861188034}},{"state":{"x":5.100000017934235},"intercept":287835.24237339315,"coefficients":{"x":-1905.7388255275398}},{"state":{"x":2.8000000118876827},"intercept":298492.79461399803,"coefficients":{"x":-1840.3016470341322}},{"state":{"x":3.500000009160809},"intercept":303088.51104261854,"coefficients":{"x":-1798.8577326206546}},{"state":{"x":1.200000003753896},"intercept":313261.0470107805,"coefficients":{"x":-2152.9716486091497}},{"state":{"x":3.9000000123044636},"intercept":313165.06466098723,"coefficients":{"x":-2045.3230846667354}},{"state":{"x":4.600000007142061},"intercept":317198.6162534085,"coefficients":{"x":-1943.0569399640476}},{"state":{"x":2.300000001282779},"intercept":326670.1814134054,"coefficients":{"x":-2198.634655995268}},{"state":{"x":0.0},"intercept":558110.4760392238,"coefficients":{"x":-318312.9009582543}},{"state":{"x":0.0},"intercept":629980.2976227559,"coefficie
nts":{"x":-318312.90097985027}},{"state":{"x":0.5036698365423513},"intercept":492472.1622827091,"coefficients":{"x":-318312.9009427439}},{"state":{"x":0.0},"intercept":660040.3013253043,"coefficients":{"x":-318312.9009993037}},{"state":{"x":0.0},"intercept":662339.9013992378,"coefficients":{"x":-318312.9010019016}},{"state":{"x":0.5036698378026048},"intercept":502745.3469244192,"coefficients":{"x":-318312.90094520414}},{"state":{"x":0.703669842088614},"intercept":440106.756092526,"coefficients":{"x":-102445.32172738806}},{"state":{"x":1.403669853831507},"intercept":368910.14844124124,"coefficients":{"x":-34087.254521456794}},{"state":{"x":2.103669851057153},"intercept":345432.0180376103,"coefficients":{"x":-12440.532454760008}},{"state":{"x":2.8036698400448126},"intercept":337156.26227494894,"coefficients":{"x":-5585.7366640633045}},{"state":{"x":0.503669837312727},"intercept":513915.1743589961,"coefficients":{"x":-329229.78048806585}},{"state":{"x":0.5036698373127269},"intercept":519193.4789638537,"coefficients":{"x":-329229.78048830136}},{"state":{"x":0.5036698301791788},"intercept":520864.94443734013,"coefficients":{"x":-329229.7804884847}},{"state":{"x":1.4000000645606712},"intercept":379192.0570613955,"coefficients":{"x":-109964.41528192026}},{"state":{"x":2.100000062069861},"intercept":352375.29088952445,"coefficients":{"x":-37540.88141451847}},{"state":{"x":2.800000051084744},"intercept":341980.1454233126,"coefficients":{"x":-14606.762369548125}},{"state":{"x":5.500000048357871},"intercept":327157.7346034403,"coefficients":{"x":-3415.0509496646837}},{"state":{"x":8.20000004230735},"intercept":322780.7119090722,"coefficients":{"x":-3244.2984186050135}},{"state":{"x":10.90000002543702},"intercept":319325.47164720396,"coefficients":{"x":-3082.083509435186}},{"state":{"x":11.600000023062808},"intercept":322566.7367377769,"coefficients":{"x":-2927.979344789459}},{"state":{"x":9.300000006282529},"intercept":334195.4380884846,"coefficients":{"x":-2781.5803912792817}},{"state":{"x":3.6468483769983244},"intercept":353808.6175709205,"coefficients":{"x":-2711.6676094228665}},{"state":{"x":1.346848260525079},"intercept":398992.88732290023,"coefficients":{"x":-106064.79176618376}},{"state":{"x":2.0468481724187764},"intercept":374074.3485823446,"coefficients":{"x":-35395.878785919056}},{"state":{"x":1.3468482788706702},"intercept":399080.0168153781,"coefficients":{"x":-106064.78779446988}},{"state":{"x":2.0468481907643685},"intercept":374175.93907303776,"coefficients":{"x":-35395.87420737613}},{"state":{"x":2.2468485650726504},"intercept":371532.5335584715,"coefficients":{"x":-13017.388300775554}},{"state":{"x":2.946848508331786},"intercept":363921.90508133225,"coefficients":{"x":-5930.8677787337965}},{"state":{"x":3.6468483953316864},"intercept":360931.5453274293,"coefficients":{"x":-4614.911080907044}},{"state":{"x":1.3468482788584406},"intercept":403428.1338813512,"coefficients":{"x":-109839.6533176166}},{"state":{"x":2.046848190752137},"intercept":378874.3797037554,"coefficients":{"x":-37705.33373726634}},{"state":{"x":2.0468481907521365},"intercept":379457.0388280902,"coefficients":{"x":-14862.799698935598}},{"state":{"x":2.0468481724187755},"intercept":379457.0391007189,"coefficients":{"x":-14862.799720317302}},{"state":{"x":1.3468482788584406},"intercept":406849.19506868045,"coefficients":{"x":-40950.498882751264}},{"state":{"x":2.046848190752137},"intercept":379957.7157668303,"coefficients":{"x":-15890.43522255298}},{"state":{"x":2.046848190752137},"intercept":379957.71576674306,"coefficie
nts":{"x":-15890.435221286212}},{"state":{"x":2.046848190752137},"intercept":379957.7157672054,"coefficients":{"x":-15890.435238509317}},{"state":{"x":2.046848190752137},"intercept":379957.7157677949,"coefficients":{"x":-15890.435255822129}},{"state":{"x":2.0468481907521365},"intercept":379957.71576778585,"coefficients":{"x":-15890.435255985869}},{"state":{"x":2.0468481907478586},"intercept":379957.71576855954,"coefficients":{"x":-15890.43527274462}},{"state":{"x":3.7468491366806744},"intercept":361944.4790219862,"coefficients":{"x":-7117.941731483146}},{"state":{"x":4.446849019958376},"intercept":359417.42158123764,"coefficients":{"x":-4665.403427236287}},{"state":{"x":2.146848902242962},"intercept":379731.9501189258,"coefficients":{"x":-15922.420126046924}},{"state":{"x":4.846848887371507},"intercept":358853.13598645304,"coefficients":{"x":-3904.7555461261295}},{"state":{"x":7.546848771271163},"intercept":350875.39300482336,"coefficients":{"x":-3709.51779124066}},{"state":{"x":5.246848659381619},"intercept":361720.47541846894,"coefficients":{"x":-3524.041930016228}},{"state":{"x":5.9468485436814},"intercept":361862.31617371703,"coefficients":{"x":-3347.8398567358377}},{"state":{"x":3.646848424926871},"intercept":371928.9897509217,"coefficients":{"x":-3180.4479230481675}},{"state":{"x":4.346848308453625},"intercept":372322.03816210665,"coefficients":{"x":-3021.4255614809217}},{"state":{"x":2.0468481907643694},"intercept":389722.2419615366,"coefficients":{"x":-14881.227671425671}},{"state":{"x":2.246848546739285},"intercept":386745.9911164369,"coefficients":{"x":-14881.22740689972}},{"state":{"x":2.946848489998425},"intercept":381634.0141023785,"coefficients":{"x":-6625.958282365591}},{"state":{"x":3.6468483769983244},"intercept":378675.72311648633,"coefficients":{"x":-4011.78970910257}},{"state":{"x":1.3468482605250782},"intercept":416992.936808322,"coefficients":{"x":-38151.17735195091}},{"state":{"x":2.046848172418776},"intercept":394866.2521158877,"coefficients":{"x":-14622.00638753965}},{"state":{"x":1.3468482788584408},"intercept":417775.4964014627,"coefficients":{"x":-40683.259178456676}},{"state":{"x":2.0468481907521374},"intercept":395114.06260526745,"coefficients":{"x":-15423.832298522775}},{"state":{"x":2.0468481709364204},"intercept":395114.0629131231,"coefficients":{"x":-15423.832302634608}},{"state":{"x":1.4000008583591912},"intercept":415613.07624392706,"coefficients":{"x":-40683.25900689143}},{"state":{"x":2.100000770229338},"intercept":394294.24614745163,"coefficients":{"x":-15423.832258965653}},{"state":{"x":2.800000689031894},"intercept":385764.7233516142,"coefficients":{"x":-7425.013749103046}},{"state":{"x":3.500000577226352},"intercept":381285.1480698243,"coefficients":{"x":-4892.054500306495}},{"state":{"x":4.200000461209164},"intercept":378350.0893803682,"coefficients":{"x":-4055.0859893779707}},{"state":{"x":6.900000343688283},"intercept":369301.53791898,"coefficients":{"x":-3525.0059222937784}},{"state":{"x":4.600000231781788},"intercept":379108.45556607604,"coefficients":{"x":-3348.7557092450884}},{"state":{"x":7.300000114181516},"intercept":372406.81259023875,"coefficients":{"x":-3181.317949805615}},{"state":{"x":51.49999903826283},"intercept":241306.07992051452,"coefficients":{"x":-3022.2520553937375}},{"state":{"x":47.19999909056194},"intercept":262936.6380995994,"coefficients":{"x":-2871.1394561034376}},{"state":{"x":45.89999913388035},"intercept":274675.2673508149,"coefficients":{"x":-2727.5824868648847}},{"state":{"x":41.59999919619657},"intercept":293373.27
665759786,"coefficients":{"x":-2591.2033664292976}},{"state":{"x":37.29999925068554},"intercept":310531.7714338467,"coefficients":{"x":-2461.6432017737925}},{"state":{"x":35.99999931012351},"intercept":319242.2749511462,"coefficients":{"x":-2338.561045453153}},{"state":{"x":31.699999370009994},"intercept":333987.2722034834,"coefficients":{"x":-2221.6329974338596}},{"state":{"x":27.39999942993933},"intercept":347476.63859481015,"coefficients":{"x":-2110.5513524662}},{"state":{"x":26.099999497611176},"intercept":353784.00334931054,"coefficients":{"x":-2005.0237899799176}},{"state":{"x":21.799999562577185},"intercept":365323.232403437,"coefficients":{"x":-1904.7726062546774}},{"state":{"x":20.499999628722914},"intercept":370234.68072627974,"coefficients":{"x":-1809.5339841557325}},{"state":{"x":16.199999690790506},"intercept":379729.14265913697,"coefficients":{"x":-1719.0572947801213}},{"state":{"x":14.899999762175868},"intercept":383279.5530812406,"coefficients":{"x":-1633.1044409520264}},{"state":{"x":15.599999849907961},"intercept":383252.9958005033,"coefficients":{"x":-1551.449229187728}},{"state":{"x":11.299999937398722},"intercept":390470.57384436246,"coefficients":{"x":-1473.876784222983}},{"state":{"x":1.6639678013547445},"intercept":418985.42819701746,"coefficients":{"x":-36223.84910415766}},{"state":{"x":1.6639678662199204},"intercept":419182.3493176552,"coefficients":{"x":-36365.88947963205}},{"state":{"x":5.004845192397712},"intercept":400021.9200435623,"coefficients":{"x":-1591.3533180873926}},{"state":{"x":7.669896803285454},"intercept":396147.92046746507,"coefficients":{"x":-1474.5847898695058}},{"state":{"x":10.334948401557911},"intercept":392479.46507648623,"coefficients":{"x":-1400.8555765324586}},{"state":{"x":0.0},"intercept":728195.6923745442,"coefficients":{"x":-328815.86503230315}},{"state":{"x":0.6620729522181045},"intercept":523639.6879937273,"coefficients":{"x":-328815.8645862137}},{"state":{"x":1.3620728641168294},"intercept":444688.50162413827,"coefficients":{"x":-105708.35699894854}},{"state":{"x":2.062072777966979},"intercept":418622.3033351215,"coefficients":{"x":-34928.24155374676}},{"state":{"x":2.0620728214821753},"intercept":418819.20612701017,"coefficients":{"x":-35057.645825503285}},{"state":{"x":1.6970212803820686},"intercept":431619.5109092581,"coefficients":{"x":-35057.646361829}},{"state":{"x":2.397021194225778},"intercept":414184.9605069753,"coefficients":{"x":-12555.516603345957}},{"state":{"x":2.062072777964003},"intercept":419297.27051429905,"coefficients":{"x":-35057.645914689834}},{"state":{"x":1.6970213135936614},"intercept":433042.995977262,"coefficients":{"x":-38083.55989673205}},{"state":{"x":2.39702122743737},"intercept":415079.1920707474,"coefficients":{"x":-13643.127263365785}},{"state":{"x":2.062072811175594},"intercept":419644.2070856254,"coefficients":{"x":-13643.12781633642}},{"state":{"x":1.3620728633514623},"intercept":447559.0418182394,"coefficients":{"x":-109078.68026643203}},{"state":{"x":2.062072777201613},"intercept":420201.5460360595,"coefficients":{"x":-36124.91509768983}},{"state":{"x":2.397021194225777},"intercept":415184.0313090985,"coefficients":{"x":-13022.889819607357}},{"state":{"x":2.062072777964002},"intercept":420235.08207383566,"coefficients":{"x":-36124.9192465055}},{"state":{"x":1.6970213087557102},"intercept":434575.6578783764,"coefficients":{"x":-39298.830445427964}},{"state":{"x":2.3970212225994185},"intercept":415593.3422111315,"coefficients":{"x":-14027.962929316931}},{"state":{"x":2.0620728063376434},"interce
pt":420291.98619646876,"coefficients":{"x":-14027.969128501001}},{"state":{"x":3.0096904157774693},"intercept":409000.15555515944,"coefficients":{"x":-6025.521126872658}},{"state":{"x":2.674741997850939},"intercept":412578.15187304444,"coefficients":{"x":-14986.044378473222}},{"state":{"x":5.339793579400482},"intercept":400903.6482018889,"coefficients":{"x":-2050.285235441011}},{"state":{"x":5.004845179416466},"intercept":403027.6250147993,"coefficients":{"x":-2248.5139499376096}},{"state":{"x":4.669896781142732},"intercept":405055.5368059472,"coefficients":{"x":-2374.058880365682}},{"state":{"x":7.334948390647993},"intercept":399981.9253341077,"coefficients":{"x":-2136.9039941044643}},{"state":{"x":6.5999997544395725},"intercept":402558.6634300243,"coefficients":{"x":-2030.0588254397198}},{"state":{"x":9.29999991238231},"intercept":398270.2567201516,"coefficients":{"x":-1928.5558949432059}},{"state":{"x":6.6747433697898355},"intercept":404062.1474663188,"coefficients":{"x":-1854.7520748679794}},{"state":{"x":11.339794658371689},"intercept":396353.5660152701,"coefficients":{"x":-1762.0144870911047}},{"state":{"x":11.004845943882653},"intercept":397605.55075830827,"coefficients":{"x":-1673.913780034843}},{"state":{"x":10.669897229475215},"intercept":398571.7477755933,"coefficients":{"x":-1590.218116175029}},{"state":{"x":10.334948515141459},"intercept":399281.70882117236,"coefficients":{"x":-1537.2107709841644}}],"multi_cuts":[]}] \ No newline at end of file diff --git a/previews/PR810/tutorial/decision_hazard/index.html b/previews/PR810/tutorial/decision_hazard/index.html index d43ec1112..209e1f6bc 100644 --- a/previews/PR810/tutorial/decision_hazard/index.html +++ b/previews/PR810/tutorial/decision_hazard/index.html @@ -74,4 +74,4 @@ end end -train_and_compute_cost(decision_hazard_2)
Cost = $410.0

Now we find that choosing the thermal generation before observing the inflow adds a much more reasonable cost of $10.

Summary

To summarize, the difference between here-and-now and wait-and-see variables is a modeling choice.

To create a here-and-now decision, add it as a state variable to the previous stage

In some cases, you'll need to add an additional "first-stage" problem to enable the model to choose an optimal value for the here-and-now decision variable. You do not need to do this if the first stage is deterministic. You must make sure that the subproblem is feasible for all possible incoming values of the here-and-now decision variable.

+train_and_compute_cost(decision_hazard_2)
Cost = $410.0

Now we find that choosing the thermal generation before observing the inflow adds a much more reasonable cost of $10.

Summary

To summarize, the difference between here-and-now and wait-and-see variables is a modeling choice.

To create a here-and-now decision, add it as a state variable to the previous stage

In some cases, you'll need to add an additional "first-stage" problem to enable the model to choose an optimal value for the here-and-now decision variable. You do not need to do this if the first stage is deterministic. You must make sure that the subproblem is feasible for all possible incoming values of the here-and-now decision variable.

diff --git a/previews/PR810/tutorial/example_milk_producer/1f5e3ff8.svg b/previews/PR810/tutorial/example_milk_producer/1f5e3ff8.svg deleted file mode 100644 index 259f9451d..000000000 (SVG plot removed; markup omitted)
diff --git a/previews/PR810/tutorial/example_milk_producer/904a9542.svg b/previews/PR810/tutorial/example_milk_producer/904a9542.svg new file mode 100644 index 000000000..6753b7c1a (SVG plot added; markup omitted)
diff --git a/previews/PR810/tutorial/example_milk_producer/7850db0e.svg b/previews/PR810/tutorial/example_milk_producer/db517f47.svg similarity index 60% rename from previews/PR810/tutorial/example_milk_producer/7850db0e.svg rename to previews/PR810/tutorial/example_milk_producer/db517f47.svg index f6b099b76..4a30c489e 100644 (SVG plot updated; markup omitted)
diff --git a/previews/PR810/tutorial/example_milk_producer/ea7e99eb.svg b/previews/PR810/tutorial/example_milk_producer/ea7e99eb.svg deleted file mode 100644 index 3ac457cdd..000000000 (SVG plot removed; markup omitted)
diff --git a/previews/PR810/tutorial/example_milk_producer/ff3c9001.svg b/previews/PR810/tutorial/example_milk_producer/ff3c9001.svg new file mode 100644 index 000000000..b23b56f3a (SVG plot added; markup omitted)
diff --git a/previews/PR810/tutorial/example_milk_producer/index.html b/previews/PR810/tutorial/example_milk_producer/index.html index fd0e7894e..0ca54d85a 100644 --- a/previews/PR810/tutorial/example_milk_producer/index.html +++ b/previews/PR810/tutorial/example_milk_producer/index.html @@ -18,18 +18,18 @@ end simulator()
12-element Vector{Float64}:
- 4.0126886398543276
- 4.657954479349959
- 4.717295048414882
- 4.315709546457793
- 3.856409750712577
- 3.690410786101299
- 4.101853155188372
- 4.264627725032998
- 4.38107934574338
- 5.063299457322021
- 4.95404985226321
- 4.903175702834322

It may be helpful to visualize a number of simulations of the price process:

plot = Plots.plot(
+ 4.656953599739544
+ 4.811127185868925
+ 4.275806768492164
+ 4.173321058323062
+ 4.335185941444355
+ 4.779874307816746
+ 4.2494157871780365
+ 4.3233493291165965
+ 4.579623834332571
+ 4.080110179566259
+ 4.077588528187847
+ 3.8321210612515815
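The body of simulator is elided by this diff. For intuition only, a mean-reverting monthly price process of this flavour could be sketched with a hypothetical price_simulator; the AR coefficient, long-run mean, noise scale, starting price, and clamping bounds below are illustrative assumptions, not the tutorial's model:

function price_simulator()
    prices = zeros(12)
    price = 4.5                                         # assumed starting price [$/kg]
    for t in 1:12
        price = 0.9 * price + 0.1 * 6.0 + 0.3 * randn() # revert towards $6/kg
        prices[t] = clamp(price, 3.0, 9.0)              # keep prices in a plausible band
    end
    return prices
end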

It may be helpful to visualize a number of simulations of the price process:

plot = Plots.plot(
     [simulator() for _ in 1:500];
     color = "gray",
     opacity = 0.2,
@@ -38,7 +38,7 @@
     ylabel = "Price [\$/kg]",
     xlims = (1, 12),
     ylims = (3, 9),
-)
Example block output

The prices gradually revert to the mean of $6/kg, and there is high volatility.

We can't incorporate this price process directly into SDDP.jl, but we can fit a SDDP.MarkovianGraph directly from the simulator:

graph = SDDP.MarkovianGraph(simulator; budget = 30, scenarios = 10_000);

Here budget is the number of nodes in the policy graph, and scenarios is the number of simulations to use when estimating the transition probabilities.

The graph contains too many nodes to be shown, but we can plot it:

for ((t, price), edges) in graph.nodes
+)
Example block output

The prices gradually revert to the mean of $6/kg, and there is high volatility.

We can't incorporate this price process directly into SDDP.jl, but we can fit a SDDP.MarkovianGraph directly from the simulator:

graph = SDDP.MarkovianGraph(simulator; budget = 30, scenarios = 10_000);

Here budget is the number of nodes in the policy graph, and scenarios is the number of simulations to use when estimating the transition probabilities.

The graph contains too many nodes to be shown, but we can plot it:

for ((t, price), edges) in graph.nodes
     for ((t′, price′), probability) in edges
         Plots.plot!(
             plot,
@@ -50,7 +50,7 @@
     end
 end
 
-plot
Example block output

That looks okay. Try changing budget and scenarios to see how different Markovian policy graphs can be created.

Model

Now that we have a Markovian graph, we can build the model. See if you can work out how we arrived at this formulation by reading the background description. Do all the variables and constraints make sense?

model = SDDP.PolicyGraph(
+plot
Example block output

That looks okay. Try changing budget and scenarios to see how different Markovian policy graphs can be created.
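For example (the budgets and scenario counts here are only illustrative):

coarse_graph = SDDP.MarkovianGraph(simulator; budget = 10, scenarios = 1_000)
fine_graph = SDDP.MarkovianGraph(simulator; budget = 60, scenarios = 20_000)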

Model

Now that we have a Markovian graph, we can build the model. See if you can work out how we arrived at this formulation by reading the background description. Do all the variables and constraints make sense?

model = SDDP.PolicyGraph(
     graph;
     sense = :Max,
     upper_bound = 1e2,
@@ -111,7 +111,7 @@
     end
     return
 end
A policy graph with 30 nodes.
- Node indices: (1, 4.576449346953413), ..., (12, 7.667272652739522)
+ Node indices: (1, 4.578812406195716), ..., (12, 7.615134118569353)
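Most of the subproblem definition is elided by the diff. To show only the pattern of how a node's price enters the objective, here is a deliberately simplified sketch; the variables, bounds, dynamics, and noise support are placeholders, not the tutorial's model:

sketch = SDDP.PolicyGraph(
    graph;
    sense = :Max,
    upper_bound = 1e2,
    optimizer = HiGHS.Optimizer,
) do sp, node
    t, price = node                              # the node carries (stage, price)
    @variable(sp, 0 <= x_stock <= 10, SDDP.State, initial_value = 0)
    @variable(sp, 0 <= u_spot_sell <= 5)
    @constraint(sp, x_stock.out == x_stock.in + 1 - u_spot_sell)   # toy stock dynamics
    SDDP.parameterize(sp, [-0.1, 0.0, 0.1]) do ω
        # The sampled perturbation ω shifts the node's price in the stage objective.
        @stageobjective(sp, (price + ω) * u_spot_sell)
        return
    end
end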
 

Training a policy

Now that we have a model, we can train a policy. The SDDP.SimulatorSamplingScheme is used in the forward pass. It generates an out-of-sample sequence of prices using simulator and traverses the closest sequence of nodes in the policy graph. When calling SDDP.parameterize for each subproblem, it uses the new out-of-sample price instead of the price associated with the Markov node.

SDDP.train(
     model;
     time_limit = 20,
@@ -123,7 +123,7 @@
 problem
   nodes           : 30
   state variables : 5
-  scenarios       : 8.04688e+11
+  scenarios       : 1.00781e+12
   existing cuts   : false
 options
   solver          : serial mode
@@ -142,31 +142,31 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1  -2.446694e+01  6.206461e+01  1.255160e+00       162   1
-        58   1.282673e+01  7.838999e+00  2.271558e+00      9396   1
-       108   8.447310e+00  7.830674e+00  3.283115e+00     17496   1
-       150   9.088225e+00  7.828798e+00  4.290896e+00     24300   1
-       190   1.057515e+01  7.827192e+00  5.306258e+00     30780   1
-       227   8.600332e+00  7.826181e+00  6.324924e+00     36774   1
-       261   9.488365e+00  7.825666e+00  7.331886e+00     42282   1
-       294   8.636328e+00  7.824883e+00  8.337678e+00     47628   1
-       326   8.814296e+00  7.824712e+00  9.359044e+00     52812   1
-       465   8.803878e+00  7.824560e+00  1.437870e+01     75330   1
-       589   8.273399e+00  7.824390e+00  1.937982e+01     95418   1
-       604   9.951259e+00  7.824390e+00  2.002007e+01     97848   1
+         1  -4.199992e+01  5.821554e+01  1.260870e+00       162   1
+        60   1.033168e+01  7.916707e+00  2.262791e+00      9720   1
+       107   9.111379e+00  7.910481e+00  3.275479e+00     17334   1
+       152   7.959466e+00  7.904751e+00  4.281190e+00     24624   1
+       194   6.940984e+00  7.904578e+00  5.281531e+00     31428   1
+       230   8.432877e+00  7.904218e+00  6.304367e+00     37260   1
+       266   9.657901e+00  7.903732e+00  7.321733e+00     43092   1
+       299   9.353248e+00  7.903401e+00  8.336339e+00     48438   1
+       332   8.753305e+00  7.903401e+00  9.357229e+00     53784   1
+       469   9.297445e+00  7.902829e+00  1.439343e+01     75978   1
+       589   8.913457e+00  7.902545e+00  1.939896e+01     95418   1
+       602   1.036420e+01  7.902495e+00  2.001406e+01     97524   1
 -------------------------------------------------------------------
 status         : time_limit
-total time (s) : 2.002007e+01
-total solves   : 97848
-best bound     :  7.824390e+00
-simulation ci  :  8.882771e+00 ± 2.950991e-01
+total time (s) : 2.001406e+01
+total solves   : 97524
+best bound     :  7.902495e+00
+simulation ci  :  8.773997e+00 ± 3.705325e-01
 numeric issues : 0
 -------------------------------------------------------------------
Warning

We're intentionally terminating the training early so that the documentation doesn't take too long to build. If you run this example locally, increase the time limit.
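For reference, the training call described above takes roughly the following form; this is a sketch that assumes the sampling_scheme keyword argument of SDDP.train, with all other options left at their defaults:

SDDP.train(
    model;
    time_limit = 20,
    sampling_scheme = SDDP.SimulatorSamplingScheme(simulator),
)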

Simulating the policy

When simulating the policy, we can also use the SDDP.SimulatorSamplingScheme.

simulations = SDDP.simulate(
     model,
     200,
     Symbol[:x_stock, :u_forward_sell, :u_spot_sell, :u_spot_buy];
     sampling_scheme = SDDP.SimulatorSamplingScheme(simulator),
-);

To show how the sampling scheme uses the new out-of-sample price instead of the price associated with the Markov node, compare the index of the Markov state visited in stage 12 of the first simulation:

simulations[1][12][:node_index]
(12, 5.966151332266203)

to the realization of the noise (price, ω) passed to SDDP.parameterize:

simulations[1][12][:noise_term]
(5.902693885100739, 0.175)

Visualizing the policy

Finally, we can plot the policy to gain insight (although note that we terminated the training early, so we should re-train the policy for more iterations before making too many judgements).

plot = Plots.plot(
+);

To show how the sampling scheme uses the new out-of-sample price instead of the price associated with the Markov node, compare the index of the Markov state visited in stage 12 of the first simulation:

simulations[1][12][:node_index]
(12, 5.093379670776335)

to the realization of the noise (price, ω) passed to SDDP.parameterize:

simulations[1][12][:noise_term]
(4.889818354336574, 0.1)

Visualizing the policy

Finally, we can plot the policy to gain insight (although note that we terminated the training early, so we should re-train the policy for more iterations before making too many judgements).

plot = Plots.plot(
     SDDP.publication_plot(simulations; title = "x_stock.out") do data
         return data[:x_stock].out
     end,
@@ -180,4 +180,4 @@
         return data[:u_spot_sell]
     end;
     layout = (2, 2),
-)
Example block output

Next steps

+)Example block output

Next steps

diff --git a/previews/PR810/tutorial/example_newsvendor/038b0b3e.svg b/previews/PR810/tutorial/example_newsvendor/038b0b3e.svg deleted file mode 100644 index 81a8e9db1..000000000 (SVG plot removed; markup omitted)
diff --git a/previews/PR810/tutorial/example_newsvendor/41e9a93f.svg b/previews/PR810/tutorial/example_newsvendor/41e9a93f.svg new file mode 100644 index 000000000..7333d5e9f (SVG plot added; markup omitted)
diff --git a/previews/PR810/tutorial/example_newsvendor/826cb904.svg b/previews/PR810/tutorial/example_newsvendor/826cb904.svg new file mode 100644 index 000000000..70b376037 (SVG plot added; markup omitted)
diff --git a/previews/PR810/tutorial/example_newsvendor/955999b3.svg b/previews/PR810/tutorial/example_newsvendor/955999b3.svg deleted file mode 100644 index ea8cacf89..000000000 (SVG plot removed; markup omitted)
diff --git a/previews/PR810/tutorial/example_newsvendor/index.html b/previews/PR810/tutorial/example_newsvendor/index.html index fa2032eb0..3dcdb0d58 100644 --- a/previews/PR810/tutorial/example_newsvendor/index.html +++ b/previews/PR810/tutorial/example_newsvendor/index.html @@ -15,7 +15,7 @@ d = sort!(rand(D, N)); Ω = 1:N P = fill(1 / N, N); -StatsPlots.histogram(d; bins = 20, label = "", xlabel = "Demand")Example block output

Kelley's cutting plane algorithm

Kelley's cutting plane algorithm is an iterative method for maximizing concave functions. Given a concave function $f(x)$, Kelley's constructs an outer-approximation of the function at the minimum by a set of first-order Taylor series approximations (called cuts) constructed at a set of points $k = 1,\ldots,K$:

\[\begin{aligned} +StatsPlots.histogram(d; bins = 20, label = "", xlabel = "Demand")Example block output

Kelley's cutting plane algorithm

Kelley's cutting plane algorithm is an iterative method for maximizing concave functions. Given a concave function $f(x)$, Kelley's method constructs an outer-approximation of the function from above using a set of first-order Taylor series approximations (called cuts) constructed at a set of points $k = 1,\ldots,K$:

\[\begin{aligned}
f^K = \max\limits_{\theta \in \mathbb{R}, x \in \mathbb{R}^N} \;\; & \theta\\
& \theta \le f(x_k) + \nabla f(x_k)^\top (x - x_k),\quad k=1,\ldots,K\\
& \theta \le M,
\end{aligned}\]

@@ -168,55 +168,55 @@
     println(" Added cut: $c")
 end

Solving iteration k = 1
   xᵏ = -0.0
-  V̅ = 1214.775985522808
+  V̅ = 1183.4099272437586
   V̲ = 0.0
   Added cut: -4.99999999999999 x_out + θ ≤ 0
 Solving iteration k = 2
-  xᵏ = 242.95519710456207
-  V̅ = 728.8655913136838
-  V̲ = 516.1343873021858
-  Added cut: 0.10000000000000007 x_out + θ ≤ 1026.3403012217661
+  xᵏ = 236.6819854487522
+  V̅ = 710.0459563462542
+  V̲ = 514.2764276921151
+  Added cut: 0.10000000000000007 x_out + θ ≤ 1011.3085971344942
 Solving iteration k = 3
-  xᵏ = 201.24319631799375
-  V̅ = 603.7295889539793
-  V̲ = 560.2095409217317
-  Added cut: -2.4499999999999993 x_out + θ ≤ 469.6501025786345
+  xᵏ = 198.29580335970513
+  V̅ = 594.8874100791135
+  V̲ = 554.381496171719
+  Added cut: -2.602999999999999 x_out + θ ≤ 434.8091267458148
 Solving iteration k = 4
-  xᵏ = 218.30988182083604
-  V̅ = 567.8895493980106
-  V̲ = 554.5996673199879
-  Added cut: -1.1750000000000005 x_out + θ ≤ 734.7053198221789
+  xᵏ = 213.28134309607097
+  V̅ = 563.4177766327452
+  V̲ = 552.1519418105354
+  Added cut: -1.1240000000000003 x_out + θ ≤ 738.9863983626929
 Solving iteration k = 5
-  xᵏ = 207.88644489689776
-  V̅ = 563.1990027822385
-  V̲ = 560.4666533872098
-  Added cut: -1.7360000000000009 x_out + θ ≤ 615.348674839989
+  xᵏ = 205.6641457855837
+  V̅ = 558.8246066545216
+  V̲ = 556.3188725894772
+  Added cut: -1.940000000000001 x_out + θ ≤ 568.6587213366125
 Solving iteration k = 6
-  xᵏ = 204.05962501590298
-  V̅ = 561.4769338357908
-  V̲ = 560.8548463285708
-  Added cut: -2.093000000000001 x_out + θ ≤ 541.877301202093
+  xᵏ = 201.88475805550232
+  V̅ = 556.5456358532826
+  V̲ = 555.8917352574315
+  Added cut: -2.297 x_out + θ ≤ 495.9319621149459
 Solving iteration k = 7
-  xᵏ = 205.80216705292978
-  V̅ = 561.0169027380157
-  V̲ = 560.8948275679345
-  Added cut: -1.8890000000000011 x_out + θ ≤ 583.73886811081
+  xᵏ = 203.71641238562103
+  V̅ = 556.4357365934754
+  V̲ = 556.2616638626839
+  Added cut: -2.1950000000000007 x_out + θ ≤ 516.5369634474899
 Solving iteration k = 8
-  xᵏ = 205.2037593564549
-  V̅ = 560.9612508222434
-  V̲ = 560.9192588302107
-  Added cut: -1.9910000000000012 x_out + θ ≤ 562.7660926644191
+  xᵏ = 204.39905054557917
+  V̅ = 556.394778303878
+  V̲ = 556.364563648315
+  Added cut: -2.042000000000001 x_out + θ ≤ 547.7798035254027
 Solving iteration k = 9
-  xᵏ = 204.79207316006435
-  V̅ = 560.9229640059791
-  V̲ = 560.9167550638547
-  Added cut: -2.042000000000001 x_out + θ ≤ 552.3154879911298
+  xᵏ = 204.69527265891713
+  V̅ = 556.3770049770777
+  V̲ = 556.3712005785063
+  Added cut: -1.9910000000000012 x_out + θ ≤ 558.2134580324379
 Solving iteration k = 10
-  xᵏ = 204.91381712330957
-  V̅ = 560.9218683103096
-  V̲ = 560.9218683103073
+  xᵏ = 204.5814609222857
+  V̅ = 556.3722248841389
+  V̲ = 556.3722248841384
 Terminating with near-optimal solution
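The implementation that produced the log above is defined earlier in the tutorial and is elided by this diff. As a self-contained illustration of the same idea, a toy Kelley loop for a one-dimensional concave function might look like the following; the function, its gradient, the bound M = 10, and the tolerance are assumptions for the sketch, not the tutorial's data:

using JuMP
import HiGHS

f(x) = 4.0 - (x - 2.0)^2   # a concave toy objective (an assumption)
∇f(x) = -2.0 * (x - 2.0)   # its gradient

kelley = Model(HiGHS.Optimizer)
set_silent(kelley)
@variable(kelley, -10 <= x <= 10)
@variable(kelley, θ <= 10)          # θ ≤ M, with M = 10 chosen to bound f
@objective(kelley, Max, θ)
for k in 1:20
    optimize!(kelley)
    xᵏ, V̅ = value(x), objective_value(kelley)  # V̅: upper bound from the cuts
    V̲ = f(xᵏ)                                   # V̲: lower bound from evaluating f
    if V̅ - V̲ < 1e-6
        break
    end
    @constraint(kelley, θ <= f(xᵏ) + ∇f(xᵏ) * (x - xᵏ))  # add a new cut at xᵏ
end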

To get the first-stage solution, we do:

optimize!(model)
-xᵏ = value(x_out)
204.91381712330957

To compute a second-stage solution, we do:

solve_second_stage(xᵏ, 170.0)
(V = 846.508618287669, λ = -0.1, x = 34.91381712330957, u = 170.0)

Policy Graph

Now let's see how we can formulate and train a policy for the two-stage newsvendor problem using SDDP.jl. Under the hood, SDDP.jl implements the exact algorithm that we just wrote by hand.

model = SDDP.LinearPolicyGraph(;
+xᵏ = value(x_out)
204.5814609222857

To compute a second-stage solution, we do:

solve_second_stage(xᵏ, 170.0)
(V = 846.5418539077714, λ = -0.1, x = 34.581460922285686, u = 170.0)
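The solve_second_stage function is defined earlier in the tutorial and is elided by this diff. As a rough, self-contained sketch of what such a function could do, assuming a selling price of 5 and a small per-unit cost on leftover stock (both informal inferences from the outputs above, not the tutorial's exact data):

using JuMP
import HiGHS

function solve_second_stage_sketch(x̄, d; price = 5.0, leftover_cost = 0.1)
    model = Model(HiGHS.Optimizer)
    set_silent(model)
    @variable(model, x_in)
    @variable(model, 0 <= u <= d)    # units sold cannot exceed demand
    @variable(model, x_out >= 0)     # leftover units carried out of the stage
    fix(x_in, x̄)
    @constraint(model, x_out == x_in - u)
    @objective(model, Max, price * u - leftover_cost * x_out)
    optimize!(model)
    return (
        V = objective_value(model),
        λ = reduced_cost(x_in),      # marginal value of an extra unit of stock
        x = value(x_out),
        u = value(u),
    )
end

solve_second_stage_sketch(204.58, 170.0)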

Policy Graph

Now let's see how we can formulate and train a policy for the two-stage newsvendor problem using SDDP.jl. Under the hood, SDDP.jl implements the exact algorithm that we just wrote by hand.

model = SDDP.LinearPolicyGraph(;
     stages = 2,
     sense = :Max,
     upper_bound = 5 * maximum(d),  # The `M` in θ <= M
@@ -266,87 +266,87 @@
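The remainder of this model definition and the call to SDDP.train are elided by the diff hunk above. For orientation, here is a minimal, self-contained sketch of how a two-stage maximization newsvendor can be written with SDDP.jl; the subproblem body, the production cost of 2, and the selling price of 5 are illustrative assumptions rather than the tutorial's exact formulation:

sketch = SDDP.LinearPolicyGraph(;
    stages = 2,
    sense = :Max,
    upper_bound = 5 * maximum(d),
    optimizer = HiGHS.Optimizer,
) do subproblem, node
    @variable(subproblem, x >= 0, SDDP.State, initial_value = 0)
    if node == 1
        @variable(subproblem, u_make >= 0)
        @constraint(subproblem, x.out == x.in + u_make)
        @stageobjective(subproblem, -2 * u_make)    ## assumed unit production cost
    else
        @variable(subproblem, u_sell >= 0)
        @constraint(subproblem, u_sell <= x.in)
        @constraint(subproblem, x.out == x.in - u_sell)
        SDDP.parameterize(subproblem, d, P) do ω
            set_upper_bound(u_sell, ω)              ## demand realization caps sales
        end
        @stageobjective(subproblem, 5 * u_sell)     ## assumed unit selling price
    end
end
SDDP.train(sketch; iteration_limit = 40)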
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   0.000000e+00  7.288656e+02  6.317139e-03       103   1
-         2   6.329607e+02  6.037296e+02  2.274394e-02       406   1
-         3   5.685322e+02  5.678895e+02  2.728415e-02       509   1
-         4   5.056097e+02  5.631990e+02  3.185511e-02       612   1
-         5   6.236593e+02  5.614769e+02  3.642607e-02       715   1
-         6   6.121789e+02  5.610169e+02  4.096293e-02       818   1
-         7   4.662857e+02  5.609613e+02  4.544497e-02       921   1
-         8   5.999526e+02  5.609230e+02  5.004311e-02      1024   1
-         9   6.143762e+02  5.609219e+02  5.454993e-02      1127   1
-        10   5.006445e+02  5.609219e+02  5.924010e-02      1230   1
-        11   4.564595e+02  5.609219e+02  6.395102e-02      1333   1
-        12   5.025865e+02  5.609219e+02  6.868196e-02      1436   1
-        13   5.290270e+02  5.609219e+02  7.337999e-02      1539   1
-        14   6.147415e+02  5.609219e+02  7.820606e-02      1642   1
-        15   6.147415e+02  5.609219e+02  8.290410e-02      1745   1
-        16   6.147415e+02  5.609219e+02  8.758092e-02      1848   1
-        17   6.147415e+02  5.609219e+02  9.228015e-02      1951   1
-        18   6.147415e+02  5.609219e+02  9.696007e-02      2054   1
-        19   6.147415e+02  5.609219e+02  1.017129e-01      2157   1
-        20   3.940125e+02  5.609219e+02  1.064010e-01      2260   1
-        21   4.621623e+02  5.609219e+02  1.256771e-01      2563   1
-        22   4.906642e+02  5.609219e+02  1.304090e-01      2666   1
-        23   6.134997e+02  5.609219e+02  1.352000e-01      2769   1
-        24   3.803713e+02  5.609219e+02  1.398981e-01      2872   1
-        25   4.621623e+02  5.609219e+02  1.446111e-01      2975   1
-        26   6.147415e+02  5.609219e+02  1.493549e-01      3078   1
-        27   5.871833e+02  5.609219e+02  1.540511e-01      3181   1
-        28   6.147415e+02  5.609219e+02  1.588180e-01      3284   1
-        29   6.147415e+02  5.609219e+02  1.635649e-01      3387   1
-        30   6.147415e+02  5.609219e+02  1.683011e-01      3490   1
-        31   6.147415e+02  5.609219e+02  1.730430e-01      3593   1
-        32   6.147415e+02  5.609219e+02  1.778190e-01      3696   1
-        33   5.346051e+02  5.609219e+02  1.825991e-01      3799   1
-        34   6.134997e+02  5.609219e+02  1.874511e-01      3902   1
-        35   6.147415e+02  5.609219e+02  1.922271e-01      4005   1
-        36   6.147415e+02  5.609219e+02  1.971161e-01      4108   1
-        37   6.049568e+02  5.609219e+02  2.018850e-01      4211   1
-        38   3.957895e+02  5.609219e+02  2.067280e-01      4314   1
-        39   4.592685e+02  5.609219e+02  2.115819e-01      4417   1
-        40   6.147415e+02  5.609219e+02  2.163730e-01      4520   1
+         1   0.000000e+00  7.100460e+02  6.299019e-03       103   1
+         2   5.646406e+02  5.948874e+02  2.312112e-02       406   1
+         3   4.476627e+02  5.634178e+02  2.772212e-02       509   1
+         4   6.398440e+02  5.588246e+02  3.233790e-02       612   1
+         5   5.204798e+02  5.565456e+02  3.694201e-02       715   1
+         6   5.263013e+02  5.564357e+02  4.145813e-02       818   1
+         7   6.111492e+02  5.563948e+02  4.591513e-02       921   1
+         8   6.130155e+02  5.563770e+02  5.047011e-02      1024   1
+         9   6.140858e+02  5.563722e+02  5.497694e-02      1127   1
+        10   5.927906e+02  5.563722e+02  5.946493e-02      1230   1
+        11   5.102792e+02  5.563722e+02  6.396890e-02      1333   1
+        12   6.137444e+02  5.563722e+02  6.880713e-02      1436   1
+        13   4.954641e+02  5.563722e+02  7.348204e-02      1539   1
+        14   6.137444e+02  5.563722e+02  7.818413e-02      1642   1
+        15   5.150817e+02  5.563722e+02  8.286810e-02      1745   1
+        16   4.964032e+02  5.563722e+02  8.755493e-02      1848   1
+        17   6.011079e+02  5.563722e+02  9.227395e-02      1951   1
+        18   4.753277e+02  5.563722e+02  9.699798e-02      2054   1
+        19   6.137444e+02  5.563722e+02  1.017599e-01      2157   1
+        20   6.137444e+02  5.563722e+02  1.064529e-01      2260   1
+        21   5.861319e+02  5.563722e+02  1.258860e-01      2563   1
+        22   6.137444e+02  5.563722e+02  1.306260e-01      2666   1
+        23   6.137444e+02  5.563722e+02  1.353610e-01      2769   1
+        24   6.137444e+02  5.563722e+02  1.400399e-01      2872   1
+        25   6.137444e+02  5.563722e+02  2.465379e-01      2975   1
+        26   6.137444e+02  5.563722e+02  2.515249e-01      3078   1
+        27   5.508483e+02  5.563722e+02  2.564220e-01      3181   1
+        28   4.036025e+02  5.563722e+02  2.613389e-01      3284   1
+        29   5.180160e+02  5.563722e+02  2.662160e-01      3387   1
+        30   5.872052e+02  5.563722e+02  2.712021e-01      3490   1
+        31   5.036519e+02  5.563722e+02  2.761960e-01      3593   1
+        32   6.137444e+02  5.563722e+02  2.812769e-01      3696   1
+        33   6.137444e+02  5.563722e+02  2.862799e-01      3799   1
+        34   4.753277e+02  5.563722e+02  2.913539e-01      3902   1
+        35   6.137444e+02  5.563722e+02  2.964051e-01      4005   1
+        36   5.227535e+02  5.563722e+02  3.013721e-01      4108   1
+        37   4.626982e+02  5.563722e+02  3.063509e-01      4211   1
+        38   6.137444e+02  5.563722e+02  3.113050e-01      4314   1
+        39   6.137444e+02  5.563722e+02  3.162961e-01      4417   1
+        40   6.137444e+02  5.563722e+02  3.213129e-01      4520   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 2.163730e-01
+total time (s) : 3.213129e-01
 total solves   : 4520
-best bound     :  5.609219e+02
-simulation ci  :  5.457892e+02 ± 3.603988e+01
+best bound     :  5.563722e+02
+simulation ci  :  5.510009e+02 ± 3.348500e+01
 numeric issues : 0
--------------------------------------------------------------------

One way to query the optimal policy is with SDDP.DecisionRule:

first_stage_rule = SDDP.DecisionRule(model; node = 1)
A decision rule for node 1
solution_1 = SDDP.evaluate(first_stage_rule; incoming_state = Dict(:x => 0.0))
(stage_objective = -409.8276342466294, outgoing_state = Dict(:x => 204.9138171233147), controls = Dict{Any, Any}())

Here's the second stage:

second_stage_rule = SDDP.DecisionRule(model; node = 2)
+-------------------------------------------------------------------

One way to query the optimal policy is with SDDP.DecisionRule:

first_stage_rule = SDDP.DecisionRule(model; node = 1)
A decision rule for node 1
solution_1 = SDDP.evaluate(first_stage_rule; incoming_state = Dict(:x => 0.0))
(stage_objective = -409.16292184449867, outgoing_state = Dict(:x => 204.58146092224933), controls = Dict{Any, Any}())

Here's the second stage:

second_stage_rule = SDDP.DecisionRule(model; node = 2)
 solution = SDDP.evaluate(
     second_stage_rule;
     incoming_state = Dict(:x => solution_1.outgoing_state[:x]),
     noise = 170.0,  # A value of d[ω], can be out-of-sample.
     controls_to_record = [:u_sell],
-)
(stage_objective = 846.5086182876686, outgoing_state = Dict(:x => 34.913817123314686), controls = Dict(:u_sell => 170.0))

Simulation

Querying the decision rules is tedious. It's often more useful to simulate the policy:

simulations = SDDP.simulate(
+)
(stage_objective = 846.541853907775, outgoing_state = Dict(:x => 34.581460922249335), controls = Dict(:u_sell => 170.0))

Simulation

Querying the decision rules is tedious. It's often more useful to simulate the policy:

simulations = SDDP.simulate(
     model,
     10,  #= number of replications =#
     [:x, :u_sell, :u_make];  #= variables to record =#
     skip_undefined_variables = true,
 );

simulations is a vector with 10 elements

length(simulations)
10

and each element is a vector with two elements (one for each stage)

length(simulations[1])
2

The first stage contains:

simulations[1][1]
Dict{Symbol, Any} with 9 entries:
-  :u_make          => 204.914
-  :bellman_term    => 970.75
+  :u_make          => 204.581
+  :bellman_term    => 965.535
   :noise_term      => nothing
   :node_index      => 1
-  :stage_objective => -409.828
+  :stage_objective => -409.163
   :objective_state => nothing
   :u_sell          => NaN
   :belief          => Dict(1=>1.0)
-  :x               => State{Float64}(0.0, 204.914)

The second stage contains:

simulations[1][2]
Dict{Symbol, Any} with 9 entries:
+  :x               => State{Float64}(0.0, 204.581)

The second stage contains:

simulations[1][2]
Dict{Symbol, Any} with 9 entries:
   :u_make          => NaN
   :bellman_term    => 0.0
-  :noise_term      => 168.241
+  :noise_term      => 186.74
   :node_index      => 2
-  :stage_objective => 837.537
+  :stage_objective => 931.916
   :objective_state => nothing
-  :u_sell          => 168.241
+  :u_sell          => 186.74
   :belief          => Dict(2=>1.0)
-  :x               => State{Float64}(204.914, 36.673)

We can compute aggregated statistics across the simulations:

objectives = map(simulations) do simulation
+  :x               => State{Float64}(204.581, 17.8414)

We can compute aggregated statistics across the simulations:

objectives = map(simulations) do simulation
     return sum(data[:stage_objective] for data in simulation)
 end
 μ, t = SDDP.confidence_interval(objectives)
-println("Simulation ci : $μ ± $t")
Simulation ci : 560.7584157098962 ± 50.83140120640501

Risk aversion revisited

SDDP.jl contains a number of risk measures. One example is:

0.5 * SDDP.Expectation() + 0.5 * SDDP.WorstCase()
A convex combination of 0.5 * SDDP.Expectation() + 0.5 * SDDP.WorstCase()

You can construct a risk-averse policy by passing a risk measure to the risk_measure keyword argument of SDDP.train.

We can explore how the optimal decision changes with risk by creating a function:

function solve_newsvendor(risk_measure::SDDP.AbstractRiskMeasure)
+println("Simulation ci : $μ ± $t")
Simulation ci : 559.3951925983896 ± 26.604541867286123

Risk aversion revisited

SDDP.jl contains a number of risk measures. One example is:

0.5 * SDDP.Expectation() + 0.5 * SDDP.WorstCase()
A convex combination of 0.5 * SDDP.Expectation() + 0.5 * SDDP.WorstCase()

You can construct a risk-averse policy by passing a risk measure to the risk_measure keyword argument of SDDP.train.
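For example, a usage sketch that re-trains the model defined above (the weights and iteration limit are arbitrary choices):

SDDP.train(
    model;
    risk_measure = 0.5 * SDDP.Expectation() + 0.5 * SDDP.WorstCase(),
    iteration_limit = 40,
)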

We can explore how the optimal decision changes with risk by creating a function:

function solve_newsvendor(risk_measure::SDDP.AbstractRiskMeasure)
     model = SDDP.LinearPolicyGraph(;
         stages = 2,
         sense = :Max,
@@ -372,7 +372,7 @@
     first_stage_rule = SDDP.DecisionRule(model; node = 1)
     solution = SDDP.evaluate(first_stage_rule; incoming_state = Dict(:x => 0.0))
     return solution.outgoing_state[:x]
-end
solve_newsvendor (generic function with 1 method)

Now we can see how many units a decision maker would order using CVaR:

solve_newsvendor(SDDP.CVaR(0.4))
182.92265592428507

as well as a decision-maker who cares only about the worst-case outcome:

solve_newsvendor(SDDP.WorstCase())
158.95888320101494

In general, the decision-maker will be somewhere between the two extremes. The SDDP.Entropic risk measure has a single parameter that lets us explore the space of policies between the two extremes. When the parameter is small, the measure acts like SDDP.Expectation, and when it is large, it acts like SDDP.WorstCase.

Here is what we get if we solve our problem multiple times for different values of the risk aversion parameter $\gamma$:

Γ = [10^i for i in -4:0.5:1]
+end
solve_newsvendor (generic function with 1 method)

Now we can see how many units a decision maker would order using CVaR:

solve_newsvendor(SDDP.CVaR(0.4))
184.29416960495013

as well as a decision-maker who cares only about the worst-case outcome:

solve_newsvendor(SDDP.WorstCase())
161.0594977945115

In general, the decision-maker will be somewhere between the two extremes. The SDDP.Entropic risk measure has a single parameter that lets us explore the space of policies between the two extremes. When the parameter is small, the measure acts like SDDP.Expectation, and when it is large, it acts like SDDP.WorstCase.
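For example, a single illustrative call (the parameter value 0.1 is arbitrary):

solve_newsvendor(SDDP.Entropic(0.1))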

Here is what we get if we solve our problem multiple times for different values of the risk aversion parameter $\gamma$:

Γ = [10^i for i in -4:0.5:1]
 buy = [solve_newsvendor(SDDP.Entropic(γ)) for γ in Γ]
 Plots.plot(
     Γ,
@@ -381,4 +381,4 @@
     xlabel = "Risk aversion parameter γ",
     ylabel = "Number of pies to make",
     legend = false,
-)
Example block output

Things to try

There are a number of things you can try next:

+)
Example block output

Things to try

There are a number of things you can try next:

diff --git a/previews/PR810/tutorial/example_reservoir/3e8ca7db.svg b/previews/PR810/tutorial/example_reservoir/228c2edf.svg
similarity index 85%
rename from previews/PR810/tutorial/example_reservoir/3e8ca7db.svg
rename to previews/PR810/tutorial/example_reservoir/228c2edf.svg
(SVG plot data omitted)
diff --git a/previews/PR810/tutorial/example_reservoir/2f8469c5.svg b/previews/PR810/tutorial/example_reservoir/2ce02528.svg
similarity index 85%
rename from previews/PR810/tutorial/example_reservoir/2f8469c5.svg
rename to previews/PR810/tutorial/example_reservoir/2ce02528.svg
(SVG plot data omitted)
diff --git a/previews/PR810/tutorial/example_reservoir/ec3b2cff.svg b/previews/PR810/tutorial/example_reservoir/679b6d0a.svg
similarity index 76%
rename from previews/PR810/tutorial/example_reservoir/ec3b2cff.svg
rename to previews/PR810/tutorial/example_reservoir/679b6d0a.svg
(SVG plot data omitted)
diff --git a/previews/PR810/tutorial/example_reservoir/024c3bed.svg b/previews/PR810/tutorial/example_reservoir/70fda81e.svg
similarity index 85%
rename from previews/PR810/tutorial/example_reservoir/024c3bed.svg
rename to previews/PR810/tutorial/example_reservoir/70fda81e.svg
(SVG plot data omitted)
diff --git a/previews/PR810/tutorial/example_reservoir/e7d306a2.svg b/previews/PR810/tutorial/example_reservoir/7cf51f63.svg
similarity index 84%
rename from previews/PR810/tutorial/example_reservoir/e7d306a2.svg
rename to previews/PR810/tutorial/example_reservoir/7cf51f63.svg
(SVG plot data omitted)
diff --git a/previews/PR810/tutorial/example_reservoir/8aa4be7d.svg b/previews/PR810/tutorial/example_reservoir/8aa4be7d.svg
new file mode 100644
index 000000000..018564641
(SVG plot data omitted)
diff --git a/previews/PR810/tutorial/example_reservoir/8d414c17.svg b/previews/PR810/tutorial/example_reservoir/9d19c527.svg
similarity index 85%
rename from previews/PR810/tutorial/example_reservoir/8d414c17.svg
rename to previews/PR810/tutorial/example_reservoir/9d19c527.svg
(SVG plot data omitted)
diff --git a/previews/PR810/tutorial/example_reservoir/ab1d655a.svg b/previews/PR810/tutorial/example_reservoir/ab1d655a.svg
deleted file mode 100644
index 3f290c752..000000000
(SVG plot data omitted)
diff --git a/previews/PR810/tutorial/example_reservoir/index.html b/previews/PR810/tutorial/example_reservoir/index.html
index a87fd96bb..091e6d9c3 100644
--- a/previews/PR810/tutorial/example_reservoir/index.html
+++ b/previews/PR810/tutorial/example_reservoir/index.html
@@ -9,7 +9,7 @@
 import DataFrames
 import HiGHS
 import Plots

Data

First, we need some data for the problem. For this tutorial, we'll write CSV files to a temporary directory from Julia. If you have an existing file, you could change the filename to point to that instead.

dir = mktempdir()
-filename = joinpath(dir, "example_reservoir.csv")
"/tmp/jl_BA2zco/example_reservoir.csv"

Here is the data

csv_data = """
+filename = joinpath(dir, "example_reservoir.csv")
"/tmp/jl_eLbGPO/example_reservoir.csv"

Here is the data

csv_data = """
 week,inflow,demand,cost
 1,3,7,10.2\n2,2,7.1,10.4\n3,3,7.2,10.6\n4,2,7.3,10.9\n5,3,7.4,11.2\n
 6,2,7.6,11.5\n7,3,7.8,11.9\n8,2,8.1,12.3\n9,3,8.3,12.7\n10,2,8.6,13.1\n
@@ -29,7 +29,7 @@
     Plots.plot(data[!, :cost]; ylabel = "Cost", xlabel = "Week");
     layout = (3, 1),
     legend = false,
-)
Example block output

The number of weeks will be useful later:

T = size(data, 1)
52

Deterministic JuMP model

To start, we construct a deterministic model in pure JuMP.

Create a JuMP model, using HiGHS as the optimizer:

model = Model(HiGHS.Optimizer)
+)
Example block output

The number of weeks will be useful later:

T = size(data, 1)
52

Deterministic JuMP model

To start, we construct a deterministic model in pure JuMP.

Create a JuMP model, using HiGHS as the optimizer:

model = Model(HiGHS.Optimizer)
 set_silent(model)

x_storage[t]: the amount of water in the reservoir at the start of stage t:

reservoir_max = 320.0
 @variable(model, 0 <= x_storage[1:T+1] <= reservoir_max)
53-element Vector{VariableRef}:
  x_storage[1]
@@ -197,13 +197,13 @@
   Dual objective value : 6.82910e+02
 
 * Work counters
-  Solve time (sec)   : 8.62837e-04
+  Solve time (sec)   : 8.56161e-04
   Simplex iterations : 53
   Barrier iterations : 0
   Node count         : -1
 

The total cost is:

objective_value(model)
682.9099999999999

Here's a plot of demand and generation:

Plots.plot(data[!, :demand]; label = "Demand", xlabel = "Week")
 Plots.plot!(value.(u_thermal); label = "Thermal")
-Plots.plot!(value.(u_flow); label = "Hydro")
Example block output

And here's the storage over time:

Plots.plot(value.(x_storage); label = "Storage", xlabel = "Week")
Example block output

Deterministic SDDP model

For the next step, we show how to decompose our JuMP model into SDDP.jl. It should obtain the same solution.

model = SDDP.LinearPolicyGraph(;
+Plots.plot!(value.(u_flow); label = "Hydro")
Example block output

And here's the storage over time:

Plots.plot(value.(x_storage); label = "Storage", xlabel = "Week")
Example block output

Deterministic SDDP model

For the next step, we show how to decompose our JuMP model into SDDP.jl. It should obtain the same solution.

model = SDDP.LinearPolicyGraph(;
     stages = T,
     sense = :Min,
     lower_bound = 0.0,
@@ -252,11 +252,11 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   1.079600e+03  3.157700e+02  4.199100e-02       104   1
-        10   6.829100e+02  6.829100e+02  1.441059e-01      1040   1
+         1   1.079600e+03  3.157700e+02  4.440188e-02       104   1
+        10   6.829100e+02  6.829100e+02  1.417639e-01      1040   1
 -------------------------------------------------------------------
 status         : iteration_limit
-total time (s) : 1.441059e-01
+total time (s) : 1.417639e-01
 total solves   : 1040
 best bound     :  6.829100e+02
 simulation ci  :  7.289889e+02 ± 7.726064e+01
@@ -279,9 +279,9 @@
 
 Plots.plot(data[!, :demand]; label = "Demand", xlabel = "Week")
 Plots.plot!(r_sim; label = "Thermal")
-Plots.plot!(u_sim; label = "Hydro")
Example block output

Perfect. That's the same as we got before.

Now let's look at x_storage. This is a little more complicated, because we need to grab the outgoing value of the state variable in each stage:

x_sim = [sim[:x_storage].out for sim in simulations[1]]
+Plots.plot!(u_sim; label = "Hydro")
Example block output

Perfect. That's the same as we got before.

Now let's look at x_storage. This is a little more complicated, because we need to grab the outgoing value of the state variable in each stage:

x_sim = [sim[:x_storage].out for sim in simulations[1]]
 
-Plots.plot(x_sim; label = "Storage", xlabel = "Week")
Example block output

Stochastic SDDP model

Now we add some randomness to our model. In each stage, we assume that the inflow could be: 2 units lower, with 30% probability; the same as before, with 40% probability; or 5 units higher, with 30% probability.

model = SDDP.LinearPolicyGraph(;
+Plots.plot(x_sim; label = "Storage", xlabel = "Week")
Example block output

Stochastic SDDP model

Now we add some randomness to our model. In each stage, we assume that the inflow could be: 2 units lower, with 30% probability; the same as before, with 40% probability; or 5 units higher, with 30% probability.

model = SDDP.LinearPolicyGraph(;
     stages = T,
     sense = :Min,
     lower_bound = 0.0,
@@ -335,23 +335,23 @@
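The body of the tutorial's model is elided by the diff hunk above. As a simplified, self-contained sketch of how the three-point inflow noise described in the text can be attached with SDDP.parameterize (spill handling is reduced to a single u_spill variable and the initial storage of 0 is an assumption; this is not the tutorial's exact formulation):

sketch = SDDP.LinearPolicyGraph(;
    stages = T,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do subproblem, t
    @variable(subproblem, 0 <= x_storage <= reservoir_max, SDDP.State, initial_value = 0)
    @variable(subproblem, u_flow >= 0)
    @variable(subproblem, u_thermal >= 0)
    @variable(subproblem, u_spill >= 0)
    @variable(subproblem, ω_inflow)
    ## Inflow is 2 lower (p = 0.3), unchanged (p = 0.4), or 5 higher (p = 0.3):
    SDDP.parameterize(subproblem, [-2.0, 0.0, 5.0], [0.3, 0.4, 0.3]) do ω
        fix(ω_inflow, data[t, :inflow] + ω)
        return
    end
    @constraint(subproblem, x_storage.out == x_storage.in + ω_inflow - u_flow - u_spill)
    @constraint(subproblem, u_flow + u_thermal == data[t, :demand])
    @stageobjective(subproblem, data[t, :cost] * u_thermal)
end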
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   4.759100e+02  1.200258e+02  4.662895e-02       208   1
-        46   7.062877e+01  2.475505e+02  1.058435e+00      9568   1
-        84   1.106338e+02  2.631204e+02  2.063247e+00     17472   1
-       100   3.928304e+02  2.672125e+02  2.518251e+00     20800   1
+         1   0.000000e+00  0.000000e+00  4.445004e-02       208   1
+        47   1.393051e+02  2.492638e+02  1.057663e+00      9776   1
+        86   1.564354e+02  2.659338e+02  2.084841e+00     17888   1
+       100   3.631193e+02  2.683719e+02  2.479496e+00     20800   1
 -------------------------------------------------------------------
 status         : iteration_limit
-total time (s) : 2.518251e+00
+total time (s) : 2.479496e+00
 total solves   : 20800
-best bound     :  2.672125e+02
-simulation ci  :  3.123973e+02 ± 4.765300e+01
+best bound     :  2.683719e+02
+simulation ci  :  2.733888e+02 ± 3.837418e+01
 numeric issues : 0
 -------------------------------------------------------------------

Now simulate the policy. This time we do 100 replications because the policy is now stochastic instead of deterministic:

simulations =
     SDDP.simulate(model, 100, [:x_storage, :u_flow, :u_thermal, :ω_inflow]);

And let's plot the use of thermal generation in each replication:

plot = Plots.plot(data[!, :demand]; label = "Demand", xlabel = "Week")
 for simulation in simulations
     Plots.plot!(plot, [sim[:u_thermal] for sim in simulation]; label = "")
 end
-plot
Example block output

Viewing and interpreting static plots like this is difficult, particularly as the number of simulations grows. SDDP.jl includes an interactive SpaghettiPlot that makes things easier:

plot = SDDP.SpaghettiPlot(simulations)
+plot
Example block output

Viewing and interpreting static plots like this is difficult, particularly as the number of simulations grows. SDDP.jl includes an interactive SpaghettiPlot that makes things easier:

plot = SDDP.SpaghettiPlot(simulations)
 SDDP.add_spaghetti(plot; title = "Storage") do sim
     return sim[:x_storage].out
 end
@@ -427,39 +427,42 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   6.294630e+04  4.219840e+04  2.404258e-01      2707   1
-         5   1.552348e+05  9.082613e+04  1.774935e+00     19567   1
-        12   2.659207e+05  9.302003e+04  4.033805e+00     40596   1
-        15   1.484517e+05  9.319342e+04  5.628028e+00     52877   1
-        18   9.571809e+03  9.323922e+04  6.634310e+00     59958   1
-        20   3.876251e+05  9.326436e+04  9.501914e+00     78476   1
-        25   1.837013e+05  9.334504e+04  1.454321e+01    108443   1
-        29   5.378171e+04  9.335848e+04  1.979666e+01    135495   1
-        37   2.330216e+05  9.337001e+04  2.659127e+01    166303   1
-        43   7.152926e+04  9.337441e+04  3.235910e+01    189825   1
-        51   1.426944e+05  9.337775e+04  3.750609e+01    209401   1
-        61   1.951056e+04  9.337986e+04  4.271820e+01    228151   1
-        68   1.357792e+05  9.338090e+04  4.798858e+01    243980   1
-        75   8.543836e+04  9.338263e+04  5.417524e+01    263761   1
-        79   1.440154e+05  9.338364e+04  6.035429e+01    282285   1
-        81   1.599147e+05  9.338378e+04  6.688132e+01    301011   1
-        85   6.785331e+04  9.338609e+04  7.196572e+01    315167   1
-        88   1.421650e+05  9.338729e+04  7.715207e+01    329320   1
-        92   7.093559e+04  9.338862e+04  8.297624e+01    344516   1
-        94   3.166021e+05  9.338889e+04  9.057555e+01    363450   1
-       100   1.084466e+05  9.339011e+04  9.695549e+01    379068   1
+         1   3.129477e+04  2.410097e+04  1.429400e-01      1459   1
+         7   3.912259e+04  8.832886e+04  1.330084e+00     15205   1
+        10   1.083430e+05  9.250045e+04  2.357529e+00     26238   1
+        13   2.588539e+05  9.329172e+04  5.359828e+00     45799   1
+        14   2.504203e+05  9.334514e+04  6.836051e+00     56618   1
+        16   1.205895e+05  9.334634e+04  7.912949e+00     63904   1
+        21   1.145414e+05  9.335654e+04  1.300891e+01     94079   1
+        30   1.712406e+05  9.337112e+04  1.924543e+01    126762   1
+        36   3.406886e+05  9.337325e+04  2.506985e+01    153612   1
+        47   4.332582e+04  9.337875e+04  3.027704e+01    175901   1
+        51   1.634254e+05  9.337981e+04  3.577433e+01    197545   1
+        53   3.974429e+05  9.338067e+04  4.172650e+01    218559   1
+        54   4.038175e+05  9.338101e+04  4.688607e+01    236034   1
+        61   1.649721e+05  9.338615e+04  5.283478e+01    254983   1
+        64   3.177687e+05  9.338634e+04  5.924092e+01    274544   1
+        66   1.436600e+05  9.338666e+04  6.546633e+01    292854   1
+        68   3.437550e+05  9.338708e+04  7.089886e+01    308252   1
+        71   2.662122e+05  9.338883e+04  7.811391e+01    327813   1
+        74   2.533959e+05  9.339006e+04  8.535554e+01    346542   1
+        79   1.620139e+05  9.339146e+04  9.251919e+01    364237   1
+        85   1.495574e+05  9.339233e+04  1.002059e+02    382559   1
+        91   1.701819e+05  9.339296e+04  1.052799e+02    394433   1
+        95   1.221699e+05  9.339330e+04  1.125305e+02    410461   1
+       100   3.531429e+04  9.339343e+04  1.179270e+02    422124   1
 -------------------------------------------------------------------
 status         : iteration_limit
-total time (s) : 9.695549e+01
-total solves   : 379068
-best bound     :  9.339011e+04
-simulation ci  :  8.466519e+04 ± 1.533736e+04
+total time (s) : 1.179270e+02
+total solves   : 422124
+best bound     :  9.339343e+04
+simulation ci  :  9.498564e+04 ± 1.929349e+04
 numeric issues : 0
 -------------------------------------------------------------------

When we simulate now, each trajectory will be a different length, because each cycle has a 95% probability of continuing and a 5% probability of stopping.

simulations = SDDP.simulate(model, 3);
 length.(simulations)
3-element Vector{Int64}:
-  312
- 2080
-  312

We can simulate a fixed number of cycles by passing a sampling_scheme:

simulations = SDDP.simulate(
+  884
+ 2340
+  676

We can simulate a fixed number of cycles by passing a sampling_scheme:

simulations = SDDP.simulate(
     model,
     100,
     [:x_storage, :u_flow];
@@ -496,4 +499,4 @@
         return sim[:u_flow]
     end;
     layout = (2, 1),
-)
Example block output

Next steps

Our model is very basic. There are many aspects that we could improve:

  • Can you add a second reservoir to make a river chain?

  • Can you modify the problem and data to use proper units, including a conversion between the volume of water flowing through the turbine and the electrical power output?

+)
Example block output

Next steps

Our model is very basic. There are many aspects that we could improve:

  • Can you add a second reservoir to make a river chain?

  • Can you modify the problem and data to use proper units, including a conversion between the volume of water flowing through the turbine and the electrical power output?

diff --git a/previews/PR810/tutorial/first_steps/index.html b/previews/PR810/tutorial/first_steps/index.html
index 7025089cc..a2bd23528 100644
--- a/previews/PR810/tutorial/first_steps/index.html
+++ b/previews/PR810/tutorial/first_steps/index.html
@@ -228,14 +228,14 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   1.750000e+04  2.500000e+03  3.592968e-03        12   1
-        10   1.000000e+04  8.333333e+03  1.350904e-02       120   1
+         1   2.750000e+04  3.437500e+03  3.911018e-03        12   1
+        10   5.000000e+03  8.333333e+03  1.412487e-02       120   1
 -------------------------------------------------------------------
 status         : iteration_limit
-total time (s) : 1.350904e-02
+total time (s) : 1.412487e-02
 total solves   : 120
 best bound     :  8.333333e+03
-simulation ci  :  8.000000e+03 ± 2.400500e+03
+simulation ci  :  8.031250e+03 ± 4.822873e+03
 numeric issues : 0
 -------------------------------------------------------------------

There's a lot going on in this printout! Let's break it down.

The first section, "problem," gives some problem statistics. In this example there are 3 nodes, 1 state variable, and 27 scenarios ($3^3$). We haven't solved this problem before so there are no existing cuts.

The "options" section lists some options we are using to solve the problem. For more information on the numerical stability report, read the Numerical stability report section.

The "subproblem structure" section also needs explaining. This looks at all of the nodes in the policy graph and reports the minimum and maximum number of variables and each constraint type in the corresponding subproblem. In this case each subproblem has 7 variables and various numbers of different constraint types. Note that the exact numbers may not correspond to the formulation as you wrote it, because SDDP.jl adds some extra variables for the cost-to-go function.

Then comes the iteration log, which is the main part of the printout. It has the following columns:

  • iteration: the SDDP iteration
  • simulation: the cost of the single forward pass simulation for that iteration. This value is stochastic and is not guaranteed to improve over time. However, it's useful to check that the units are reasonable, and that it is not deterministic if you intended for the problem to be stochastic, etc.
  • bound: this is a lower bound (upper if maximizing) for the value of the optimal policy. This bound should be monotonically improving (increasing if minimizing, decreasing if maximizing), but in some cases it can temporarily worsen due to cut selection, especially in the early iterations of the algorithm.
  • time (s): the total number of seconds spent solving so far
  • solves: the total number of subproblem solves to date. This can be very large!
  • pid: the ID of the processor used to solve that iteration. This should be 1 unless you are using parallel computation.

In addition, if the first character of a line is , then SDDP.jl experienced numerical issues during the solve, but successfully recovered.

The printout finishes with some summary statistics:

  • status: why did the solver stop?
  • total time (s), best bound, and total solves are the values from the last iteration of the solve.
  • simulation ci: a confidence interval that estimates the quality of the policy from the Simulation column.
  • numeric issues: the number of iterations that experienced numerical issues.
Warning

The simulation ci result can be misleading if you run a small number of iterations, or if the initial simulations are very bad. On a more technical note, it is an in-sample simulation, which may not reflect the true performance of the policy. See Obtaining bounds for more details.

Obtaining the decision rule

After training a policy, we can create a decision rule using SDDP.DecisionRule:

rule = SDDP.DecisionRule(model; node = 1)
A decision rule for node 1

Then, to evaluate the decision rule, we use SDDP.evaluate:

solution = SDDP.evaluate(
     rule;
@@ -254,31 +254,31 @@
 replication = 1
 stage = 2
 simulations[replication][stage]
Dict{Symbol, Any} with 10 entries:
-  :volume             => State{Float64}(200.0, 150.0)
+  :volume             => State{Float64}(200.0, 100.0)
   :hydro_spill        => 0.0
-  :bellman_term       => 0.0
-  :noise_term         => 100.0
+  :bellman_term       => 2500.0
+  :noise_term         => 0.0
   :node_index         => 2
-  :stage_objective    => 0.0
+  :stage_objective    => 5000.0
   :objective_state    => nothing
-  :thermal_generation => 0.0
-  :hydro_generation   => 150.0
+  :thermal_generation => 50.0
+  :hydro_generation   => 100.0
   :belief             => Dict(2=>1.0)

Ignore many of the entries for now; they will be relevant later.

One element of interest is :volume.

outgoing_volume = map(simulations[1]) do node
     return node[:volume].out
 end
3-element Vector{Float64}:
  200.0
- 150.0
+ 100.0
    0.0

Another is :thermal_generation.

thermal_generation = map(simulations[1]) do node
     return node[:thermal_generation]
 end
3-element Vector{Float64}:
- 100.0
-   0.0
-   0.0

Obtaining bounds

Because the optimal policy is stochastic, one common approach to quantify the quality of the policy is to construct a confidence interval for the expected cost by summing the stage objectives along each simulation.

objectives = map(simulations) do simulation
+ 150.0
+  50.0
+  50.0

Obtaining bounds

Because the optimal policy is stochastic, one common approach to quantify the quality of the policy is to construct a confidence interval for the expected cost by summing the stage objectives along each simulation.

objectives = map(simulations) do simulation
     return sum(stage[:stage_objective] for stage in simulation)
 end
 
 μ, ci = SDDP.confidence_interval(objectives)
-println("Confidence interval: ", μ, " ± ", ci)
Confidence interval: 8450.0 ± 855.1953879173764

This confidence interval is an estimate for an upper bound of the policy's quality. We can calculate the lower bound using SDDP.calculate_bound.

println("Lower bound: ", SDDP.calculate_bound(model))
Lower bound: 8333.333333333332
Tip

The upper and lower bounds are reversed if maximizing, i.e., SDDP.calculate_bound returns an upper bound.

Custom recorders

In addition to simulating the primal values of variables, we can also pass custom recorder functions. Each of these functions takes one argument, the JuMP subproblem corresponding to each node. This function gets called after we have solved each node as we traverse the policy graph in the simulation.

For example, the dual of the demand constraint (which we named demand_constraint) corresponds to the price we should charge for electricity, since it represents the cost of each additional unit of demand. To calculate this, we can go:

simulations = SDDP.simulate(
+println("Confidence interval: ", μ, " ± ", ci)
Confidence interval: 8425.0 ± 901.7823596868726

This confidence interval is an estimate for an upper bound of the policy's quality. We can calculate the lower bound using SDDP.calculate_bound.

println("Lower bound: ", SDDP.calculate_bound(model))
Lower bound: 8333.333333333332
Tip

The upper and lower bounds are reversed if maximizing, i.e., SDDP.calculate_bound returns an upper bound.

Custom recorders

In addition to simulating the primal values of variables, we can also pass custom recorder functions. Each of these functions takes one argument, the JuMP subproblem corresponding to each node. This function gets called after we have solved each node as we traverse the policy graph in the simulation.

For example, the dual of the demand constraint (which we named demand_constraint) corresponds to the price we should charge for electricity, since it represents the cost of each additional unit of demand. To calculate this, we can go:

simulations = SDDP.simulate(
     model,
     1;  ## Perform a single simulation
     custom_recorders = Dict{Symbol,Function}(
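        ## The Dict entries are elided by the diff hunk below. Based on the text
        ## above, a sketch of a recorder for the dual of `demand_constraint` is:
        ##     :price => (sp::JuMP.Model) -> JuMP.dual(sp[:demand_constraint]),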
@@ -290,5 +290,5 @@
     return node[:price]
 end
3-element Vector{Float64}:
   50.0
-  50.0
- 150.0

Extracting the marginal water values

Finally, we can use SDDP.ValueFunction and SDDP.evaluate to obtain and evaluate the value function at different points in the state-space.

Note

By "value function" we mean $\mathbb{E}_{j \in i^+, \varphi \in \Omega_j}[V_j(x^\prime, \varphi)]$, not the function $V_i(x, \omega)$.

First, we construct a value function from the first subproblem:

V = SDDP.ValueFunction(model; node = 1)
A value function for node 1

Then we can evaluate V at a point:

cost, price = SDDP.evaluate(V, Dict("volume" => 10))
(21499.999999999996, Dict(:volume => -99.99999999999999))

This returns the cost-to-go (cost), and the gradient of the cost-to-go function with respect to each state variable. Note that since we are minimizing, the price has a negative sign: each additional unit of water leads to a decrease in the expected long-run cost.

+ 100.0
+ 150.0

Extracting the marginal water values

Finally, we can use SDDP.ValueFunction and SDDP.evaluate to obtain and evaluate the value function at different points in the state-space.

Note

By "value function" we mean $\mathbb{E}_{j \in i^+, \varphi \in \Omega_j}[V_j(x^\prime, \varphi)]$, not the function $V_i(x, \omega)$.

First, we construct a value function from the first subproblem:

V = SDDP.ValueFunction(model; node = 1)
A value function for node 1

Then we can evaluate V at a point:

cost, price = SDDP.evaluate(V, Dict("volume" => 10))
(21499.999999999993, Dict(:volume => -99.99999999999999))

This returns the cost-to-go (cost), and the gradient of the cost-to-go function with respect to each state variable. Note that since we are minimizing, the price has a negative sign: each additional unit of water leads to a decrease in the expected long-run cost.

diff --git a/previews/PR810/tutorial/inventory/64851468.svg b/previews/PR810/tutorial/inventory/60ce68dd.svg
similarity index 84%
rename from previews/PR810/tutorial/inventory/64851468.svg
rename to previews/PR810/tutorial/inventory/60ce68dd.svg
(SVG plot data omitted)
diff --git a/previews/PR810/tutorial/inventory/e6aeb0c9.svg b/previews/PR810/tutorial/inventory/e23b5f9f.svg
similarity index 84%
rename from previews/PR810/tutorial/inventory/e6aeb0c9.svg
rename to previews/PR810/tutorial/inventory/e23b5f9f.svg
(SVG plot data omitted)
diff --git a/previews/PR810/tutorial/inventory/index.html b/previews/PR810/tutorial/inventory/index.html
index 0bce2d81a..dcb9a92ac 100644
--- a/previews/PR810/tutorial/inventory/index.html
+++ b/previews/PR810/tutorial/inventory/index.html
@@ -72,27 +72,23 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   4.555632e+05  4.573582e+04  1.892209e-02       212   1
-        56   1.154817e+05  1.443323e+05  1.029831e+00     15172   1
-       107   1.529994e+05  1.443373e+05  2.034419e+00     28184   1
-       169   1.632711e+05  1.443373e+05  3.050451e+00     41328   1
-       216   1.761974e+05  1.443373e+05  4.057784e+00     52392   1
-       265   1.039868e+05  1.443373e+05  5.074113e+00     62780   1
-       305   6.508158e+04  1.443373e+05  6.094956e+00     72360   1
-       347   1.746395e+05  1.443373e+05  7.099320e+00     81264   1
-       386   1.116079e+05  1.443373e+05  8.183207e+00     90632   1
-       429   2.027237e+05  1.443374e+05  9.197365e+00     99748   1
-       485   1.255026e+05  1.443374e+05  1.050865e+01    111620   1
+         1   3.555632e+05  4.573582e+04  1.939392e-02       212   1
+        55   1.726543e+05  1.443370e+05  1.076247e+00     14960   1
+       110   1.879026e+05  1.443374e+05  2.077597e+00     28820   1
+       174   1.325763e+05  1.443374e+05  3.090309e+00     42388   1
+       225   1.785132e+05  1.443374e+05  4.096940e+00     54300   1
+       279   1.046605e+05  1.443374e+05  5.102572e+00     65748   1
+       288   1.135447e+05  1.443374e+05  5.356144e+00     67656   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 1.050865e+01
-total solves   : 111620
+total time (s) : 5.356144e+00
+total solves   : 67656
 best bound     :  1.443374e+05
-simulation ci  :  1.444482e+05 ± 2.751330e+03
+simulation ci  :  1.441118e+05 ± 3.704570e+03
 numeric issues : 0
 -------------------------------------------------------------------
-Confidence interval: 142687.05 ± 3716.06
+Confidence interval: 140837.16 ± 3542.42
 Lower bound: 144337.44

Plot the optimal inventory levels:

plt = SDDP.publication_plot(
     simulations;
     title = "x_inventory.out + u_buy.out",
@@ -101,7 +97,7 @@
     ylims = (0, 1_000),
 ) do data
     return data[:x_inventory].out + data[:u_buy].out
-end
Example block output

In the early stages, we indeed recover an order-up-to policy. However, there are end-of-horizon effects as the agent tries to optimize their decision making knowing that they have 10 realizations of demand.

Infinite horizon

We can remove the end-of-horizon effects by considering an infinite horizon model. We assume a discount factor $\alpha=0.95$:

α = 0.95
+end
Example block output

In the early stages, we indeed recover an order-up-to policy. However, there are end-of-horizon effects as the agent tries to optimize their decision making knowing that they have 10 realizations of demand.

Infinite horizon

We can remove the end-of-horizon effects by considering an infinite horizon model. We assume a discount factor $\alpha=0.95$:

α = 0.95
 graph = SDDP.LinearGraph(2)
 SDDP.add_edge(graph, 2 => 2, α)
 graph
Root
@@ -171,29 +167,29 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   1.269150e+06  4.703021e+04  2.517104e-02       421   1
-        30   2.976537e+05  2.926072e+05  1.056402e+00     14604   1
-        56   9.668994e+04  3.115604e+05  2.064710e+00     26054   1
-        76   2.051197e+05  3.124173e+05  3.079618e+00     36679   1
-        97   5.345856e+05  3.126419e+05  4.127804e+00     46780   1
-       115   9.010830e+05  3.126594e+05  5.237755e+00     56164   1
-       135   4.574187e+05  3.126642e+05  6.242874e+00     64416   1
-       153   8.147263e+05  3.126649e+05  7.352225e+00     72729   1
-       170   4.383869e+05  3.126650e+05  8.413833e+00     80243   1
-       186   3.350605e+05  3.126650e+05  9.436581e+00     86958   1
-       248   7.271447e+05  3.126650e+05  1.454413e+01    114005   1
-       291   8.546395e+05  3.126650e+05  1.988850e+01    134334   1
-       330   4.186816e+05  3.126650e+05  2.502303e+01    147981   1
-       351   8.329132e+05  3.126650e+05  3.037550e+01    159069   1
-       372   7.599868e+05  3.126650e+05  3.576521e+01    168519   1
-       392   4.926184e+05  3.126650e+05  4.093949e+01    176876   1
-       400   1.508921e+05  3.126650e+05  4.210835e+01    179089   1
+         1   1.207737e+06  4.704379e+04  2.554393e-02       442   1
+        25   3.929922e+05  3.063274e+05  1.061369e+00     14662   1
+        45   4.918127e+04  3.122041e+05  2.580965e+00     23796   1
+        78   1.253479e+05  3.126516e+05  3.590325e+00     34434   1
+       102   8.652224e+05  3.126637e+05  4.656458e+00     44496   1
+       125   1.801968e+05  3.126649e+05  5.671608e+00     53066   1
+       147   5.729555e+05  3.126650e+05  6.730793e+00     60921   1
+       166   8.034395e+05  3.126650e+05  7.818564e+00     68563   1
+       182   4.155658e+05  3.126650e+05  8.862454e+00     75740   1
+       195   4.576289e+05  3.126650e+05  9.894990e+00     82221   1
+       252   3.248711e+05  3.126650e+05  1.499934e+01    106869   1
+       287   1.328155e+06  3.126650e+05  2.066777e+01    126938   1
+       317   5.672289e+05  3.126650e+05  2.578931e+01    140345   1
+       348   4.242395e+05  3.126650e+05  3.101493e+01    152136   1
+       370   7.337974e+05  3.126650e+05  3.615803e+01    162931   1
+       391   3.119868e+05  3.126650e+05  4.120424e+01    173599   1
+       400   2.543868e+05  3.126650e+05  4.317341e+01    177430   1
 -------------------------------------------------------------------
 status         : iteration_limit
-total time (s) : 4.210835e+01
-total solves   : 179089
+total time (s) : 4.317341e+01
+total solves   : 177430
 best bound     :  3.126650e+05
-simulation ci  :  3.267826e+05 ± 2.594249e+04
+simulation ci  :  3.219091e+05 ± 2.988930e+04
 numeric issues : 0
 -------------------------------------------------------------------

Plot the optimal inventory levels:

plt = SDDP.publication_plot(
     simulations;
@@ -204,4 +200,4 @@
 ) do data
     return data[:x_inventory].out + data[:u_buy].out
 end
-Plots.hline!(plt, [662]; label = "Analytic solution")
Example block output

We again recover an order-up-to policy. The analytic solution is to order-up-to 662 units. We do not precisely recover this solution because we used a sample average approximation of 20 elements. If we increased the number of samples, our solution would approach the analytic solution.

+Plots.hline!(plt, [662]; label = "Analytic solution")
Example block output

We again recover an order-up-to policy. The analytic solution is to order-up-to 662 units. We do not precisely recover this solution because we used a sample average approximation of 20 elements. If we increased the number of samples, our solution would approach the analytic solution.

diff --git a/previews/PR810/tutorial/markov_uncertainty/index.html b/previews/PR810/tutorial/markov_uncertainty/index.html
index 5a2051426..8ec58ccc2 100644
--- a/previews/PR810/tutorial/markov_uncertainty/index.html
+++ b/previews/PR810/tutorial/markov_uncertainty/index.html
@@ -85,14 +85,14 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   1.875000e+04  1.991887e+03  5.095005e-03        18   1
-        40   5.000000e+03  8.072917e+03  1.326931e-01      1320   1
+         1   9.375000e+03  1.991887e+03  5.294085e-03        18   1
+        40   1.875000e+03  8.072917e+03  1.307061e-01      1320   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 1.326931e-01
+total time (s) : 1.307061e-01
 total solves   : 1320
 best bound     :  8.072917e+03
-simulation ci  :  5.763897e+03 ± 1.456483e+03
+simulation ci  :  5.893516e+03 ± 1.634605e+03
 numeric issues : 0
 -------------------------------------------------------------------

Instead of performing a Monte Carlo simulation like the previous tutorials, we may want to simulate one particular sequence of noise realizations. This historical simulation can also be conducted by passing a SDDP.Historical sampling scheme to the sampling_scheme keyword of the SDDP.simulate function.

We can confirm that the historical sequence of nodes was visited by querying the :node_index key of the simulation results.

simulations = SDDP.simulate(
     model;
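    ## The sampling_scheme argument is elided by the diff hunk below. Based on the
    ## text above it is an SDDP.Historical scheme; a sketch of its general form,
    ## with illustrative (node, noise) pairs matching the nodes printed below, is:
    ##     sampling_scheme = SDDP.Historical([
    ##         ((1, 1), 50.0),
    ##         ((2, 2), 100.0),
    ##         ((3, 1), 50.0),
    ##     ]),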
@@ -106,4 +106,4 @@
 [stage[:node_index] for stage in simulations[1]]
3-element Vector{Tuple{Int64, Int64}}:
  (1, 1)
  (2, 2)
- (3, 1)
+ (3, 1)
diff --git a/previews/PR810/tutorial/mdps/index.html b/previews/PR810/tutorial/mdps/index.html
index 02b376308..030fb7ec3 100644
--- a/previews/PR810/tutorial/mdps/index.html
+++ b/previews/PR810/tutorial/mdps/index.html
@@ -61,11 +61,11 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   2.499895e+01  1.562631e+00  1.627302e-02         6   1
-        40   8.333333e+00  8.333333e+00  6.881430e-01       246   1
+         1   2.499895e+01  1.562631e+00  1.644802e-02         6   1
+        40   8.333333e+00  8.333333e+00  6.874740e-01       246   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 6.881430e-01
+total time (s) : 6.874740e-01
 total solves   : 246
 best bound     :  8.333333e+00
 simulation ci  :  8.810723e+00 ± 8.167195e-01
@@ -154,14 +154,14 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   0.000000e+00  8.100000e+00  2.646923e-03         5   1
-        40   4.000000e+00  6.561000e+00  6.804099e-01      2790   1
+         1   0.000000e+00  1.000000e+01  6.258965e-03        17   1
+        40   0.000000e+00  6.561000e+00  7.429550e-01      2778   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 6.804099e-01
-total solves   : 2790
+total time (s) : 7.429550e-01
+total solves   : 2778
 best bound     :  6.561000e+00
-simulation ci  :  5.875000e+00 ± 2.488335e+00
+simulation ci  :  8.575000e+00 ± 3.244899e+00
 numeric issues : 0
 -------------------------------------------------------------------

Simulating a cyclic policy graph requires an explicit sampling_scheme that does not terminate early based on the cycle probability:

simulations = SDDP.simulate(
     model,
@@ -179,4 +179,4 @@
 
 print(join([join(path[i, :], ' ') for i in 1:size(path, 1)], '\n'))
1 2 3 ⋅
 ⋅ ▩ 4 †
-† ⋅ 5 *
Tip

This formulation will likely struggle as the number of cells in the maze increases. Can you think of an equivalent formulation that uses fewer state variables?

+† ⋅ 5 *
Tip

This formulation will likely struggle as the number of cells in the maze increases. Can you think of an equivalent formulation that uses fewer state variables?

diff --git a/previews/PR810/tutorial/objective_states/index.html b/previews/PR810/tutorial/objective_states/index.html index 015e72fd8..d9a172915 100644 --- a/previews/PR810/tutorial/objective_states/index.html +++ b/previews/PR810/tutorial/objective_states/index.html @@ -79,26 +79,24 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 8.505000e+03 3.261069e+03 2.157211e-02 39 1 - 245 7.200000e+03 5.092593e+03 1.023995e+00 11355 1 - 456 2.812500e+03 5.092593e+03 2.026049e+00 20184 1 - 494 9.453125e+03 5.092593e+03 2.184438e+00 21666 1 + 1 1.757812e+03 3.181818e+03 2.242303e-02 39 1 + 66 3.918750e+03 5.085973e+03 3.104100e-01 3474 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 2.184438e+00 -total solves : 21666 -best bound : 5.092593e+03 -simulation ci : 5.137930e+03 ± 3.372036e+02 +total time (s) : 3.104100e-01 +total solves : 3474 +best bound : 5.085973e+03 +simulation ci : 4.799623e+03 ± 8.870322e+02 numeric issues : 0 ------------------------------------------------------------------- Finished training and simulating.

To demonstrate how the objective states are updated, consider the sequence of noise observations:

[stage[:noise_term] for stage in simulations[1]]
3-element Vector{@NamedTuple{fuel::Float64, inflow::Float64}}:
- (fuel = 0.75, inflow = 50.0)
- (fuel = 0.75, inflow = 50.0)
- (fuel = 1.1, inflow = 100.0)

Thus, the fuel cost in the first stage should be 0.75 * 50 = 37.5. The fuel cost in the second stage should be 0.75 * 37.5 = 28.125. The fuel cost in the third stage should be 1.1 * 28.125 = 30.9375.

To confirm this, the values of the objective state in a simulation can be queried using the :objective_state key.

[stage[:objective_state] for stage in simulations[1]]
3-element Vector{Float64}:
- 37.5
- 28.125
- 30.937500000000004

Multi-dimensional objective states

You can construct multi-dimensional price processes using NTuples. Just replace every scalar value associated with the objective state by a tuple. For example, initial_value = 1.0 becomes initial_value = (1.0, 2.0).

Here is an example:

model = SDDP.LinearPolicyGraph(;
+ (fuel = 1.1, inflow = 100.0)
+ (fuel = 1.1, inflow = 50.0)
+ (fuel = 1.1, inflow = 0.0)

Thus, the fuel cost in the first stage should be 1.1 * 50 = 55. The fuel cost in the second stage should be 1.1 * 55 = 60.5. The fuel cost in the third stage should be 1.1 * 60.5 = 66.55.

To confirm this, the values of the objective state in a simulation can be queried using the :objective_state key.

[stage[:objective_state] for stage in simulations[1]]
3-element Vector{Float64}:
+ 55.00000000000001
+ 60.500000000000014
+ 66.55000000000003

Multi-dimensional objective states

You can construct multi-dimensional price processes using NTuples. Just replace every scalar value associated with the objective state by a tuple. For example, initial_value = 1.0 becomes initial_value = (1.0, 2.0).
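
Before the full example below, here is a hedged sketch of the piece that changes inside the subproblem builder: the SDDP.add_objective_state call with tuple-valued keyword arguments. The update rule and Lipschitz constants are illustrative assumptions, not the tutorial's exact values.

SDDP.add_objective_state(
    subproblem;
    initial_value = (50.0, 50.0),       # (current price, previous price)
    lipschitz = (10_000.0, 10_000.0),   # one constant per dimension
) do fuel_cost, ω
    # The update receives and returns a tuple: the new price depends on the
    # current and lagged prices plus the noise.
    new_cost = 0.75 * fuel_cost[1] + 0.25 * fuel_cost[2] + 0.5 * (ω.fuel - 1.0)
    return (new_cost, fuel_cost[1])
end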

Here is an example:

model = SDDP.LinearPolicyGraph(;
     stages = 3,
     sense = :Min,
     lower_bound = 0.0,
@@ -172,18 +170,18 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   4.250000e+03  2.663219e+03  2.413511e-02        39   1
-        78   1.000000e+04  5.135984e+03  4.012141e-01      3942   1
+         1   8.687500e+03  2.123386e+03  2.354884e-02        39   1
+        52   1.562500e+03  5.135984e+03  2.527518e-01      2628   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 4.012141e-01
-total solves   : 3942
+total time (s) : 2.527518e-01
+total solves   : 2628
 best bound     :  5.135984e+03
-simulation ci  :  5.347281e+03 ± 8.138305e+02
+simulation ci  :  4.408869e+03 ± 1.067981e+03
 numeric issues : 0
 -------------------------------------------------------------------
 
 Finished training and simulating.

This time, since our objective state is two-dimensional, the objective states are tuples with two elements:

[stage[:objective_state] for stage in simulations[1]]
3-element Vector{Tuple{Float64, Float64}}:
  (55.0, 50.0)
- (52.5, 55.0)
- (61.25, 52.5)

Warnings

There are a number of things to be aware of when using objective states.

  • The key assumption is that price is independent of the states and actions in the model.

    That means that the price cannot appear in any @constraints. Nor can you use any @variables in the update function.

  • Choosing an appropriate Lipschitz constant is difficult.

    The points discussed in Choosing an initial bound are relevant. The Lipschitz constant should not be chosen as large as possible, since keeping it smaller helps with convergence and with the numerical issues discussed above; however, if it is chosen too small, it may cut off the feasible region and lead to a sub-optimal solution.

  • You need to ensure that the cost-to-go function is concave with respect to the objective state before the update.

    If the update function is linear, this is always the case. In some situations, the update function can be nonlinear (e.g., multiplicative, as we have above). In general, placing constraints on the price (e.g., clamp(price, 0, 1)) will destroy concavity. Caveat emptor. It's up to you to decide whether this is a problem. If it isn't, you'll get a good heuristic, but with no guarantee of global optimality.

+ (47.5, 55.0) + (48.75, 47.5)

Warnings

There are a number of things to be aware of when using objective states.

  • The key assumption is that price is independent of the states and actions in the model.

    That means that the price cannot appear in any @constraints. Nor can you use any @variables in the update function.

  • Choosing an appropriate Lipschitz constant is difficult.

    The points discussed in Choosing an initial bound are relevant. The Lipschitz constant should not be chosen as large as possible, since keeping it smaller helps with convergence and with the numerical issues discussed above; however, if it is chosen too small, it may cut off the feasible region and lead to a sub-optimal solution.

  • You need to ensure that the cost-to-go function is concave with respect to the objective state before the update.

    If the update function is linear, this is always the case. In some situations, the update function can be nonlinear (e.g., multiplicative, as we have above). In general, placing constraints on the price (e.g., clamp(price, 0, 1)) will destroy concavity. Caveat emptor. It's up to you to decide whether this is a problem. If it isn't, you'll get a good heuristic, but with no guarantee of global optimality. A small illustration follows this list.
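
As a hedged illustration of this last point, here are two alternative update functions; use at most one per subproblem, and note that the coefficients and bounds are assumptions chosen only to contrast the two cases.

# Affine in the objective state y: concavity of the cost-to-go function is preserved.
SDDP.add_objective_state(sp; initial_value = 50.0, lipschitz = 1e4) do y, ω
    return 0.9 * y + ω.fuel
end
# Nonlinear and clamped: concavity may be destroyed, so the resulting policy is
# only a heuristic with no guarantee of global optimality.
SDDP.add_objective_state(sp; initial_value = 50.0, lipschitz = 1e4) do y, ω
    return clamp(y * ω.fuel, 0.0, 100.0)
end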

diff --git a/previews/PR810/tutorial/objective_uncertainty/index.html b/previews/PR810/tutorial/objective_uncertainty/index.html index 4649c198f..1162e87ae 100644 --- a/previews/PR810/tutorial/objective_uncertainty/index.html +++ b/previews/PR810/tutorial/objective_uncertainty/index.html @@ -82,16 +82,16 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 3.375000e+04 5.735677e+03 3.618002e-03 12 1 - 40 1.125000e+04 1.062500e+04 6.784797e-02 642 1 + 1 1.562500e+04 3.958333e+03 4.122019e-03 12 1 + 40 2.437500e+04 1.062500e+04 6.924701e-02 642 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 6.784797e-02 +total time (s) : 6.924701e-02 total solves : 642 best bound : 1.062500e+04 -simulation ci : 1.148327e+04 ± 2.624878e+03 +simulation ci : 1.076202e+04 ± 2.592898e+03 numeric issues : 0 ------------------------------------------------------------------- -Confidence interval: 10831.25 ± 735.5 -Lower bound: 10625.0 +Confidence interval: 10830.0 ± 778.99 +Lower bound: 10625.0 diff --git a/previews/PR810/tutorial/pglib_opf/index.html b/previews/PR810/tutorial/pglib_opf/index.html index dc50b6536..b7fdce630 100644 --- a/previews/PR810/tutorial/pglib_opf/index.html +++ b/previews/PR810/tutorial/pglib_opf/index.html @@ -61,24 +61,25 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 1.613400e+06 8.488492e+04 1.760440e-01 43 1 - 3 1.433283e+06 3.495291e+05 2.528774e+00 433 1 - 8 3.192907e+05 4.044829e+05 3.745779e+00 660 1 - 10 2.681262e+05 4.142032e+05 4.672973e+00 794 1 + 1 7.393997e+04 4.830356e+04 8.193421e-02 19 1 + 4 2.709192e+06 3.546782e+05 1.305999e+00 224 1 + 5 1.044890e+06 3.860311e+05 2.347426e+00 407 1 + 8 5.564860e+05 4.165025e+05 3.568584e+00 592 1 + 10 8.739944e+04 4.209037e+05 3.781372e+00 626 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 4.672973e+00 -total solves : 794 -best bound : 4.142032e+05 -simulation ci : 1.208045e+06 ± 9.580998e+05 +total time (s) : 3.781372e+00 +total solves : 626 +best bound : 4.209037e+05 +simulation ci : 5.168449e+05 ± 5.175064e+05 numeric issues : 0 -------------------------------------------------------------------

To more accurately simulate the dynamics of the problem, a common approach is to write the cuts representing the policy to a file, and then read them into a non-convex model:

SDDP.write_cuts_to_file(convex, "convex.cuts.json")
 non_convex = build_model(PowerModels.ACPPowerModel)
 SDDP.read_cuts_from_file(non_convex, "convex.cuts.json")

Now we can simulate non_convex to evaluate the policy.

result = SDDP.simulate(non_convex, 1)
1-element Vector{Vector{Dict{Symbol, Any}}}:
- [Dict(:bellman_term => 396905.8160278323, :noise_term => 2, :node_index => 1, :stage_objective => 18075.516157803242, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 405593.9330474827, :noise_term => 0, :node_index => 1, :stage_objective => 18075.524375402885, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 404464.1789176448, :noise_term => 5, :node_index => 1, :stage_objective => 18075.521443474252, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 403334.42638901237, :noise_term => 5, :node_index => 1, :stage_objective => 18075.519842267324, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 407574.0489648888, :noise_term => 2, :node_index => 1, :stage_objective => 18597.290891936373, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 406444.2869164994, :noise_term => 5, :node_index => 1, :stage_objective => 18075.52936203048, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 405314.5307962587, :noise_term => 5, :node_index => 1, :stage_objective => 18075.523433878952, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 404184.7771405804, :noise_term => 5, :node_index => 1, :stage_objective => 18075.52096931416, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 403055.02490759455, :noise_term => 5, :node_index => 1, :stage_objective => 18075.519546620482, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 407574.0644120193, :noise_term => 0, :node_index => 1, :stage_objective => 22685.30797585195, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 407574.06441215996, :noise_term => 2, :node_index => 1, :stage_objective => 23580.697454651905, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 407574.06441215996, :noise_term => 2, :node_index => 1, :stage_objective => 23580.69745486107, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 436050.851554533, :noise_term => 0, :node_index => 1, :stage_objective => 27420.55350546502, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 407574.06441248837, :noise_term => 2, :node_index => 1, :stage_objective => 25694.765153620345, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 407574.06441215996, :noise_term => 2, :node_index => 1, :stage_objective => 23580.69745534967, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 407574.06441215996, :noise_term => 2, :node_index => 1, :stage_objective => 23580.69745486107, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 407574.06441215996, :noise_term => 2, :node_index => 1, :stage_objective => 23580.697454861056, :objective_state => nothing, :belief => Dict(1 => 1.0))]

A problem with reading and writing the cuts to file is that the cuts have been generated from trial points of the convex model. Therefore, the policy may be arbitrarily bad at points visited by the non-convex model.

Training a non-convex model

We can also build and train a non-convex formulation of the optimal power flow problem.

The drawback of the non-convex model is that, because it is non-convex, SDDP.jl may find a sub-optimal policy. Therefore, it may over-estimate the true cost of operation.

non_convex = build_model(PowerModels.ACPPowerModel)
+ [Dict(:bellman_term => 403916.6955397577, :noise_term => 2, :node_index => 1, :stage_objective => 17578.243350140096, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 408256.24155392824, :noise_term => 2, :node_index => 1, :stage_objective => 17582.4617054468, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 407716.66783103475, :noise_term => 5, :node_index => 1, :stage_objective => 17582.46170437397, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 411494.04842447804, :noise_term => 2, :node_index => 1, :stage_objective => 18482.800515946776, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 410921.43686672486, :noise_term => 5, :node_index => 1, :stage_objective => 17586.985061974996, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 411494.0486066968, :noise_term => 2, :node_index => 1, :stage_objective => 22615.417990815142, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 411494.04860679747, :noise_term => 2, :node_index => 1, :stage_objective => 23580.69745470311, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 411494.04860679747, :noise_term => 2, :node_index => 1, :stage_objective => 23580.69745486105, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 422142.58036052994, :noise_term => 0, :node_index => 1, :stage_objective => 27420.5534954732, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 411494.048423558, :noise_term => 5, :node_index => 1, :stage_objective => 18341.12096073179, :objective_state => nothing, :belief => Dict(1 => 1.0))  …  Dict(:bellman_term => 411494.0486069164, :noise_term => 2, :node_index => 1, :stage_objective => 24727.30631257743, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 422142.58036147023, :noise_term => 0, :node_index => 1, :stage_objective => 27420.5534954732, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 411494.0486070162, :noise_term => 2, :node_index => 1, :stage_objective => 25694.765181610703, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 410921.43704711687, :noise_term => 5, :node_index => 1, :stage_objective => 17586.98506197501, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 410348.8255033482, :noise_term => 5, :node_index => 1, :stage_objective => 17586.98504838317, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 412402.3856348547, :noise_term => 0, :node_index => 1, :stage_objective => 27420.553495473203, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 411028.23663841106, :noise_term => 5, :node_index => 1, :stage_objective => 17586.985063528937, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 410455.62508898106, :noise_term => 5, :node_index => 1, :stage_objective => 17586.98505404458, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 413310.72284866474, :noise_term => 0, :node_index => 1, :stage_objective => 27420.553495473207, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 425997.74794727546, :noise_term => 0, :node_index => 1, :stage_objective => 27420.5535018928, :objective_state => nothing, :belief => Dict(1 => 1.0))]

A problem with reading and writing the cuts to file is that the cuts have been generated from trial points of the convex model. Therefore, the policy may be arbitrarily bad at points visited by the non-convex model.

Training a non-convex model

We can also build and train a non-convex formulation of the optimal power flow problem.

The drawback of the non-convex model is that, because it is non-convex, SDDP.jl may find a sub-optimal policy. Therefore, it may over-estimate the true cost of operation.

non_convex = build_model(PowerModels.ACPPowerModel)
 SDDP.train(non_convex; iteration_limit = 10)
 result = SDDP.simulate(non_convex, 1)
1-element Vector{Vector{Dict{Symbol, Any}}}:
- [Dict(:bellman_term => 378045.6020773528, :noise_term => 2, :node_index => 1, :stage_objective => 21433.375505711214, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 372633.21165766753, :noise_term => 5, :node_index => 1, :stage_objective => 21433.375505711207, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 374352.97756091674, :noise_term => 2, :node_index => 1, :stage_objective => 21433.375505711207, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 380827.51434612233, :noise_term => 0, :node_index => 1, :stage_objective => 21433.375505711218, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 387302.05113132787, :noise_term => 0, :node_index => 1, :stage_objective => 21433.37550571124, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 392051.28360087663, :noise_term => 0, :node_index => 1, :stage_objective => 23587.62076876035, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 386638.89317715605, :noise_term => 5, :node_index => 1, :stage_objective => 21433.37550571124, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 381226.5027574708, :noise_term => 5, :node_index => 1, :stage_objective => 21433.375505711225, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 387701.0395426763, :noise_term => 0, :node_index => 1, :stage_objective => 21433.375505711247, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 389420.8054459255, :noise_term => 2, :node_index => 1, :stage_objective => 21433.37550571126, :objective_state => nothing, :belief => Dict(1 => 1.0))  …  Dict(:bellman_term => 371615.302401463, :noise_term => 5, :node_index => 1, :stage_objective => 21433.37550571121, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 373335.06830471224, :noise_term => 2, :node_index => 1, :stage_objective => 21433.375505711218, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 367922.67788502696, :noise_term => 5, :node_index => 1, :stage_objective => 21433.375505711207, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 362510.28746534174, :noise_term => 5, :node_index => 1, :stage_objective => 21433.375505711196, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 368984.8242505472, :noise_term => 0, :node_index => 1, :stage_objective => 21433.375505711214, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 370704.59015379654, :noise_term => 2, :node_index => 1, :stage_objective => 21433.375505711203, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 377179.1269390022, :noise_term => 0, :node_index => 1, :stage_objective => 21433.375505711218, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 378898.89284225134, :noise_term => 2, :node_index => 1, :stage_objective => 21433.375505711225, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 373486.5024225661, :noise_term => 5, :node_index => 1, :stage_objective => 21433.375505711218, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 375206.2683258153, :noise_term => 2, :node_index => 1, :stage_objective => 21433.37550571121, :objective_state => nothing, :belief => Dict(1 => 1.0))]

Combining convex and non-convex models

To summarize, training with the convex model constructs cuts at points that may never be visited by the non-convex model, and training with the non-convex model may construct arbitrarily poor cuts because a key assumption of SDDP is convexity.

As a compromise, we can train a policy using a combination of the convex and non-convex models; we'll use the non-convex model to generate trial points on the forward pass, and we'll use the convex model to build cuts on the backward pass.

convex = build_model(PowerModels.DCPPowerModel)
A policy graph with 1 nodes.
+ [Dict(:bellman_term => 402313.54250397894, :noise_term => 2, :node_index => 1, :stage_objective => 17578.228211652255, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 409670.105316905, :noise_term => 0, :node_index => 1, :stage_objective => 17580.945418973428, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 409141.86965110013, :noise_term => 5, :node_index => 1, :stage_objective => 17580.945417703217, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 408613.6339864217, :noise_term => 5, :node_index => 1, :stage_objective => 17580.945416576527, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 408085.3983232904, :noise_term => 5, :node_index => 1, :stage_objective => 17580.94541502947, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 407557.1626652278, :noise_term => 5, :node_index => 1, :stage_objective => 17580.945409960794, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 410892.26637022005, :noise_term => 0, :node_index => 1, :stage_objective => 23493.35312465374, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 410364.0306987163, :noise_term => 5, :node_index => 1, :stage_objective => 17580.9454224555, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 425732.2537141582, :noise_term => 0, :node_index => 1, :stage_objective => 27420.55350177895, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 410892.2663703151, :noise_term => 2, :node_index => 1, :stage_objective => 24737.86087814375, :objective_state => nothing, :belief => Dict(1 => 1.0))]

Combining convex and non-convex models

To summarize, training with the convex model constructs cuts at points that may never be visited by the non-convex model, and training with the non-convex model may construct arbitrarily poor cuts because a key assumption of SDDP is convexity.

As a compromise, we can train a policy using a combination of the convex and non-convex models; we'll use the non-convex model to generate trial points on the forward pass, and we'll use the convex model to build cuts on the backward pass.

convex = build_model(PowerModels.DCPPowerModel)
A policy graph with 1 nodes.
  Node indices: 1
 
non_convex = build_model(PowerModels.ACPPowerModel)
A policy graph with 1 nodes.
  Node indices: 1
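
A hedged sketch of how the two models might be combined in training follows; it assumes the forward_pass and post_iteration_callback keywords of SDDP.train together with SDDP.AlternativeForwardPass and SDDP.AlternativePostIterationCallback, and the iteration limit is an assumption.

SDDP.train(
    convex;
    # Generate trial points on the forward pass with the non-convex model.
    forward_pass = SDDP.AlternativeForwardPass(non_convex),
    # Keep the non-convex model's cuts synchronized after each iteration.
    post_iteration_callback = SDDP.AlternativePostIterationCallback(non_convex),
    iteration_limit = 10,
)
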
@@ -113,15 +114,15 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   7.378006e+04  6.623885e+04  1.027219e-01        15   1
-         5   1.866414e+05  1.813190e+05  1.201364e+00       132   1
-         9   1.347272e+06  3.470092e+05  3.516584e+00       369   1
-        10   2.020707e+05  3.574171e+05  3.800729e+00       399   1
+         1   8.713702e+05  4.874882e+04  1.606112e-01        27   1
+         4   1.214819e+06  3.956505e+05  2.514511e+00       231   1
+         8   3.098552e+06  4.127304e+05  4.020038e+00       387   1
+        10   9.173249e+05  4.228124e+05  5.520834e+00       534   1
 -------------------------------------------------------------------
 status         : iteration_limit
-total time (s) : 3.800729e+00
-total solves   : 399
-best bound     :  3.574171e+05
-simulation ci  :  3.047251e+05 ± 2.345430e+05
+total time (s) : 5.520834e+00
+total solves   : 534
+best bound     :  4.228124e+05
+simulation ci  :  7.800745e+05 ± 6.001875e+05
 numeric issues : 0
--------------------------------------------------------------------

In practice, if we were to simulate non_convex now, we should obtain a better policy than either of the two previous approaches.

+-------------------------------------------------------------------

In practice, if we were to simulate non_convex now, we should obtain a better policy than either of the two previous approaches.

diff --git a/previews/PR810/tutorial/plotting/0a447071.svg b/previews/PR810/tutorial/plotting/429b743e.svg similarity index 84% rename from previews/PR810/tutorial/plotting/0a447071.svg rename to previews/PR810/tutorial/plotting/429b743e.svg index 06b3ee8be..0214da6bf 100644 --- a/previews/PR810/tutorial/plotting/0a447071.svg +++ b/previews/PR810/tutorial/plotting/429b743e.svg @@ -1,84 +1,84 @@ - + - + - + - + - + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + - + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/previews/PR810/tutorial/plotting/index.html b/previews/PR810/tutorial/plotting/index.html index afacee2de..625c5b3e0 100644 --- a/previews/PR810/tutorial/plotting/index.html +++ b/previews/PR810/tutorial/plotting/index.html @@ -76,14 +76,14 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 1.875000e+04 1.991887e+03 1.427794e-02 18 1 - 20 1.875000e+03 8.072917e+03 5.452800e-02 360 1 + 1 5.625000e+04 1.991887e+03 1.461601e-02 18 1 + 20 1.875000e+03 8.072917e+03 4.938006e-02 360 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 5.452800e-02 +total time (s) : 4.938006e-02 total solves : 360 best bound : 8.072917e+03 -simulation ci : 1.042034e+04 ± 3.235302e+03 +simulation ci : 8.927233e+03 ± 5.372277e+03 numeric issues : 0 ------------------------------------------------------------------- @@ -106,4 +106,4 @@ xlabel = "Stage", ylims = (0, 200), layout = (1, 2), -)Example block output

You can save this plot as a PDF using the Plots.jl function savefig:

Plots.savefig("my_picture.pdf")

Plotting the value function

You can obtain an object representing the value function of a node using SDDP.ValueFunction.

V = SDDP.ValueFunction(model[(1, 1)])
A value function for node (1, 1)

The value function can be evaluated using SDDP.evaluate.

SDDP.evaluate(V; volume = 1)
(23019.270833333332, Dict(:volume => -157.8125))

evaluate returns the height of the value function, and a subgradient with respect to the convex state variables.

You can also plot the value function using SDDP.plot

SDDP.plot(V, volume = 0:200, filename = "value_function.html")

This should open a webpage that looks like this one.

Convergence dashboard

If the text-based logging isn't to your liking, you can open a visualization of the training by passing dashboard = true to SDDP.train.

SDDP.train(model; dashboard = true)

By default, dashboard = false because there is an initial overhead associated with opening and preparing the plot.

Warning

The dashboard is experimental. There are known bugs associated with it, e.g., SDDP.jl#226.

+)Example block output

You can save this plot as a PDF using the Plots.jl function savefig:

Plots.savefig("my_picture.pdf")

Plotting the value function

You can obtain an object representing the value function of a node using SDDP.ValueFunction.

V = SDDP.ValueFunction(model[(1, 1)])
A value function for node (1, 1)

The value function can be evaluated using SDDP.evaluate.

SDDP.evaluate(V; volume = 1)
(23019.270833333332, Dict(:volume => -157.8125))

evaluate returns the height of the value function, and a subgradient with respect to the convex state variables.
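
As a hedged sketch of how the subgradient might be used (the evaluation point and arithmetic are assumptions based on the values shown above), the pair defines a first-order approximation of the value function around the evaluated point:

height, duals = SDDP.evaluate(V; volume = 1)
# Supporting-hyperplane approximation of V around volume = 1.
approx(v) = height + duals[:volume] * (v - 1)
approx(11.0)  # ≈ 23019.27 + (-157.8125) * 10 = 21441.15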

You can also plot the value function using SDDP.plot

SDDP.plot(V, volume = 0:200, filename = "value_function.html")

This should open a webpage that looks like this one.

Convergence dashboard

If the text-based logging isn't to your liking, you can open a visualization of the training by passing dashboard = true to SDDP.train.

SDDP.train(model; dashboard = true)

By default, dashboard = false because there is an initial overhead associated with opening and preparing the plot.

Warning

The dashboard is experimental. There are known bugs associated with it, e.g., SDDP.jl#226.

diff --git a/previews/PR810/tutorial/spaghetti_plot.html b/previews/PR810/tutorial/spaghetti_plot.html index dcace6c1b..dcbf7e1ff 100644 --- a/previews/PR810/tutorial/spaghetti_plot.html +++ b/previews/PR810/tutorial/spaghetti_plot.html @@ -230,7 +230,7 @@
diff --git a/previews/PR810/tutorial/warnings/index.html b/previews/PR810/tutorial/warnings/index.html index a3fa08790..c5da3195e 100644 --- a/previews/PR810/tutorial/warnings/index.html +++ b/previews/PR810/tutorial/warnings/index.html @@ -89,11 +89,11 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 6.500000e+00 3.000000e+00 3.223896e-03 6 1 - 5 3.500000e+00 3.500000e+00 6.193876e-03 30 1 + 1 6.500000e+00 3.000000e+00 3.069878e-03 6 1 + 5 3.500000e+00 3.500000e+00 5.754948e-03 30 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 6.193876e-03 +total time (s) : 5.754948e-03 total solves : 30 best bound : 3.500000e+00 simulation ci : 4.100000e+00 ± 1.176000e+00 @@ -134,13 +134,13 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 6.500000e+00 1.100000e+01 3.274918e-03 6 1 - 5 5.500000e+00 1.100000e+01 5.815029e-03 30 1 + 1 6.500000e+00 1.100000e+01 3.180027e-03 6 1 + 5 5.500000e+00 1.100000e+01 5.640030e-03 30 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 5.815029e-03 +total time (s) : 5.640030e-03 total solves : 30 best bound : 1.100000e+01 simulation ci : 5.700000e+00 ± 3.920000e-01 numeric issues : 0 --------------------------------------------------------------------

How do we tell which is more appropriate? There are a few clues that you should look out for.

  • The bound converges to a value above (if minimizing) the simulated cost of the policy. In this case, the problem is deterministic, so it is easy to tell. But you can also check by performing a Monte Carlo simulation like we did in An introduction to SDDP.jl.

  • The bound converges to different values when we change the bound. This is another clear give-away. The bound provided by the user is only used in the initial iterations. It should not change the value of the converged policy. Thus, if you don't know an appropriate value for the bound, choose an initial value, and then increase (or decrease) the value of the bound to confirm that the value of the policy doesn't change.

  • The bound converges to a value close to the bound provided by the user. This varies between models, but notice that 11.0 is quite close to 10.0 compared with 3.5 and 0.0.

+-------------------------------------------------------------------

How do we tell which is more appropriate? There are a few clues that you should look out for.

  • The bound converges to a value above (if minimizing) the simulated cost of the policy. In this case, the problem is deterministic, so it is easy to tell. But you can also check by performing a Monte Carlo simulation like we did in An introduction to SDDP.jl.

  • The bound converges to different values when we change the bound. This is another clear give-away. The bound provided by the user is only used in the initial iterations. It should not change the value of the converged policy. Thus, if you don't know an appropriate value for the bound, choose an initial value, and then increase (or decrease) the value of the bound to confirm that the value of the policy doesn't change (see the sketch after this list).

  • The bound converges to a value close to the bound provided by the user. This varies between models, but notice that 11.0 is quite close to 10.0 compared with 3.5 and 0.0.
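
Here is a hedged sketch of that second check; build_model is a hypothetical helper standing in for the model constructor used above, and the bound values and iteration limit are assumptions.

for lb in (0.0, 3.0)
    m = build_model(; lower_bound = lb)   # hypothetical constructor
    SDDP.train(m; iteration_limit = 20, print_level = 0)
    println("lower_bound = $lb => converged bound = ", SDDP.calculate_bound(m))
end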