SpineOpt.jl
setting of fixed_om_costs severely limiting modeling horizon/performance
Using the parameter fom_cost leads to an enormous increase in the _set_objective! runtime. For a modeled time of 1M (run_spineopt_standard.jl):
_create_objective_terms! without fom_cost: 19 s
_create_objective_terms! with fom_cost: 8 min 13 s (the fixed_om_costs term accounts for 8 min 10 s of these 8 min 13 s)
I'll attach two Julia outputs generated using basically the same DB, differing only in the inclusion of fom_cost: one with FOM costs included for 8 units and one without. (Also attached: JSON, SQLite.) 20231213 git issue fom cost.zip
As a comparison, I did a 6M run, which leads to significant time increases:
```
Setting objective...386066.236953 seconds (28.52 G allocations: 1.454 TiB, 0.10% gc time, 0.00% compilation time: <1% of which was recompilation)
Execution complete. Started at 2023-12-07T18:46:19.694, ended at 2023-12-12T06:26:56.829, elapsed time: 4 days, 11 hours, 40 minutes, 37 seconds, 135 milliseconds
```
So out of 107.7h total execution 107.2h were used for setting the objective terms.
This may also be related/similar to https://github.com/spine-tools/SpineOpt.jl/issues/72. Is it possible to change the definition of the parameter to allow its inclusion in longer modeled times?
@Tasqu this seems like a low hanging fruit performance-wise - but does it give us clues how to improve things more generally?
I gave it a try: commit cec478f seems to reduce the time.
Bear in mind that with this fix we assume all available units are supposed to be used in operation, i.e. a fom_cost
is charged on them no matter whether they are actually online or not.
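The effect of that assumption on the size of the objective can be sketched in a few lines of plain Julia (illustrative names only, not the actual SpineOpt internals): charging fom_cost on every available unit for the whole horizon replaces one objective term per (unit, time slice) pair with a single constant term per unit.

```julia
# Illustrative sketch, not SpineOpt code: compare the number of objective
# terms needed for the fixed-O&M cost before and after a cec478f-style fix.

# Before: fom_cost is tied to online status, so the objective carries one
# term per (unit, time slice) pair.
naive_term_count(n_units, n_timeslices) = n_units * n_timeslices

# After: every available unit is charged fom_cost for the whole horizon,
# regardless of online status, so one constant term per unit suffices.
fixed_term_count(n_units) = n_units

# The resulting constant cost contribution under that assumption:
total_fom_cost(fom_cost, n_units, horizon_hours) = fom_cost * n_units * horizon_hours

# For the 8 units from the report on an hourly 1-month (744 h) horizon:
naive_term_count(8, 744)   # 5952 time-indexed terms
fixed_term_count(8)        # 8 constant terms
```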
Hello everyone - as discussed in the Spine call on Tuesday, here are some findings and measures we took to improve performance. Below are the parameters and constraints we didn't necessarily need and how we commented them out (this is just a copy from the code; the parameters and constraints in question are prefixed with a #):
In total_costs.jl:
```julia
const invest_terms = [
    :unit_investment_costs,
    :connection_investment_costs,
    :storage_investment_costs,
    :mp_objective_penalties,
]
const op_terms = [
    :variable_om_costs,
    :fixed_om_costs,
    :taxes,
    :fuel_costs,
    # :start_up_costs,
    # :shut_down_costs,
    :objective_penalties,
    :connection_flow_costs,
    # :renewable_curtailment_costs,
    # :res_proc_costs,
    # :ramp_costs,
    # :units_on_costs,
]
const all_objective_terms = [op_terms; invest_terms]
```
In run_spineopt_standard.jl:
```julia
function _add_variables!(m; add_user_variables=m -> nothing, log_level=3)
    for add_variable! in (
            add_variable_units_available!,
            add_variable_units_on!,
            add_variable_units_started_up!,
            add_variable_units_shut_down!,
            add_variable_unit_flow!,
            add_variable_unit_flow_op!,
            add_variable_unit_flow_op_active!,
            add_variable_connection_flow!,
            # add_variable_connection_intact_flow!,
            add_variable_connections_invested!,
            add_variable_connections_invested_available!,
            add_variable_connections_decommissioned!,
            add_variable_storages_invested!,
            add_variable_storages_invested_available!,
            add_variable_storages_decommissioned!,
            add_variable_node_state!,
            add_variable_node_slack_pos!,
            add_variable_node_slack_neg!,
            add_variable_node_injection!,
            add_variable_units_invested!,
            add_variable_units_invested_available!,
            add_variable_units_mothballed!,
            # add_variable_ramp_up_unit_flow!,
            # add_variable_start_up_unit_flow!,
            # add_variable_nonspin_units_started_up!,
            # add_variable_nonspin_ramp_up_unit_flow!,
            # add_variable_ramp_down_unit_flow!,
            # add_variable_shut_down_unit_flow!,
            # add_variable_nonspin_units_shut_down!,
            # add_variable_nonspin_ramp_down_unit_flow!,
            # add_variable_node_pressure!,
            # add_variable_node_voltage_angle!,
            # add_variable_binary_gas_connection_flow!,
        )
```
We've been starting to look into the performance bottlenecks in SpineOpt in general, since they are occasionally becoming actual barriers for use at this point. Hopefully @manuelma and I will find the time to properly look into things during the following months, but based on our preliminary suspicions, significant speedups might require quite large overhauls of some of the model interiors, which will take time to implement and test properly.
One of the things I think we should do is implement higher-level switches ... for example, if particular parameters are not defined or used at all, there is no point looping through all the indices to figure that out. During the early user call, users had to resort to the effective measure of commenting out constraints that weren't needed. We should be able to figure out which constraints are not used without having to loop through all the indices. Failing that, we should provide method parameters to switch constraints on and off regardless of which parameters are or aren't defined.
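Such a high-level switch might look roughly like this (a minimal sketch with made-up names, not the actual SpineOpt API): check once whether a constraint family's driving parameter has any values at all, and skip the whole builder if not, instead of looping over every index to discover emptiness.

```julia
# Illustrative sketch of a high-level constraint switch.
# Hypothetical parameter store: constraint family => per-object parameter values.
param_values = Dict(
    :ramp_costs => Dict{Symbol,Float64}(),             # never defined by the user
    :fuel_costs => Dict(:unit_a => 3.5, :unit_b => 2.1),
)

# The switch: a constraint family is active only if at least one value exists.
is_active(name) = !isempty(get(param_values, name, Dict()))

function add_constraints!(model::Vector{String})
    for name in (:ramp_costs, :fuel_costs)
        is_active(name) || continue                   # skip the whole family cheaply
        push!(model, "constraints for $(name)")       # stand-in for the real builder
    end
    model
end

add_constraints!(String[])   # only the fuel_costs constraints get built
```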
This would all be relatively easy stuff to do and shouldn't require @manuelma, whose time is very constrained at the moment.
We also tested whether solving a small model before the main model (warm start) would improve performance. The main solve is indeed faster compared to a cold-start direct solve. Nevertheless, more time is lost presolving the small model than is gained, so it does not seem to be a suitable solution. Attached you can find the data from some comparative test runs. Also, this is the tool definition for runSpineOpt.jl:
@OliverLinsel that's consistent with what we have found. However, this isn't the same as creating a pre-compiled system image for SpineOpt. If you do this, then you get the improved performance even the first time. It's worth experimenting. There is a thread discussing how to do this here: https://github.com/spine-tools/Spine-Toolbox/issues/2451
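For reference, building such a system image is typically done with PackageCompiler.jl, roughly as below (a sketch only, assuming SpineOpt is installed in the active environment; the output path and precompile script name are illustrative - see the linked Toolbox issue for the full discussion):

```julia
# Sketch: bake SpineOpt's compiled code into a custom system image so it is
# reused on every Julia start. Requires the PackageCompiler package.
using PackageCompiler

create_sysimage(
    ["SpineOpt"];                                      # packages to include
    sysimage_path = "spineopt_sysimage.so",            # illustrative output path
    precompile_execution_file = "precompile_run.jl",   # optional script exercising typical calls
)

# Then start Julia with:  julia --sysimage spineopt_sysimage.so
```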
@OliverLinsel it looks like you're using an old version of SpineOpt. You still have the extra ramp flow variables and the ramp costs, which are not in latest master. It would be good to know if the problems arise in latest master, if you get the chance?