Egret
Failure when running pytest on models/tests/test_unit_commitment.py
With Pyomo master (as of mid last week) and Egret master (as of today), I am seeing the following when running "pytest test_unit_commitment.py" from the "models/tests" sub-directory. I am using Gurobi 9.0.2, but the failure also occurs with CBC 2.10.5.
The full output is further below, but the short version is that the objective value for the model produced by create_compact_unit_commitment is well outside tolerance (off by roughly 7 absolute cost units).
Could someone - probably @bknueven - attempt to replicate? Thanks!
Long version of output is as follows:
test_unit_commitment.py F.................... [100%]
================================================== FAILURES ==================================================
___________________________________________ test_int_all_uc_models ___________________________________________
@unittest.skipUnless(comm_mip_avail, "Neither Gurobi or CPLEX solver is available")
def test_int_all_uc_models():
    _test_uc_model(create_tight_unit_commitment_model)
    _test_uc_model(create_compact_unit_commitment_model)
test_unit_commitment.py:86:
uc_model = <function create_compact_unit_commitment_model at 0x10fcba710>, relax = False test_objvals = [4201915.017320504, 5454367.7670904165, 5999272.361123627, 5461120.3231092375, 6062406.32677043]
def _test_uc_model(uc_model, relax=False, test_objvals=test_int_objvals):
    for test_case, ref_objval in zip(test_cases, test_objvals):
        md_dict = json.load(open(test_case, 'r'))
        md = ModelData(md_dict)
        model = uc_model(md, relaxed=relax)
        opt = SolverFactory(test_solver)
        _set_options(opt, mipgap=0.0)
        if isinstance(opt, PersistentSolver):
            opt.set_instance(model)
        result = opt.solve(model, tee=False)
        assert result.solver.termination_condition == TerminationCondition.optimal
        assert math.isclose(ref_objval, result.problem.upper_bound, rel_tol=rel_tol)
E AssertionError: assert False
E + where False =
test_unit_commitment.py:72: AssertionError
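For context on why a roughly 7-unit absolute discrepancy trips the final assertion: math.isclose compares the difference against rel_tol scaled by the magnitude of the operands. A quick sketch (the actual rel_tol value used by the test suite is not shown above; 1e-6 and 1e-5 here are assumed values for illustration):

```python
import math

# Reference objective on the scale of the test data, and a hypothetical
# solver result off by 7 absolute cost units (illustrative values only).
ref_objval = 4201915.017320504
solver_objval = ref_objval + 7.0

# math.isclose passes when |a - b| <= max(rel_tol * max(|a|, |b|), abs_tol).
# At a ~4.2e6 objective, rel_tol=1e-6 allows only ~4.2 units of slack, so a
# 7-unit discrepancy fails; rel_tol=1e-5 allows ~42 units and would pass.
print(math.isclose(ref_objval, solver_objval, rel_tol=1e-6))  # False
print(math.isclose(ref_objval, solver_objval, rel_tol=1e-5))  # True
```

So whether a fixed absolute discrepancy passes depends entirely on the magnitude of the objective and the relative tolerance chosen.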
To be clear: I'm happy to dive in and gather more diagnostics, but I would like replication confirmation from someone else before doing so.
@jeanpaulwatson I just ran these tests, and they all passed for me... I was also using Gurobi 9.0.2. Hopefully @bknueven can also run this test.
I just updated Pyomo master. I'm not sure if that is making a difference.
Good suggestion regarding updating Pyomo master. I just did the same, and at least the test I was seeing as previously failing is now passing. That's the good news. The bad news is that a new test is failing, this time a UC-with-transmission test. Output trace is below.
@michaelbynum: I would feel much better if I understood what kind of change in Pyomo master might have resulted in an incorrect model being generated - which I think is the only viable explanation for what I was observing (given the very tight tolerance on solution comparisons and the observed absolute cost discrepancy in solutions). Any thoughts? I can't quite convince myself that changes in Pyomo Set behaviors might explain things.
@michaelbynum and @bknueven: Can one or both of you re-pull Pyomo master and re-run the tests with Gurobi 9.0.2, to see whether you also now see the UC-with-transmission test failure?
Finally, for what it's worth, I am installing Pyomo master into my conda virtual environment by executing "python setup.py develop". That should not matter, but I thought I'd mention it just in case.
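To help rule out an environment mismatch, one quick way to confirm which Pyomo version a virtual environment actually resolves is via the standard library's importlib.metadata (the helper name below is mine, not part of Egret or Pyomo):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(dist_name):
    """Return the installed version string for a distribution, or None
    if the distribution is not installed in the active environment."""
    try:
        return version(dist_name)
    except PackageNotFoundError:
        return None

# Prints whatever Pyomo version the active environment resolves
# (None if Pyomo is not installed here); develop/editable installs
# also expose their metadata this way.
print(installed_version("pyomo"))
```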
Test output I am now observing is as follows:
test_unit_commitment.py ................F.... [100%]
================================================= FAILURES =================================================
_______________________________________ test_uc_transmission_models ________________________________________
def test_uc_transmission_models():
    ## the network tests can optionally specify some kwargs so we can pass them into solve_unit_commitment
    tc_networks = {'btheta_power_flow': [dict()],
                   'ptdf_power_flow': [{'ptdf_options': {'lazy': False}}, dict()],
                   'power_balance_constraints': [dict()],
                   }
    no_network = 'copperplate_power_flow'
    test_names = ['tiny_uc_tc', 'tiny_uc_tc_2', 'tiny_uc_tc_3', 'tiny_uc_tc_4', 'tiny_uc_tc_5', 'tiny_uc_tc_6']
    ## based on tiny_uc, tiny_uc_tc_2 has an interface, tiny_uc_tc_3 has a relaxed interface, tiny_uc_tc_4 has a relaxed flow limit
    for test_name in test_names:
        input_json_file_name = os.path.join(current_dir, 'uc_test_instances', test_name+'.json')
        md_in = ModelData.read(input_json_file_name)
        for tc in tc_networks:
            for kwargs in tc_networks[tc]:
                md_results = solve_unit_commitment(md_in, solver=test_solver, mipgap=0.0, uc_model_generator=_make_get_dcopf_uc_model(tc), **kwargs)
                reference_json_file_name = os.path.join(current_dir, 'uc_test_instances', test_name+'_results.json')
                md_reference = ModelData.read(reference_json_file_name)
                assert math.isclose(md_reference.data['system']['total_cost'], md_results.data['system']['total_cost'], rel_tol=rel_tol)
E assert False
E + where False =
test_unit_commitment.py:184: AssertionError
@jeanpaulwatson I also ran these tests, though with xpress persistent. They pass for me as well.
I think many of the unit commitment test problems are not very numerically stable. That could explain why you're getting slightly different optimal objective values with different Pyomo versions.
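If numerical instability is the culprit, one way to make these failures more informative is to report both the absolute and relative gap alongside the pass/fail verdict. A hypothetical diagnostic helper (not part of the Egret test suite; the values below are made up to match the scale of the objectives above):

```python
import math

def objective_gap_report(ref, actual, rel_tol):
    """Summarize how far `actual` is from `ref`, both absolutely
    and relative to the larger magnitude, plus the isclose verdict."""
    abs_gap = abs(ref - actual)
    rel_gap = abs_gap / max(abs(ref), abs(actual))
    return {
        'abs_gap': abs_gap,
        'rel_gap': rel_gap,
        'within_tol': math.isclose(ref, actual, rel_tol=rel_tol),
    }

# Illustrative numbers: a ~5.45e6 objective off by 7 cost units.
print(objective_gap_report(5454367.767, 5454374.767, rel_tol=1e-6))
```

Printing a report like this on failure would make it immediately clear whether a failure is a genuine modeling regression or just a tolerance that is too tight for the problem's conditioning.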