message_ix
Correct MACRO GDP reporting & update docs
This PR:
- Corrects the reporting of GDP, which includes the deduction of `trade_costs` coming from MESSAGE.
- Updates the MESSAGEix-based `total_cost` parameter in the iteration with MACRO to include the trade balance.
- Extends the documentation of the capital constraint section in MACRO to explain the reasoning behind the changes.

PR #422 can be closed after merging this.
- [x] Continuous integration checks all ✅
- [x] Add or expand tests; coverage checks both ✅
- [x] Add, expand, or update documentation.
- [ ] Update release notes.
Codecov Report
All modified and coverable lines are covered by tests ✅
Project coverage is 95.4%. Comparing base (aabb789) to head (5dfd483).
Additional details and impacted files

    @@          Coverage Diff          @@
    ##            main    #430   +/-  ##
    =====================================
      Coverage   95.4%   95.4%
    =====================================
      Files         46      46
      Lines       4354    4354
    =====================================
      Hits        4156    4156
      Misses       198     198
    =====================================
Hi @GamzeUnlu95 - I understand there are still some requirements left before this can be merged. Would you say it is ready to try out on a trial basis?
Hi folks, I wanted to report back after having tried to use this PR in some of my ongoing work.
Observed issue: Running a mitigation scenario on top of a calibrated scenario causes a MESSAGE LP infeasibility within the MESSAGE-MACRO iteration.
Suggestion: Before merging this, run a test case on the current ENGAGE set up where MACRO is recalibrated with these changes and confirm that mitigation cases can still converge.
More specifics:
It's hard for me to pinpoint exactly why this PR causes my failure, because I am combining a number of updates: I start from the new R15 model, apply 1-year time steps for early periods, and calibrate using this MACRO branch. I did not observe any issues while creating baselines or non-mitigation scenarios.
When I reverted to the current main, I was able to calibrate and run mitigation scenarios without issue, so I strongly suspect these changes were the culprit.
> MESSAGE LP infeasibility within MESSAGE-MACRO iteration
Can you please upload a snippet from the GAMS .lst file to indicate which variables are infeasible?
That file has already been overwritten. I can try later to reproduce it; can you tell me what to `grep` for?
But this is a good opportunity to plug our need for better tooling when dealing with infeasibilities. It would be great to have some first-class (or even second-class) support for dumping out, in a persistable form, why a model was infeasible at the end of a run.
> can you tell me what to `grep` for?
https://github.com/iiasa/message_ix/issues/418#issuecomment-720388779 for example
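For future reference, GAMS listing files flag infeasible rows and columns with "INFES" in the solution listing and report "Infeasible" in the model-status line of the solve summary, so those strings are a reasonable starting point. A minimal sketch of such a scan; the file excerpt and all names here are invented for illustration, not taken from an actual MESSAGE run:

```python
import re

# Invented excerpt of a GAMS .lst file, for demonstration only.
LST_EXCERPT = """\
---- EQU COMMODITY_BALANCE
R11_NAM.coal.2030   .   -3.5   +INF   EPS  INFES
**** SOLVER STATUS 1 Normal Completion
**** MODEL STATUS  4 Infeasible
"""

def find_infeasibilities(text):
    """Return (line_number, line) pairs that flag infeasibility."""
    pattern = re.compile(r"INFES|Infeasible")
    return [(i, line)
            for i, line in enumerate(text.splitlines(), 1)
            if pattern.search(line)]

for lineno, line in find_infeasibilities(LST_EXCERPT):
    print(lineno, line)
```

The same two patterns work directly with `grep -nE 'INFES|Infeasible' MESSAGE_run.lst` on the command line.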
I have pulled these changes onto the latest main branch and updated the macro-calibration xlsx, but then encountered errors:

    **** Exec Error at line 544: log: FUNC SINGULAR: x = 0
    **** Exec Error at line 544: overflow in * operation (mulop)

Has anyone encountered such issues? Was this PR already tested with the calibration? I have attached the macro_run.lst file as a .txt file in case it helps with debugging: MACRO_run.txt
> I have pulled these changes onto the latest main branch, updated the macro-calibration xlsx but then encountered errors.
> `**** Exec Error at line 544: log: FUNC SINGULAR: x = 0` `**** Exec Error at line 544: overflow in * operation (mulop)`
> Has anyone encountered such issues? Was this PR already tested with the calibration? I have added the macro_run.lst file as a .txt file if this helps with debugging. MACRO_run.txt
Thanks for adding the .lst file. I attempted to test this PR with calibration a couple of months ago and at that time there was an issue with the macro-calibration xlsx in the main branch and I could not complete it. I need to find and allocate time soon to test this PR again and finalize it.
I tested this PR with the ENGAGE setup by using the model ENGAGE_SSP2_v4.1.7_T4.5_r3.1_test. I did not experience any issues with the calibration of the baseline. I ran the mitigation case with a 600 budget and was able to get a solution for EN_NPi2020_600. However, in Step 3 of EN_NPi2020_400, in the third iteration, there is an execution error (division by eps), as shown in the attached MESSAGE-MACRO_run.log and MESSAGE-MACRO_run.lst files. I am not sure whether this is directly related to the changes made in this PR. The line indicated in the error message is the following:
    price_diff_rel(iteration,node_macro,sector,year)$( price_init(node_macro,sector,year) ) =
        price_diff_abs(iteration,node_macro,sector,year) / price_init(node_macro,sector,year)
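One possible explanation for the "division by eps" error: in GAMS, a value stored as EPS is treated as present, so the `$( price_init(...) )` dollar condition passes, but EPS contributes zero in arithmetic, so the division still fails. This toy Python analogy (all names invented; not message_ix code) mimics that behaviour:

```python
# Toy analogy of GAMS EPS semantics: EPS passes a dollar-condition
# existence check but behaves like 0.0 in arithmetic.
GAMS_EPS = object()  # sentinel standing in for GAMS's EPS special value

def numeric(x):
    """Arithmetic value: EPS behaves like 0.0 in computations."""
    return 0.0 if x is GAMS_EPS else float(x)

def dollar(x):
    """Dollar-condition truth: EPS counts as 'present' (nonzero)."""
    return x is GAMS_EPS or bool(x)

def price_diff_rel(price_diff_abs, price_init):
    """Mimic: price_diff_rel$( price_init ) = price_diff_abs / price_init."""
    if not dollar(price_init):
        return 0.0                                   # guard skips assignment
    return price_diff_abs / numeric(price_init)      # may divide by zero

# A genuine zero is filtered out by the guard...
print(price_diff_rel(12.5, 0.0))
# ...but EPS sneaks past it, reproducing the "division by eps" failure
# (GAMS aborts with an execution error instead of raising an exception).
try:
    price_diff_rel(12.5, GAMS_EPS)
except ZeroDivisionError as exc:
    print("guard failed:", exc)
```

If that diagnosis is right, tightening the guard to exclude EPS-valued entries of `price_init` (or cleaning EPS values before the iteration) might avoid the abort.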
At today's message meeting, this PR was suggested to me as a possible fix for #752. As far as I can tell, this PR does not enable usage of user-defined units. However, I rebased this onto current main so if you want to run any tests again to see if this can be merged, this might be a good time, @GamzeUnlu95, @OFR-IIASA, @gidden.
This PR may even predate the PR template, but it is missing one item that appears there: "Update release notes". I've added this to the PR description above.
These notes are especially important when the behaviour and outputs of the core GAMS code change, as in this PR. Users must have an opportunity to notice and understand how their model outputs may be affected by a new release.
As discussed in yesterday's weekly meeting, we are holding off on merging these changes because we don't trust our current test suite to adequately verify that they won't break anything. As @gidden suggested (please correct me if I misunderstood), we should set up a testing workflow that runs a full emission baseline and a peak-budget scenario. Since these runs might take a while, we would ideally keep them optional and either run them as part of nightly tests or enable them on PRs that affect the GAMS code. This would allow us to trust the changes here.
@Jihoon noted that the line currently causing the test failure reported by @GamzeUnlu95 had caused issues before, so was removed locally in his installation without any noticeable negative consequences. No one else in the room knew what this line might be good for, so in a separate PR, we could try to remove it -- and test that with the new test cases proposed above.
Thanks @glatterf42. My suggestion is to follow @khaeru's example here and combine this with @OFR-IIASA's ENGAGE process script here.
I would suggest to run
- a baseline scenario
- a full century budget scenario
- a peak-budget scenario
And compare a few key metrics in the output Excel file (e.g., CO2 emissions, Kyoto emissions, GDP, primary energy, final energy).
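The comparison step could be as simple as a tolerance check over those metrics between a reference run (current main) and a run on this branch. A hedged sketch; the metric names, values, and tolerance below are placeholders, not project conventions:

```python
import math

# Placeholder list of metrics to compare between two runs.
KEY_METRICS = ["CO2 emissions", "Kyoto emissions", "GDP",
               "Primary Energy", "Final Energy"]

def compare_metrics(reference, candidate, rel_tol=1e-3):
    """Return the metrics whose values drifted beyond rel_tol."""
    return [name for name in KEY_METRICS
            if not math.isclose(reference[name], candidate[name],
                                rel_tol=rel_tol)]

# Invented numbers: the candidate run shifts GDP, as this PR's
# reporting change plausibly would.
ref = {"CO2 emissions": 35.1, "Kyoto emissions": 40.2, "GDP": 105.0,
       "Primary Energy": 580.0, "Final Energy": 400.0}
cand = dict(ref, GDP=104.2)

print(compare_metrics(ref, cand))  # -> ['GDP']
```

In practice the two dictionaries would be read from the reported Excel outputs of the baseline, full-century-budget, and peak-budget runs, and any drift beyond the tolerance would fail the (nightly or GAMS-touching) CI job.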
> My suggestion is to follow @khaeru's example here and combine this with @OFR-IIASA's ENGAGE process script here.
While I generally agree with this idea directionally and the basic observation underlying it, to implement these ideas would involve, conservatively, at least 2 person-weeks of work. Either we accept that this PR sits until someone is assigned/freed-up to spend that time, or we find a simpler, manual check (and someone to do it) that can assure that this PR will not change [expected behaviour].
In order to avoid derailing this particular PR with that broader discussion, I'll open a new issue.