2-platform local clone() exhausts memory
Code sample or context
- Clone a large (e.g. ENGAGE) `message_ix.Scenario` into a ‘local’ platform (i.e. using JDBCBackend with driver="hsqldb"); a set-up sketch follows this list.
- Clone from one local platform to another.
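For reference, a minimal sketch of the two-platform setup assumed in these steps. The database paths are hypothetical; the platforms could equally be registered once under names like "local-1"/"local-2" in the ixmp config file and then opened by name:

```python
import ixmp

# Two 'local' platforms backed by JDBCBackend with the HSQLDB driver.
# The paths below are placeholders for wherever the database files live.
mp1 = ixmp.Platform(backend="jdbc", driver="hsqldb", path="/path/to/local-1/db")
mp2 = ixmp.Platform(backend="jdbc", driver="hsqldb", path="/path/to/local-2/db")
```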
Expected result
Clone works.
Problem description
The process is killed for exceeding memory limits.
This appears to only occur under certain conditions:
- On my local machine. I have 16 GB of RAM and use jvmargs=["-Xmx12G"] for the JDBCBackend (see the sketch after this list).
- When using `message_data.model.transport`. It does work when using a few commands in the terminal, like:

```python
from ixmp import Platform
from message_ix import Scenario

s, mp1 = Scenario.from_url("ixmp://local-1/…")
mp2 = Platform("local-2")
s.clone(platform=mp2)
mp2.close_db()
```
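As a sketch of how the 12 GiB heap limit mentioned above can be passed through to JDBCBackend (assuming "local-1" is a platform already registered in the ixmp config file):

```python
import ixmp

# jvmargs is forwarded to the JVM that JDBCBackend starts.
# NB the JVM is started only once per Python process, so these arguments
#    only take effect for the first Platform created in that process.
mp = ixmp.Platform("local-1", jvmargs=["-Xmx12G"])
```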
Versions
Latest main.
Discussion
- The on-disk size of the "local-1" platform, containing 3 such large scenarios, is about 342 MiB, so each scenario is about 100 MiB compressed.
- 100 MiB of data should probably not take up >12 GiB (or >6 GiB) in memory.
- This may be because the Java code in `ixmp_source` loads all contents of the Scenario before performing the clone.
- If so, one response could be to clone item-wise (see the sketch after this list).
- The responses could be:
  1. Modify the Java code to disable this.
  2. Reimplement cloning in Python.
- (2) will eventually be required in order to support backends other than JDBCBackend.
- Doing (2) would reduce the scope and ease maintenance/replacement of JDBCBackend.
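To illustrate item-wise cloning via option (2), here is a rough sketch using only the public Python API. It is not the actual implementation: the source URL and platform names are hypothetical, and scalar parameters, variables/equations, time-series data, the solution, and units/regions that would need to be registered on the target platform are all ignored.

```python
from ixmp import Platform
from message_ix import Scenario

# Hypothetical source URL and target platform name
source, mp1 = Scenario.from_url("ixmp://local-1/model_name/scenario_name")
mp2 = Platform("local-2")

# New, empty scenario on the target platform; created in a checked-out state
target = Scenario(mp2, model=source.model, scenario=source.scenario, version="new")

# Copy sets one item at a time, basic (0-dimensional) sets before indexed ones,
# so that only a single item's data is held in memory at any moment
for name in sorted(source.set_list(), key=lambda n: len(source.idx_sets(n))):
    if name not in target.set_list():
        target.init_set(name, idx_sets=source.idx_sets(name))
    data = source.set(name)
    if len(data):
        target.add_set(name, data if len(source.idx_sets(name)) else list(data))

# Copy (non-scalar) parameters one item at a time
for name in source.par_list():
    idx = source.idx_sets(name)
    if not idx:
        continue  # scalars would need init_scalar()/change_scalar() instead
    if name not in target.par_list():
        target.init_par(name, idx_sets=idx)
    data = source.par(name)
    if len(data):
        target.add_par(name, data)

target.commit("Item-wise copy from local-1")
mp1.close_db()
mp2.close_db()
```

Because each item is read, written, and released in turn, peak memory use should stay near the size of the largest single item rather than the whole Scenario.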
I can confirm the issue. I have tried to do the same and encountered that case (1) as described above doesn't work. In addition, I think it is also worth mentioning that the out-of-memory failure only occurs after a total of 70 minutes.