Created neutral density example notebook
This notebook still uses the cookbook, and could be parallelised for efficiency (but I don't know how to do the latter).
Done these fixes, but I might wait until I get access to **xp65** to convert to intake, and maybe until pygamma gets into the conda environments :) Thanks Navid
We'll put pygamma_n into the conda environment -- that should be easy!
~Can you link this PR with the issue by editing the first comment in the PR and write Closes #XYZ where XYZ is the issue number?~
The issue is linked -- never mind!
I don't know how/why I've closed this pull request. To finish the notebook I need to choose between:
- Using the cookbook instead of intake, to keep it model agnostic
- Using intake, but working only with ACCESS-OM2

@navidcy what do you think is best? I can also wait until either of those is fixed (by NCI?)
I would stick with the cookbook until we have a MOM6 run in the catalogue (i.e. https://github.com/ACCESS-NRI/access-nri-intake-catalog/issues/175)
It looks like you deleted/renamed the "master" branch in your fork @julia-neme ? I guess you were doing this to try and get your fork up to date.
Because the PR is trying to merge into COSIMA:main from julia-neme:master this has had the side-effect of closing the pull request (I assume) because julia-neme:master doesn't exist anymore.
You can either make a new branch in your fork called 'master' (using commit https://github.com/COSIMA/cosima-recipes/commit/0193c90b7aebbbbf73feb74434ecc12f9d7ba212) and then we can reopen this PR. Or make a new branch from that commit but that would need a new PR.
Thanks a lot Anton!!
@navidcy I have incorporated all changes, and commented out the loading of MOM6 using intake. Once it is indexed in the catalog we can just uncomment it :)
Can we just not include commented out code at the moment? Just post the code here if you want to have it 'saved' somewhere. Commented code is not good practice, we'll include it when it's tested -- at the moment there is a bit of a "hope" that it'll work, but not actual testing?
Well, it works loading MOM6 with the cookbook :) It's just that we've decided to go with intake? So do you want me to delete all lines that pertain to MOM6?
> Can we just not include commented out code at the moment? Just post the code here if you want to have it 'saved' somewhere. Commented code is not good practice, we'll include it when it's tested -- at the moment there is a bit of a "hope" that it'll work, but not actual testing?
I disagree with @navidcy on this point.
Ideally, the code would be included and automatically tested, but, that's not currently possible for two reasons:
- the intake catalogue doesn't have MOM6 simulations in it yet; and
- we don't have automated testing of any notebooks yet.
Given this, I think it is appropriate to include the commented code.
OK fair!
Given the above, why don't we do the following: load the MOM6 output using cosima-cookbook. This way we'll include code that actually works and produces a figure, instead of just commented-out code. And when the ACCESS-NRI catalog includes MOM6 output we can just change the cosima_cookbook.get_var line.
@julia-neme this notebook needs ~1hr to run????
Could we make it shorter? It's only an example -- doesn't need to be publication ready...
@jemmajeffree is this a case where `xarray.apply_ufunc` could be really useful?
I'm referring to the cells that do
```python
for time_step in range(0, Nt):
    for longitude in range(0, Nx):
        do_stuff()
```
that make this notebook really really slow...
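For reference, a minimal sketch of what the `apply_ufunc` replacement could look like (the data, dimension names, and `do_stuff` body here are all hypothetical placeholders, not the notebook's actual computation): with `vectorize=True`, `apply_ufunc` does the time/longitude looping internally.

```python
import numpy as np
import xarray as xr

# hypothetical stand-in for the notebook's data: a (time, longitude, depth) field
da = xr.DataArray(
    np.arange(24.0).reshape(2, 3, 4),
    dims=("time", "longitude", "depth"),
)

def do_stuff(profile):
    # placeholder for the per-column computation done inside the loops;
    # receives one 1-D depth profile as a numpy array, returns a scalar
    return profile.sum()

# apply_ufunc loops over time and longitude for us (vectorize=True),
# feeding do_stuff one depth profile at a time
result = xr.apply_ufunc(
    do_stuff,
    da,
    input_core_dims=[["depth"]],
    vectorize=True,
)
print(result.dims)  # ('time', 'longitude')
```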
> @jemmajeffree is this a case where `xarray.apply_ufunc` could be really useful?
Yes! Absolutely, it looks like a perfect example. I'll try to get to it in the next week
Can anyone at ANU walk me through how to make changes within the pull request? (This is the main thing I've been avoiding with regards to fixing up the apply_ufuncs example, sorry.)
Something to note: I can probably speed it up (don't know by how much until I poke it), but it will come at a cost of less intuitive and readable code. How much do we/this repo value each of these two priorities?
You can look at how to review here
After making the changes I think you can push your modifications back here. I'm having a go at parallelising this. It's running in 1min now, so I guess that's good enough?
Parallelising is also very readable, just one line of code.
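In case it's useful, the "one line" is roughly the following (toy data standing in for the notebook's; this sketch assumes dask is installed alongside xarray):

```python
import numpy as np
import xarray as xr

# toy stand-in for the notebook's data
da = xr.DataArray(np.full((4, 6), 4.0), dims=("time", "longitude"))

# the one-liner: chunking turns later operations into lazy dask tasks
# that can run in parallel, one task per chunk
chunked = da.chunk({"time": 1})

# operations stay lazy until .compute() triggers the parallel evaluation
result = (np.sqrt(chunked) + 1.0).compute()
```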
@jemmajeffree just make changes and commit to that branch and push and your PR will be updated
I can help you via zoom, even though you requested ANU-onsite help above ;)
> After making the changes I think you can push your modifications back here. I'm having a go at parallelising this. It's running in 1min now, so I guess that's good enough?
> Parallelising is also very readable, just one line of code.
1min???? before it was 1hr....! YES that's very OK!!! That's great!
Yep! Do we still want the ufuncs or the parallelisation?
I'm happy to approve something that runs in 1min... But I haven't seen anything pushed... :)
I don't know how to integrate the "tweaks" you've pushed, haha. Trying to figure that out without losing my updates.
I can help you with that ;)
FYI: I did make a few changes in the model_args so that fewer `if model == 'momX'` statements are needed.
> Something to note: I can probably speed it up (don't know by how much until I poke it), but it will come at a cost of less intuitive and readable code. How much do we/this repo value each of these two priorities?
It's a balance...
I'd say a 1.5x-2x speedup at the cost of less intuitive/readable code is not really worth it. But a 30x or 100x speedup is worth it!
> Yep! Do we still want the ufuncs or the parallelisation?
If you can parallelise without ufuncs, do that. apply_ufunc is a way of parallelising functions that refuse to do so natively, but you're better off parallelising within a function if it supports it.
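To illustrate with a toy example (hypothetical data and computation, not the notebook's): when the operation can be written as whole-array arithmetic, it vectorises natively and no apply_ufunc is needed at all.

```python
import numpy as np
import xarray as xr

# hypothetical field
da = xr.DataArray(np.arange(6.0).reshape(2, 3), dims=("time", "longitude"))

# loop version: one point at a time, very slow on real data
loop_result = xr.zeros_like(da)
for t in range(da.sizes["time"]):
    for i in range(da.sizes["longitude"]):
        loop_result[t, i] = np.sqrt(da[t, i]) + 1.0

# native version: the whole-array expression vectorises internally
fast_result = np.sqrt(da) + 1.0

print(bool((loop_result == fast_result).all()))  # True
```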
I think it's cleaner done with ufunc but this is now great! The whole notebook runs in 4min instead of 2hrs so THAT'S GREAT.
I suggest opening an issue and pointing to this notebook saying that parallelization could be done via xarray.apply_ufunc and this is something somebody could tackle in the future ;)
@julia-neme you may merge this whenever you feel like it!