Initial code for wind field reconstruction from vorticity and divergence
Description Of Changes
Checklist
- [ ] Complete draft of the code for the issue - https://github.com/Unidata/MetPy/issues/3207
- [ ] Tests are being added
If you're having trouble coming up with tests: perhaps the curl of a vertical vector field for a non-divergent wind, and the gradient of a scalar field for an irrotational wind? That construction should minimize the differences between the reconstructed field and the original one (I think just a constant).
@DWesl Thanks for your interest. Do you have a reference for that I can check out? I would love to come up with more tests as you mention.
Do you have a reference for that I can check out?
For the curl of a vector field being non-divergent and the gradient of a scalar field being irrotational? I'll see if I can dig up my vector calc book. (EDIT: Just remembered the internet exists. Here are university professors' notes from Whitman, UToronto, and Lamar, as well as a page from Khan Academy that mentions the curl of the gradient.)
For two irrotational fields with the same divergence differing by at most a constant, or two non-divergent fields with the same curl also differing by at most a constant? I'm not sure I have a reference, beyond a general idea that reconstructing a field from vorticity and divergence is possible.
Adding the gradient of a scalar function to any vector field will result in a function with the same curl as the original, and likewise adding the curl of a vector function to a vector field will result in a function with the same divergence as the original. I was mostly trying to force those functions to be simple. Of course, this is related to indefinite integration, so there will still be a constant around, unless you specify the boundary conditions to turn it into a definite integral.
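For reference (my restatement, not from the thread), the identities being used here are
$$\nabla \times (\nabla \phi) = \mathbf{0}, \qquad \nabla \cdot (\nabla \times \mathbf{A}) = 0,$$
so that
$$\nabla \times (\mathbf{v} + \nabla \phi) = \nabla \times \mathbf{v}, \qquad \nabla \cdot (\mathbf{v} + \nabla \times \mathbf{A}) = \nabla \cdot \mathbf{v}.$$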
I would love to come up with more tests as you mention.
Probably the simpler way, now that I think of it, would be to start with a vorticity or divergence field, recover a velocity field, and then check that the velocity field has the correct vorticity or divergence. I think the big thing there would be ensuring the test resolves the features of the initial field well. For example, `sin(x) + cos(y/2)` should probably have grid spacing no larger than $\pi/4$ and cover at least $2\pi$ in each direction; it would work better with smaller grid spacing, and might be easier to visually check with a larger extent, that is, `x, y = np.mgrid[0:15:0.25, 0:15:0.25]` would probably work well.
I'd suggest an example for the gallery, except the examples I can think of are for streamfunction and velocity potential, which are related but different (integrals of velocity instead of derivatives).
So to summarize what you have written: using my reconstructed non-divergent wind velocities, calculate the vorticity again using MetPy's vorticity function, and secondly, using the reconstructed irrotational winds, calculate the divergence.
Do a diff between the original vorticity and the reconstructed vorticity. Ditto for divergence.
Yep!
There are fancy ways to use most of the same code for both tests, but that's the core.
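As a rough illustration of sharing most of the code between the two tests (a sketch only: `nondivergent_wind_from_vorticity` and `irrotational_wind_from_divergence` below are placeholder names for whatever the PR's inversion functions end up being called):

```python
import numpy as np
import metpy.calc as mpcalc

def roundtrip_check(invert, differentiate, field, dx, dy, frac=0.10):
    """Invert a scalar field to winds, then verify the matching derivative
    of those winds reproduces the field to within a fraction of its maximum."""
    u, v = invert(field, dx=dx, dy=dy)
    recovered = differentiate(u, v, dx=dx, dy=dy)
    np.testing.assert_allclose(recovered, field, atol=frac * np.abs(field).max())

# Usage with the hypothetical inversion functions:
# roundtrip_check(nondivergent_wind_from_vorticity, mpcalc.vorticity, vort_field, dx, dy)
# roundtrip_check(irrotational_wind_from_divergence, mpcalc.divergence, div_field, dx, dy)
```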
Just curious, it looks like `chmod a+x ci/*requirements.txt` is included in the changes for this PR. Could you expand on why?
To be perfectly honest with you, I am baffled by that as well. I have made no changes at all to that file. But when I pulled in the latest changes from my MetPy fork to my local repository, I got a message saying that those files had some changes and had to be committed first to my local repository before I could pull in the changes from the master. So after I committed, I made a push again, and those appear to be part of my forked remote repository. If you could override any changes I may have made accidentally, that would be appreciated.
@DWesl Regarding the test case for the calculation of the reconstructed vorticity, I had a question.
I am initializing the return parameters (upsi, vpsi) in the following way: https://github.com/winash12/MetPy/blob/main/src/metpy/calc/kinematics.py#L1457 but this ensures that the units of upsi and vpsi are dimensionless. In the list of parameters that are provided to this function, the uwnd and vwnd quantities are not available. Now I can use this API call https://docs.xarray.dev/en/latest/generated/xarray.DataArray.assign_attrs.html to assign the units. Would that be a sensible way to move forward?
Of course, in order to ensure that this test case works, I can add the units outside this API call in my test case, but I am assuming units need to be added to the upsi and vpsi data arrays.
The previous function in that file has some decorators that might be useful: https://github.com/Unidata/MetPy/blob/3bf4f99154d17ab3a75b7dd886648787e289cb74/src/metpy/calc/kinematics.py#L1371-L1373
They seem to be defined here: https://github.com/Unidata/MetPy/blob/3e9fbb1a08394026f68bb6e0bd8921287eccc272/src/metpy/calc/tools.py#L1014-L1015
You're doing integrals instead of derivatives, but the principle is the same:
https://github.com/Unidata/MetPy/blob/3e9fbb1a08394026f68bb6e0bd8921287eccc272/src/metpy/units.py#L325-L326
https://github.com/Unidata/MetPy/blob/3e9fbb1a08394026f68bb6e0bd8921287eccc272/src/metpy/xarray.py#L1246-L1251
Now I can use this API call https://docs.xarray.dev/en/latest/generated/xarray.DataArray.assign_attrs.html to assign the units. Would that be a sensible way to move forward?
That is a way to obtain an xarray.DataArray with units, yes. It returns a new DataArray rather than changing the existing one in-place, so keep that in mind.
Of course in order to ensure that this testcase works I can add the units outside this API call in my test case but I am assuming units need to be added to the upsi and vpsi data arrays.
Users will expect units, yes, that's part of the selling point of MetPy: it catches some mistakes (function arguments in reversed order) before they lead to weird results (negative and decreasing potential temperatures), or worse, results that look normal.
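For orientation, a hedged sketch of how those decorators are typically stacked on the existing kinematics functions; the function name and argument list below are placeholders for this PR, not an actual MetPy API:

```python
from metpy.calc.tools import parse_grid_arguments
from metpy.units import check_units
from metpy.xarray import preprocess_and_wrap

@parse_grid_arguments                        # fill in dx/dy from DataArray coordinates where possible
@preprocess_and_wrap(wrap_like='vort')       # unwrap DataArrays to Quantities, re-wrap the result
@check_units(dx='[length]', dy='[length]')   # dimensionality checks on the grid spacings
def nondivergent_wind_from_vorticity(vort, *, dx=None, dy=None, x_dim=-1, y_dim=-2):
    """Placeholder signature showing where the decorators would sit."""
    ...
```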
@DWesl The parse_grid_arguments decorator has some issues that I ran into - https://github.com/Unidata/MetPy/discussions/3503 - and I would like to know whether I can postpone that implementation until after we get through the verification of the current PR. So I just added the u and v velocities to the original method - https://github.com/winash12/MetPy/blob/main/src/metpy/calc/kinematics.py#L1437
I have computed the difference between the reconstructed vorticity and the original vorticity. How would you want to proceed further now? Would you like to see a plot of the two?
I have used the https://numpy.org/doc/stable/reference/generated/numpy.allclose.html function between the two vorticities and that returns False. But when one does a diff and looks at the results, they range from 1e-01 to 1e-05.
Please let me know how you would like to proceed further.
@DWesl The parse_grid_arguments decorator has some issues that I ran into - #3503 and I would like to know whether I can postpone that implementation after we get through the verification of the current PR.
Certainly. Getting it working correctly comes before making it convenient to use.
From the explanation on that issue, it looks like that's designed for you to write a function that takes dx and dy as arguments, then changes that to one that can derive those from latitude, longitude, and CRS, taking those values from an xr.DataArray if possible.
So I just added the u and v velocities to the original method - https://github.com/winash12/MetPy/blob/main/src/metpy/calc/kinematics.py#L1437
Ah, you needed u and v for the units.
Vorticity is a spatial derivative of velocity, so its units are [velocity]/[distance]: recovering the velocity from the vorticity reverses this, so the units of the recovered velocity will be the units of the vorticity passed in times the units of the grid spacings you use for the integration.
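A quick sketch of that unit arithmetic with pint (arbitrary numbers; the expression is one term of the discrete Green's-function integral discussed later in this thread):

```python
from metpy.units import units

vort = units.Quantity(1e-4, "1/s")      # vorticity
dx = dy = units.Quantity(25.0, "km")    # grid spacings
y_diff = units.Quantity(50.0, "km")     # y - y' for one source point
r2 = units.Quantity(5000.0, "km^2")     # squared distance to that source point

# (1/s) * km / km^2 * km * km -> km/s, i.e. a velocity, as described above
u_term = vort * y_diff / r2 * dx * dy
print(u_term.to("m/s"))
```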
I have computed the difference between the reconstructed vorticity and the original vorticity. How would you want to proceed further now? Would you like to see a plot of the two?
A plot would help figure out what's going wrong if the test is failing: overlaying the estimated wind vectors on the original and derived vorticity to see where they differ and why that might be.
I have used the https://numpy.org/doc/stable/reference/generated/numpy.allclose.html function between the two vorticities and that returns a false. But when one does a diff and looks at the results they range from 1e-01 to 1e-05.
`numpy.testing.assert_allclose` and `pytest.approx` should print some of the diff for you. With single-precision/float32 input in a 60-by-60 array, I'd expect around three digits of precision, though with this kind of function that's measured from the largest value in the array. If you're using the sin+cos function I mentioned earlier, that largest value will be around two, and `np.allclose(original_vorticity, vorticity_of_recovered_velocity, atol=8e-3)` should be True. If you're using double-precision/float64 values (the default), there's another nine digits of precision, so `atol=8e-12` should work.
Of course, your implementation might be better than that quick worst-case estimate, so trying smaller numbers would be worthwhile.
Please let me know how you would like to proceed further.
Where are the larger differences? Mostly near the edges is somewhat expected. Mostly near the center is more of an issue. Mostly near the zero values in the original vorticity field is not unexpected, but still annoying.
Where is the test in the code here? I couldn't find it.
There are also a few more permissions changes here: `git stash save` is a quick way to allow `git pull` to succeed, then you can run `git stash pop` to bring back any changes you were working on. I think `git status` shows how to undo changes you didn't mean to make, and `git diff --compact-summary` should let you know when the only changes to a file are permissions changes; `git status --short` might show the difference between permissions changes and content changes as well.
I forgot that accuracy would also vary with grid spacing. It might be necessary to reduce grid spacing beyond what I said earlier to get $\approx 10$% accuracy.
For the `sin(x) + cos(y/2)` test function I proposed earlier, evaluated on grid `x, y = np.mgrid[0:15:0.25, 0:15:0.25]`, it looks like the finite-difference derivatives limit accuracy to about the grid spacing, so having the recovered vorticity match the original vorticity to better than 10% accuracy requires $\Delta x, \Delta y \le 0.10$. This is using the analytic velocity field, derived from the streamfunction `-sin(x) - 4 cos(y/2)`.
In other words, using the grid `x, y = np.mgrid[0:15:0.10, 0:15:0.10]` might allow `np.allclose(recovered_vorticity, original_vorticity, atol=0.10 * original_vorticity.max())` to return True. You might need a bit more fiddling with the grid spacing and expected accuracy for your field to get everything working, since you've also got finite-difference methods to recover the winds (this grid spacing works out to $\approx 100$ grid points per period, if you use different functions).
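A minimal sketch of that check, using the analytic winds from the streamfunction above rather than the PR's inversion code (so it only exercises `mpcalc.vorticity` and the tolerance; note the `(y, x)` dimension order MetPy expects by default):

```python
import numpy as np
import metpy.calc as mpcalc
from metpy.units import units

# Dimensions ordered (y, x); treat the nondimensional spacing of 0.10 as meters.
y, x = np.mgrid[0:15:0.10, 0:15:0.10]
dx = dy = units.Quantity(0.10, "m")

# Analytic winds from psi = -sin(x) - 4*cos(y/2), and the vorticity they imply.
u = units.Quantity(-2 * np.sin(y / 2), "m/s")
v = units.Quantity(-np.cos(x), "m/s")
original_vorticity = np.sin(x) + np.cos(y / 2)

recovered = mpcalc.vorticity(u, v, dx=dx, dy=dy).to("1/s").magnitude
print(np.allclose(recovered, original_vorticity, atol=0.10 * original_vorticity.max()))
```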
@DWesl Apologies for the late response, as I was trying to finalize whether my functions require the usage of decorators or not. It is not entirely clear.
In the file kinematics.py all the other functions use these two decorators, preprocess_and_wrap and check_units, and I got involved in a discussion here - https://github.com/Unidata/MetPy/discussions/3505. I am not sure whether any of the variables in the formal arguments require a check_units decorator or not. Doing so complicates matters, as they would then have to be passed as Pint Quantities, and the logic inside the inversion method requires xarray DataArray objects.
I will check in my test cases to the forked repository before COB this week.
Finally, regarding the git issues, I have not had any merge conflicts since the last time I rebased. In case there are any, you have shown me how to overcome those issues.
I was trying to finalize whether my functions require the usage of decorators or not. It is not entirely clear.
In the file kinematics.py all the other functions use these 2 decorators - preprocess_and_wrap and check_units and I got involved in a discussion here - #3505 . I am not sure whether any of the variables in the formal arguments require a check_units decorator or not.
It looks like that implies you need specific units for the calculation: you could require meters or `[length]` for x or dx, but I don't think that's necessary, and might be tricky for optional arguments.
Doing so complicates matters as they now will have to be passed as Pint Quantities and the logic inside the inversion method requires xarray DataArray objects.
Could you explain in what way? The only thing that jumps out at me is the `upsi.metpy.quantify()` at the end, so just skipping that if passed a `pint.Quantity` might work.
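A hedged sketch of that suggestion (the helper name is mine, and `upsi` stands for whatever intermediate result the PR builds):

```python
import pint
import metpy.xarray  # noqa: F401  # registers the .metpy accessor on DataArrays

def maybe_quantify(arr):
    """Return `arr` with pint units attached, skipping .metpy.quantify() when
    the caller already passed a pint.Quantity instead of an xarray.DataArray."""
    if isinstance(arr, pint.Quantity):
        return arr
    return arr.metpy.quantify()
```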
In order to initialize upsi and vpsi I would need to pass xarray.DataArray to this line of code: `upsi = xa.zeros_like(umask)` and `vpsi = xa.zeros_like(vmask)`.
I cannot pass in pint.Quantity if I wanted to initialize upsi and vpsi.
Removing the call upsi.metpy.quantify() would mean that you cannot use the reconstructed velocities to compute vorticity.
I would need to pass xarray.DataArray to this line of code: `upsi = xa.zeros_like(umask)`, `vpsi = xa.zeros_like(vmask)`

That would do it.
I cannot pass in pint.Quantity if I wanted to initialize upsi and vpsi.

Does the `vorticity * y_difference / distance_squared * dx * dy` bit not give the right units if called with `Quantity`s?
Removing the call upsi.metpy.quantify() would mean that you cannot use the reconstructed velocities to compute vorticity.

The vorticity function works fine for me, as long as the information `.metpy.quantify()` uses is present:
>>> import metpy.calc as mpcalc
>>> import numpy as np
>>> import xarray as xr
>>> wind = xr.DataArray(
... np.ones((5, 5)),
... {"y": ("y", np.arange(5), {"units": "km"}), "x": ("x", np.arange(5), {"units": "km"})},
... attrs={"units": "m/s"}
... )
>>> mpcalc.vorticity(wind, wind)
<xarray.DataArray(x: 5, y: 5)> Size: 200B
<Quantity([[0, 0, 0, 0, 0]
[0, 0, 0, 0, 0]
[0, 0, 0, 0, 0]
[0, 0, 0, 0, 0]
[0, 0, 0, 0, 0]], 'meter / kilometer / second')>
Coordinates:
* x (x) int64 40B 0 1 2 3 4
* y (y) int64 40B 0 1 2 3 4
Just remembered an even simpler vorticity to use for testing: a constant vorticity field indicates solid-body rotation. The velocity field will be pointing in circles around some point, though which point depends on how you do the integration (center and corners of the domain are most common).
EDIT: I forgot constant shear would also work (i.e. u = k y). I suppose any quadratic form for a streamfunction would work: $\psi = A x^2 + B x y + C y^2 + D x + E y + F$ would produce vorticity $\zeta = 2A + 2C$.
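A sketch of what that solid-body-rotation check could look like (not code from the PR; the inversion call is left as a comment since its name isn't settled):

```python
import numpy as np

# Constant vorticity zeta0 corresponds to solid-body rotation at angular
# velocity zeta0 / 2 about some center (x0, y0):
#   u = -(zeta0 / 2) * (y - y0),   v = (zeta0 / 2) * (x - x0)
zeta0 = 1e-4                          # s^-1
y, x = np.mgrid[0:60, 0:60] * 25.0e3  # grid coordinates in meters, 25 km spacing
x0, y0 = x.mean(), y.mean()           # the integration may pick a different center

expected_u = -0.5 * zeta0 * (y - y0)
expected_v = 0.5 * zeta0 * (x - x0)

# recovered_u, recovered_v = <the PR's inversion function>(np.full_like(x, zeta0), ...)
# then compare against expected_u/expected_v, or check their vorticity directly.
```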
It appears that both the inversion methods do not need any decorators. Which is fine, and unless there are other reviewers who object, let's keep it that way.
Does the `vorticity * y_difference / distance_squared * dx * dy` bit not give the right units if called with `Quantity`s?
Yes, that's correct.
I will remove these quantify() calls - https://github.com/winash12/MetPy/blob/main/src/metpy/calc/kinematics.py#L1490
@DWesl I checked in 3 unit test functions in the test_calc_tools.py file.
I also pushed in a complete working example - https://github.com/winash12/MetPy/blob/main/examples/calculations/Vorticity_Divergence_Inversion.py - and you can compare it with the notebook that started this issue - https://github.com/evans36/miscellany/blob/main/Vorticity%20and%20Divergence%20Inversion.ipynb. My version is the vectorized version and takes about 6 seconds to run, whereas the notebook on my machine takes about 30 minutes.
Regarding the test case for the rotational inversion - I have this test case here - https://github.com/winash12/vortdivinversion/blob/master/test_vort_div_inv.py
I am still getting False from numpy.allclose. Was the test case you wanted me to run with real GFS data as I have shown above, or was it with mock data?
Sebastian Schemm's group also wanted to run our code with their test data - https://www.research-collection.ethz.ch/handle/20.500.11850/437938 - although that was using the COSMO model. Not a priority right now.
That is quite the speedup.
I mostly suggested using simple mock data, such as a sum of sinusoids or a constant. Given the finite-difference derivatives and discrete Green's Function integration, simple test functions are going to be very handy in working out the expected accuracy. I needed around sixteen grid points across a feature to get $\approx 10$% accuracy just for getting the vorticity from the wind field. I suspect the integration might be similar, though I'm not nearly as familiar with the accuracy of numerical quadrature methods.
EDIT: Actually 30 grid-points across the features, not 16.
For real data, I can guess at possible heuristics for accuracy, but probably plotting the original vorticity/divergence next to that derived from the inverted wind field is the best you're going to be able to do. That's probably more suited to an example than the test suite.
@DWesl I am a bit confused by the test function you have proposed, `sin(x) + cos(y/2)`. With this test function, how does one generate the velocities for the test case?
In order to test my code, the input variables I need are the zonal and meridional velocities. How will this test function give the u and v variables?
My understanding of that test function is the following: evaluate the test function on the grid you have proposed, then assign the values of the test function to the u and v velocities.
I also suggested checking that the recovered velocities gave the proper vorticity, since there are many velocity fields with the same vorticity. I think this is recovering nondivergent winds from vorticity (and irrotational winds from divergence), so you could also check the divergence of the recovered wind field to ensure that behaves as expected.
That is,
u, v = nondivergent_wind_from_vorticity(test_vorticity, dx=dx, dy=dy)
recovered_vorticity = vorticity(u, v, dx=dx, dy=dy)
tolerance = max(max(dx), max(dy)) * np.abs(test_vorticity).max()
np.allclose(recovered_vorticity, test_vorticity, atol=tolerance)
recovered_divergence = divergence(u, v, dx=dx, dy=dy)
np.allclose(recovered_divergence, 0, atol=tolerance)
If you really want to get a velocity field, you can view that vorticity as the Laplacian of a streamfunction. It is likely that the first term of the vorticity is from the first term of the Laplacian, and similarly for the second. This suggests a streamfunction of $$\psi=-\sin(x) - 4 \cos(y/2)$$ Taking the curl of $\mathbf{k} \psi$ gives $$u = -\frac{\partial\psi}{\partial y} = -2\sin(y/2) \qquad v = \frac{\partial\psi}{\partial x} = -\cos(x)$$
Given the method you use is based on the derivative of a Green's function solution of the Poisson equation, this should match the results from the functions decently well.
For a test to recover irrotational wind from divergence, the function above is interpreted as velocity potential, not divergence, so the wind field is $$u = -\cos(x) \qquad v = 2\sin(y/2)$$
I just realized my answers for velocity and streamfunction assume an infinite domain, which would require either excluding points too close to the edges when doing the comparison, or doing the integrals from the paper analytically for the finite domain actually provided: $$u_\psi = -\frac{1}{2\pi} \iint_\Omega \zeta(x^\prime, y^\prime) \frac{y - y^\prime}{(x - x^\prime)^2 + (y - y^\prime)^2} dx^\prime dy^\prime$$ For constant vorticity, we can write this as $\iint_{\Omega^\prime} \frac{1}{u^2 + a} du\ da$. I can do either of the integrals individually, but the combination is a pain, even for a rectangular domain.
Comparing the derived vorticity to the original might also go better if we just compare the domain interiors, but one grid cell achieves most of that benefit, and four grid cells almost all of it (test performed recovering a velocity field from a constant vorticity on a 60x80 domain with no mask (i.e. the bounding box includes the entire array). Comparing the divergence of that velocity field to zero saw smaller benefits from border sizes over three).
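A small sketch of that interior-only comparison (the border width and tolerance below are illustrative, not the values from the test described above):

```python
import numpy as np

def interiors_allclose(recovered, original, border=4, atol=1e-6):
    """Compare two 2-D fields while ignoring `border` grid cells on each edge,
    since the planar Green's function assumes an unbounded domain."""
    interior = (slice(border, -border), slice(border, -border))
    return np.allclose(recovered[interior], original[interior], atol=atol)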
It also seems my test function is entirely too complicated to survive this kind of analysis, but you could try a point vortex instead:
test_vorticity = np.zeros((31, 31), dtype="f4")
test_vorticity[15, 15] = 1
u, v = nondivergent_wind_from_vorticity(test_vorticity, dx=1, dy=1)
Wind speed will be one over the distance to the center, direction will be perpendicular to that line. This method basically works by putting a bunch of these point vortices of varying strengths across the domain, so checking that it works with one is a solid test of the method.
If you want something more realistic, I have seen a Rankine vortex used to model vorticity in a hurricane:
import numpy as np
import metpy.calc as mpcalc
from metpy.units import units

R = 16
BASE_VORTICITY = units.Quantity(2e-3, "s^-1")
y, x = np.ogrid[-45:46, -45:46]
test_vorticity = ((x**2 + y ** 2) < R**2) * BASE_VORTICITY
which should produce wind speeds increasing linearly with radius out to sixteen grid boxes from the center, then decreasing as the inverse of the radius beyond sixteen grid boxes.
grid_spacing = units.Quantity(1, "km")
recovered_u, recovered_v = nondivergent_wind_from_vorticity(test_vorticity, dx=grid_spacing, dy=grid_spacing)
MAX_SPEED = np.pi * R**2 * BASE_VORTICITY / (2 * np.pi * R) * grid_spacing.to_base_units()
distances = np.sqrt(x**2 + y**2)
expected_u = np.where(test_vorticity != 0, -MAX_SPEED * y / R, -MAX_SPEED*R * y / distances**2)
expected_v = np.where(test_vorticity != 0, MAX_SPEED * x / R, MAX_SPEED * R * x / distances**2)
np.allclose(recovered_u, expected_u, atol=0.6)  # roughly 5% accuracy
np.allclose(recovered_v, expected_v, atol=0.6)
I still prefer to check the vorticity and divergence directly:
recovered_vorticity = mpcalc.vorticity(recovered_u, recovered_v, dx=grid_spacing, dy=grid_spacing)
np.allclose(recovered_vorticity.to("s^-1"), test_vorticity, atol=7e-4) # roughly 1/3 accuracy
recovered_divergence = mpcalc.divergence(recovered_u, recovered_v, dx=grid_spacing, dy=grid_spacing)
np.allclose(recovered_divergence.to("s^-1"), 0, atol=5e-5)
If you put the hurricane vorticity model in an example in a test, it would likely be a good place to show off the benefits of restricting the integration to the eighth of the domain that actually has nonzero vorticity values, rather than the whole domain.
What you have described is exactly what Sebastian Schemm, the lead author of that paper you referenced above, told me when I mentioned that the unit tests were failing with GFS data. He said the only tests that matter are the ones in which the domain is taken into consideration, i.e. the recovered vorticity should only be calculated within the finite domain and not outside, although the original vorticity can be calculated outside the finite domain.
Got caught up in a couple of issues yesterday. I will have the following tests ready by Monday:
a) your test function (let's just see how that holds up), b) a constant vorticity test case, c) a plot of original and recovered vorticity with GFS data.
@DWesl Apologies for the late response. Got caught up in a few issues as usual, but I have some good news for you -
https://github.com/winash12/vortdivinversion/blob/master/test_rot_inv.py
The test case with your test function turns out true.
Do you want me to run the other tests you proposed?
The Rankine vortex and the point vortex? Sure, those are solid checks.
You probably only need to check one or the other, since the vorticity in each has finite support.
@DWesl After a long time and doing some research, I could finally come up with test code for a masked point vortex using xarray.
https://github.com/winash12/vortdivinversion/blob/master/maskedpointvortex_xarray.py
But that test gives me False. Could you please take a look at it and tell me where I am going wrong? The difficulty was getting the initial velocity field for the point vortex and coming up with a sufficiently decent mask for it. I believe I have done that, but it gives me False.
I also have developed code for a source/sink Rankine vortex. Once we get this test done we can take care of that one as well.
https://github.com/winash12/vortdivinversion/blob/master/test_rot_inv.py - I modified this slightly to take in radians instead of degrees and it still returns True.
It looks like the example is big enough to need spherical geometry for the analysis. The vorticity code will do so, but I don't think the code from this PR does, since it's using the Green's function on the plane. I suspect if you change the coordinates from latitude and longitude to just x and y in kilometers it'll work fine.
You could adapt the Green's function for spherical geometry, but I'm not entirely sure what the result is. For spectral methods, there's pyspharm's `getuv`.
So I spoke to Sebastian Schemm, the original author of the paper, and this is what he had to say, and I quote: "Two ways to solve this:
- The paper says «the spherical grid of the employed limited-area model is mapped onto a flat Cartesian grid using a polar stereographic projection.» For that I use a standard routine.
- The Green's function simply requires the distance between two grid points, in the paper called r, and the zonal distance x-x' (or meridional), on the sphere. You compute those distances on the sphere using the great circle distance."
So dx' and dy' are already being calculated by MetPy using great-circle distances. The only quantity not calculated on a spherical grid is the r^2 distance in the denominator, which I can change in my code to calculate using great-circle distances. MetPy uses Proj if I am not mistaken. Any thoughts? I will also proceed to try the Cartesian point vortex test case as well.
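If it helps, a hedged sketch of computing that great-circle r with pyproj (which MetPy already depends on); the grid and target point below are made up purely for illustration:

```python
import numpy as np
from pyproj import Geod

lats = np.arange(20.0, 60.5, 0.5)
lons = np.arange(-60.0, 0.5, 0.5)
lon2d, lat2d = np.meshgrid(lons, lats)
target_lon, target_lat = -30.0, 40.0

geod = Geod(ellps="sphere")  # spherical earth gives true great-circle distances
# Distance in meters from the target point to every grid point, usable in
# place of the planar r**2 in the Green's-function denominator.
_, _, r = geod.inv(np.full_like(lon2d, target_lon), np.full_like(lat2d, target_lat),
                   lon2d, lat2d)
r_squared = r ** 2
```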
- The paper says «the spherical grid of the employed limited-area model is mapped onto a flat Cartesian grid using a polar stereographic projection.». For that I use a standard routine.
For small enough areas, a map projection would work. Conformal projections (stereographic (polar or otherwise), Lambert Conformal, or Mercator (regular, transverse, or oblique)) will ensure the winds are in the right directions after mapping back to the sphere.
- The Green’s function simply requires the distance between two grid points, in the paper called r, and the zonal distance x-x’ (or meridional), on the sphere. You compute those distances on the sphere using the great circle distance.
The metric terms in derivatives on the sphere might cause problems, but that affects primarily the tangential distances, not the radial ones, so it might work out.