Performance of interp_like when converting dfsu to dfs2 for large files
Is your feature request related to a problem? Please describe. Converting large dfsu files to dfs2 seems to take a long time (for a 120 MB item, it took roughly 6 hours). I used the following code:
ds = mikeio.read(dfsu_filename, items=item, time=time)
da = ds[item]
g = da.geometry.get_overset_grid(dx=dxdy)
da_grid = da.interp_like(g)
The PC has 64 GB of memory and its CPU is an Intel Xeon E5-2650 v4 @ 2.20 GHz.
Describe the solution you'd like Is there anything I could do to make this run faster?
Describe alternatives you've considered I haven't tried anything yet. One idea is to vectorize the for loop in "_interp_itemstep".
😟6h!
More details needed:
- Number of timesteps
- Number of elements in the dfsu file
- Output grid resolution (dxdy)
@jsmariegaard Do we really need a loop here? https://github.com/DHI/mikeio/blob/c06b4154134c2bb9070f8f1b53a82b7fc303aaf2/mikeio/interpolation.py#L124-L125
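If the loop is just a dot product per output point (the IDW weights times the values of the n nearest elements), a vectorized sketch could look roughly like this (the signature is guessed from the linked lines, not tested, so treat it as an illustration only):

import numpy as np

def _interp_itemstep(data, elem_ids, weights=None):
    # nearest-point case: plain fancy indexing, no loop needed
    if weights is None:
        return data[elem_ids]
    # IDW case: data[elem_ids] has shape (n_points, n_nearest);
    # weight each neighbour and sum over the neighbour axis in one call
    return np.sum(data[elem_ids] * weights, axis=1)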
Haha, yeah, it's a long time, but it is a big file; hopefully it can be improved nonetheless :)
- Number of timesteps: 1 (it's a statistics dfsu file of model results)
- Number of elements in the dfsu file: 10 871 367
- Output grid resolution (dxdy): 1 m x 1 m
Bummer. It would be very helpful if you could profile it e.g. with cProfile and snakeviz. But that would of course require a smaller file or that you only consider a subset using the area argument...
> pip install snakeviz
> python -m cProfile -o report.prof myscript.py
> snakeviz report.prof
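As for the subset idea: if I remember the read signature correctly, area accepts a bounding box (or a polygon), so something like this should let you profile on a small cut-out (the coordinates are placeholders):

ds = mikeio.read(dfsu_filename, items=item, time=time, area=[x0, y0, x1, y1])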
But you're probably right about the element-loop. It doesn't look healthy.
You could also try with
da_grid = da.interp_like(g, n_nearest=1)
which uses the nearest point instead of IDW (inverse distance weighting) interpolation.
Here's the profile for testing it on a much smaller dfsu (report attached as zip). I'll try a test on a larger file overnight to see how scale might change that.
report.zip
Hmm, yes, the contains() method (to find out if points are inside the mesh) is actually what takes the most time in this case. But closely followed by the dot product. It may be better to set extrapolate=True and then remove points outside the domain afterwards
> da_grid = da.interp_like(g, extrapolation=True) # will not call contains()
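A rough sketch of that masking step (assuming the overset grid exposes cell-centre coordinates as g.xy, the flexible mesh geometry has a contains() method, and the reshape order matches the grid layout):

import numpy as np
inside = da.geometry.contains(g.xy)                          # one point-in-mesh test for the whole grid
da_grid.values[..., ~inside.reshape(g.ny, g.nx)] = np.nan    # blank out cells outside the domain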
The results from the larger dfsu file are in (no. elements = 3 760 848). It looks like the contains() method is the main culprit. The block taking ~15% is the cKDTree, not _interp_itemstep (which is 3% here).
report.zip
Did you try
da_grid = da.interp_like(g, extrapolation=True)
> Did you try
> da_grid = da.interp_like(g, extrapolation=True)
This definitely helped a lot, thanks! The final result still needs masking, though; I'll try it with the radius next to see if I can avoid that.
FYI, this is the same example as above but with extrapolation set to true (5x faster):

I have a similarly sized dfsu file and the same issue, but I get an error when I try to set 'extrapolation=True':
Any idea how to solve this?
.. and can it really be true that this process takes that long in Python when in Mike Zero it is quite fast?
@cethceth, which version of MIKE IO are you using?
Solved it with 'extrapolate' instead of 'extrapolation' :)
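i.e. the call that works for me is:
da_grid = da.interp_like(g, extrapolate=True)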
> .. and can it really be true that this process takes that long in Python when in Mike Zero it is quite fast?
Are you referring to "Create new Grid series - From Dfsu file" (with the less-than-intuitive grid specification)?

Yes. But it works fine with extrapolate now. Any idea of how to mask the output grid (fast)? I would prefer not having to load shapefiles.
@cethceth maybe you can tell us a bit more about your use case. Not sure we can do anything about it right now, but it is useful to know. I suppose converting the dfsu to dfs2 is simply an intermediate step?
@ecomodeller we want an automatic tool for converting dfsu files to tiff files, which we use for reporting, calculation of damage cost, etc. We have that now, but since the output tiff files are not masked, this has to be done as a post-processing step in GIS. I was hoping to gather all the processing in Python, and also avoid having to load shapefiles in Python for masking the output. If there were a way to get the extent of the dfsu file, which could then be used when masking the dfs2/tiff file in Python, that would be really nice :)
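One way I could imagine keeping everything in Python (not an official recipe: it assumes contains(), Grid2D.xy and DataArray.to_dfs() work as used above, and dfsu_files is just a placeholder list) is to build the outside-domain mask once from the dfsu geometry and reuse it for every file:

import numpy as np
import mikeio

geom = mikeio.open(dfsu_filename).geometry            # mesh only, no item data loaded
g = geom.get_overset_grid(dx=dxdy)
inside = geom.contains(g.xy).reshape(g.ny, g.nx)      # True for grid cells inside the mesh (reshape order assumed)

for fn in dfsu_files:                                 # placeholder: list of dfsu files to convert
    da = mikeio.read(fn, items=item)[item]
    da_grid = da.interp_like(g, extrapolate=True)     # fast path, avoids contains() per file
    da_grid.values[..., ~inside] = np.nan             # NaN is written as delete value / nodata
    da_grid.to_dfs(fn.replace(".dfsu", ".dfs2"))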