
PyGMTSAR (Python InSAR): Powerful and Accessible Satellite Interferometry

56 pygmtsar issues, sorted by most recently updated

Hi, I am working on 4 Sentinel-1 images in one sub-swath. However, in the output of the estimated scene locations I can see several (10) lines. Can you please...
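For reference, one way to check whether the extra rows are simply multiple burst records per acquisition date is to group the scene table by date. A minimal sketch with pandas; the scene table here is a stand-in, and exporting it via a method such as sbas.to_dataframe() is an assumption about the PyGMTSAR API, not a confirmed call:

```python
import pandas as pd

# Hypothetical scene table; in PyGMTSAR the scene metadata can often be
# inspected as a DataFrame (e.g. sbas.to_dataframe(); name is an assumption).
df = pd.DataFrame({
    'date': ['2023-01-01', '2023-01-01', '2023-01-13', '2023-01-13', '2023-01-25'],
    'subswath': [1, 1, 1, 1, 1],
})

# More than one row per date usually means several records (bursts/files) of
# the same scene, not extra acquisitions.
print(df.groupby('date').size())
```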

Some orbits are not found; the expected validity window is (date, date + 2 days)
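For context, Sentinel-1 orbit files encode their validity window in the file name as ..._V&lt;start&gt;_&lt;stop&gt;.EOF, and an orbit matches a scene only if that window covers the acquisition date. A minimal sketch of that matching logic (the file name below is illustrative, not from the issue):

```python
from datetime import datetime

def orbit_covers(orbit_name: str, scene_date: datetime) -> bool:
    """Check whether a Sentinel-1 orbit file covers a scene date.

    Assumes the standard naming convention ..._V<start>_<stop>.EOF.
    """
    validity = orbit_name.split('_V')[1].split('.')[0]
    start, stop = (datetime.strptime(s, '%Y%m%dT%H%M%S')
                   for s in validity.split('_'))
    return start <= scene_date <= stop

# Illustrative name: a precise orbit for a 2023-01-01 scene typically spans
# the day before to the day after the acquisition.
name = 'S1A_OPER_AUX_POEORB_OPOD_20230121T080750_V20221231T225942_20230102T005942.EOF'
print(orbit_covers(name, datetime(2023, 1, 1, 12, 0)))  # True
```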

Regression method is causing border artifacts: ![image](https://github.com/AlexeyPechnikov/pygmtsar/assets/50405969/4daf61e7-8050-48f5-96c3-7fe580be1ed1) ![image](https://github.com/AlexeyPechnikov/pygmtsar/assets/50405969/c116d731-8a99-431d-95d1-f5e9f73fdaed) Do you recommend any workaround other than increasing the chunksize (currently 2048)?
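Border artifacts like these typically appear when a regression or detrend is fitted independently per Dask chunk, so neighboring chunks get slightly different fits. A generic workaround besides a larger chunksize is to process chunks with an overlap (halo) so the fits agree near chunk edges; a minimal sketch with dask.array.map_overlap, where the Gaussian smoothing merely stands in for the actual per-chunk regression:

```python
import dask.array as da
from scipy.ndimage import gaussian_filter

# Random 2D grid split into chunks, standing in for an interferogram.
grid = da.random.random((4096, 4096), chunks=(1024, 1024))

# map_overlap pads each chunk with `depth` pixels from its neighbors before
# applying the function and trims the halo afterwards, so chunk borders are
# computed with the same context as interior pixels.
smoothed = da.map_overlap(
    lambda block: gaussian_filter(block, sigma=16),
    grid, depth=64, boundary='reflect',
)
print(smoothed.compute().shape)
```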

I am having issues with sbas.ps_parallel; some guidance on the workflow would be helpful. Is ps_parallel supposed to be run after intf_parallel and merge_parallel? Doing so results in a missing {final date}_F{merged subswaths}.PRM...
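For reference, the typical ordering in older PyGMTSAR example notebooks runs interferogram generation and sub-swath merging before the persistent-scatterer step. A rough sketch of that sequence; the method names follow the mobigroup-era API and may differ between releases:

```python
def ps_workflow(sbas, pairs):
    """Typical call order from older PyGMTSAR notebooks (names may vary)."""
    sbas.topo_ra_parallel()     # topography in radar coordinates
    sbas.intf_parallel(pairs)   # interferograms per sub-swath
    sbas.merge_parallel(pairs)  # merge sub-swaths (produces the *_F*.PRM files)
    sbas.ps_parallel()          # persistent scatterers on the merged stack
```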

I got stuck at the step unwrap_sbas = sbas.sync_cube(unwrap_sbas, 'unwrap_sbas'); it is taking a huge amount of time on Colab Pro, even with only 16 scenes. Please suggest a solution.
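When a lazy Dask computation such as this appears stuck, it is often just computing without feedback; wrapping the call in a Dask progress bar at least shows whether tasks are advancing. A minimal sketch, using the sync_cube call exactly as in the snippet (the progress bar is a generic Dask diagnostic, not PyGMTSAR-specific):

```python
from dask.diagnostics import ProgressBar

def sync_with_progress(sbas, dataset, name):
    """Run sync_cube under a Dask progress bar to see whether tasks advance."""
    with ProgressBar():
        return sbas.sync_cube(dataset, name)

# Usage, matching the snippet above (assumes `sbas` and `unwrap_sbas` exist):
# unwrap_sbas = sync_with_progress(sbas, unwrap_sbas, 'unwrap_sbas')
```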

https://github.com/mobigroup/gmtsar/blob/b23580b9b6138752cc88bc5a63837c5197b1f036/pygmtsar/pygmtsar/Stack_unwrap_snaphu.py#L79 I can't find any reference to interpolate_nearest in the code; it is just filling NaN with 0.
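If nearest-neighbor filling is wanted before unwrapping instead of zero-filling, a generic NaN fill can be done with plain SciPy, as in the sketch below; this is not the interpolate_nearest mentioned in the snippet, just an equivalent standalone technique:

```python
import numpy as np
from scipy.interpolate import NearestNDInterpolator

# Toy phase grid with a gap, standing in for a pre-unwrapping interferogram.
phase = np.random.rand(100, 100).astype(np.float32)
phase[30:40, 50:70] = np.nan

# Fit a nearest-neighbor interpolator on the valid pixels, then evaluate it
# on the full grid so every NaN takes the value of its nearest valid pixel.
valid = np.where(~np.isnan(phase))
interp = NearestNDInterpolator(np.transpose(valid), phase[valid])
filled = interp(*np.indices(phase.shape))
print(np.isnan(filled).sum())  # 0
```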

![1](https://github.com/mobigroup/gmtsar/assets/88897405/9fc1308e-be06-43a5-baab-f84b6f83e018) ![2](https://github.com/mobigroup/gmtsar/assets/88897405/25f46e01-00c7-472b-bec6-388c8b82b65a)

Is Gaussian detrending performed per pixel across the entire time stack? I'm running out of memory and have tried reducing the chunksize to 128, with the same problem. My system...
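For reference on how a Gaussian detrend can stay memory-bounded: applying the filter lazily per 2D scene, rather than materializing the whole time stack, lets Dask stream chunks through memory. A minimal generic sketch with xarray and SciPy; this illustrates the pattern, not the PyGMTSAR internals:

```python
import dask.array as da
import numpy as np
import xarray as xr
from scipy.ndimage import gaussian_filter

# Toy lazy stack: 50 dates of 1000x1000 scenes, chunked one scene per task,
# so only a few scenes are resident in memory at any moment.
stack = xr.DataArray(
    da.random.random((50, 1000, 1000), chunks=(1, 1000, 1000)).astype('float32'),
    dims=('date', 'y', 'x'),
)

# Gaussian low-pass over y/x only (sigma 0 along date); subtracting it from
# the stack is a simple per-scene Gaussian detrend.
lowpass = xr.apply_ufunc(
    lambda a: gaussian_filter(a, sigma=(0, 32, 32)),
    stack, dask='parallelized', output_dtypes=[np.float32],
)
detrended = stack - lowpass
print(float(detrended.std().compute()))
```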

I'm trying to create interferograms (sbas.intf_parallel(pairs, wavelength=60, func=intf_decimator)), but I don't know why it's giving me the following error: ``` """ Traceback (most recent call last): File "/home/ubuntu/.local/lib/python3.8/site-packages/xarray/backends/file_manager.py", line 209,...
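For context on the func= argument: in PyGMTSAR example notebooks the decimator passed to intf_parallel is typically a small function that coarsens the output grid with xarray. A sketch in that style; the 4x4 block factors are illustrative:

```python
# Illustrative decimator in the style of the PyGMTSAR example notebooks:
# average 4x4 pixel blocks to coarsen each interferogram grid.
intf_decimator = lambda grid: grid.coarsen({'y': 4, 'x': 4}, boundary='trim').mean()

# Then, as in the snippet above (assumes an initialized `sbas` and `pairs`):
# sbas.intf_parallel(pairs, wavelength=60, func=intf_decimator)
```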

This is much faster than looping in Python. The current method can take a very long time (30+ min) for large stacks (4000 pairs); have you experienced this? Benchmark (Dask, 1 worker...
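The speed-up in question comes from replacing a per-pair Python loop with a single vectorized array operation. A self-contained toy benchmark of the pattern; shapes and timings are illustrative, not PyGMTSAR measurements:

```python
import time
import numpy as np

# Toy stack: 50 dates of 100x100 complex scenes and 1000 interferogram pairs.
rng = np.random.default_rng(0)
scenes = np.exp(1j * rng.uniform(0, 2 * np.pi, (50, 100, 100))).astype(np.complex64)
pairs = rng.integers(0, 50, (1000, 2))

# Per-pair Python loop: one phase-difference product per iteration.
t0 = time.time()
loop = [scenes[i] * np.conj(scenes[j]) for i, j in pairs]
t_loop = time.time() - t0

# Vectorized: fancy-index both pair columns and multiply in a single call.
t0 = time.time()
vec = scenes[pairs[:, 0]] * np.conj(scenes[pairs[:, 1]])
t_vec = time.time() - t0

assert np.allclose(loop[0], vec[0])
print(f'loop: {t_loop:.3f}s, vectorized: {t_vec:.3f}s')
```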