Shadi
This would be a great feature. I think the rationale for the `float32` implementation is that it supports missing values via `np.nan`. Maybe one workaround would be to...
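For illustration, a small sketch of why a floating-point dtype is needed for `np.nan`-coded missing values (the genotype values here are made up):

```python
import numpy as np

# Integer dtypes cannot represent np.nan, so an integer genotype
# matrix cannot encode missing calls directly.
g_int = np.array([0, 1, 2], dtype=np.int8)
try:
    g_int[1] = np.nan
except ValueError:
    print("cannot store NaN in an integer array")

# A float32 array supports np.nan natively (and halves the memory
# footprint relative to float64).
g = np.array([0.0, np.nan, 2.0], dtype=np.float32)
print(np.isnan(g))      # missing-value mask
print(np.nanmean(g))    # mean ignoring the missing entry -> 1.0
```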
Hey, it's been a while, but I think the current version should work with both Cython 0.29 and Cython 3.0. As I mentioned in the discussion, the main fix was...
I think the relevant function from Dask is here: https://github.com/dask/dask/blob/a988716cfeb3a9b1015d14a334368e70ae382553/dask/array/core.py#L2709
I believe it depends on a configurable limit on the size of the chunks `config.get("array.chunk-size")`, which can be easily incorporated...
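For context, a minimal sketch of how Dask's chunk-size limit can be read and overridden via its config system (the array shape below is arbitrary; behavior assumes a recent `dask` version):

```python
import dask
import dask.array as da

# The target chunk size dask.array uses when auto-chunking,
# expressed as a byte string (default is typically "128MiB").
print(dask.config.get("array.chunk-size"))

# The limit can be overridden for a block of code; arrays built
# with chunks="auto" inside the context target the smaller size.
with dask.config.set({"array.chunk-size": "64MiB"}):
    x = da.ones((20000, 20000), chunks="auto")  # lazy, no allocation
    print(x.chunksize)
```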
Another option that seems to work well in some settings is to use the shrinkage algorithm of Higham et al. (2014). Here's a rough implementation of the idea: ```python from...
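To make the reference concrete, here is a rough sketch of the shrinking idea: bisect on a shrinkage parameter `alpha` until `alpha*T + (1 - alpha)*M` is positive semidefinite, where `T` is a PSD target (the identity here, which also preserves the unit diagonal of a correlation/LD matrix). The function name, tolerance, and example matrix are illustrative, not the published implementation:

```python
import numpy as np

def shrink_to_psd(M, T=None, tol=1e-8):
    """Find the smallest alpha in [0, 1] such that
    alpha*T + (1 - alpha)*M is positive semidefinite
    (bisection on the smallest eigenvalue)."""
    M = np.asarray(M, dtype=float)
    if T is None:
        T = np.eye(M.shape[0])  # identity target preserves unit diagonal

    def min_eig(alpha):
        return np.linalg.eigvalsh(alpha * T + (1.0 - alpha) * M)[0]

    if min_eig(0.0) >= 0.0:
        return M  # already PSD, no shrinkage needed
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if min_eig(mid) >= 0.0:
            hi = mid
        else:
            lo = mid
    return hi * T + (1.0 - hi) * M

# Example: an indefinite "correlation-like" matrix
M = np.array([[ 1.0, 0.9, -0.9],
              [ 0.9, 1.0,  0.9],
              [-0.9, 0.9,  1.0]])
S = shrink_to_psd(M)
print(np.linalg.eigvalsh(S)[0])  # smallest eigenvalue, now ~0 or above
```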
Hi Zhonghe, The `xarray` backend is not memory-efficient, so I don't recommend using it to compute very large LD matrices. Regarding the error with `plink1.9`, please upgrade to `magenpy>0.1`. This...
Are you running this on a shared compute cluster? I suspect that the plink process got killed due to lack of resources (e.g. out of memory error, storage space, etc.)....
Better error handling and catching errors from plink's side should now be part of `v0.1.4`. Please let me know if you're still having issues.
Thanks for catching this bug Muhammad! I indeed fixed this issue in `magenpy` recently, but haven't pushed the latest changes to GitHub/PyPI yet. Will do so soon.
This should now be fixed as part of `magenpy==0.1.4`.
Which versions of `magenpy` and `viprs` are you using in this example? Also, are those your own LD matrices or the ones that we published on Zenodo? The published LD...