spatialdata
An open and interoperable data framework for spatial omics data
You are right, the check on `c_coords` should be on the output data. I already merged because I need to merge #594, which was opened against the current PR. Please...
Hi, this is a Visium HD object from the spatialdata documentation; there are three tables in this object:

```
SpatialData object with:
├── Images
│   ├── 'Visium_HD_Mouse_Small_Intestine_cytassist_image': SpatialImage[cyx] (3, 3000, 3200)
│...
```
Routinely listing improvements we can make to the docs:

- [x] add `rasterize_bins()` example in Visium HD notebook;
- [ ] show how to use `get_element_instances()` in the annotation notebook.
`shapely.make_valid` should be run on every shape after vectorization. It will fix the "polygons with holes" issue. See the example [here](https://github.com/saeyslab/VIB_Hackathon_June_2024/blob/main/polygons/polygons_test.ipynb).
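For instance, a minimal sketch of the repair step, assuming the vectorization output is held in a GeoDataFrame (the `bowtie` test polygon below is illustrative; `shapely.make_valid` requires shapely >= 2.0):

```python
import geopandas as gpd
import shapely
from shapely.geometry import Polygon

# A "bowtie" polygon: self-intersecting, hence invalid as produced by a
# naive vectorization step.
bowtie = Polygon([(0, 0), (2, 2), (2, 0), (0, 2)])
gdf = gpd.GeoDataFrame(geometry=[bowtie])
assert not gdf.geometry.is_valid.all()

# Repair every shape; already-valid geometries pass through unchanged.
gdf["geometry"] = gdf.geometry.apply(shapely.make_valid)
assert gdf.geometry.is_valid.all()
```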
[`_check_valid_name`](https://github.com/scverse/spatialdata/blame/95d69ff7138305ee2a530bcc45f8460a06e8118b/src/spatialdata/_core/_elements.py#L34-L40) implemented stricter naming constraints in https://github.com/scverse/spatialdata/commit/137e1e06c946800599d55c45f18fe8a6a1fb06eb. We already have existing SpatialData datasets where `.` is used as a separator for naming components with different meanings, like `Slide1.A2.0.pre_maldi`. When loading them...
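For illustration, a hypothetical sketch of the kind of strict check that rejects such names (the pattern below is an assumption, not the actual `_check_valid_name` implementation):

```python
import re

# Hypothetical strict pattern: alphanumerics, underscores and hyphens only;
# the real _check_valid_name in spatialdata may use a different rule.
_VALID_NAME = re.compile(r"^[A-Za-z0-9_-]+$")

def check_valid_name(name: str) -> None:
    if not _VALID_NAME.match(name):
        raise ValueError(f"Invalid element name: {name!r}")

check_valid_name("Slide1.A2.0.pre_maldi")  # raises ValueError: "." is rejected
```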
I think it makes more sense grammar-wise if we call them `CoordinateSystem.axis_names` instead of `CoordinateSystem.axes_names`.
Typos can lead to parameters not being specified inside parsers, and their default value (`None`) can then lead to nasty bugs. I would add checks that the arguments passed to `kwargs`...
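A minimal sketch of such a check, with hypothetical parameter names (the real parser arguments differ):

```python
# Hypothetical sketch: fail loudly on unknown keyword arguments so a typo
# does not silently leave the intended parameter at its default (None).
_KNOWN_KWARGS = {"scale_factors", "chunks", "transformations"}  # example names only

def parse(data, **kwargs):
    unknown = set(kwargs) - _KNOWN_KWARGS
    if unknown:
        raise TypeError(f"got unexpected keyword argument(s): {sorted(unknown)}")
    scale_factors = kwargs.get("scale_factors")  # None only if genuinely omitted
    return data, scale_factors

parse([1, 2, 3], scale_factor=[2, 2])  # typo -> TypeError instead of a silent None
```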
### Motivation

The `region` metadata field was introduced at the storage format level to be able to tell which elements a table is annotating, without the need to examine the...
Compression was disabled (and it still is) because of this issue: https://github.com/ome/ome-zarr-py/issues/219. The new `ome-zarr-py` version should have addressed this. Check this and re-enable compression.
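Assuming the newer `ome-zarr-py` fixes the linked issue, re-enabling compression could look roughly like the sketch below, which passes an explicit compressor through `storage_options` instead of writing uncompressed (the file name and codec choice are illustrative):

```python
import numpy as np
import zarr
from numcodecs import Blosc
from ome_zarr.io import parse_url
from ome_zarr.writer import write_image

# Sketch: write a small image with an explicit Blosc/zstd compressor;
# storage_options is forwarded to zarr array creation.
store = parse_url("image.zarr", mode="w").store
group = zarr.group(store=store)
write_image(
    image=np.zeros((1, 64, 64), dtype=np.uint8),
    group=group,
    axes="cyx",
    storage_options={"compressor": Blosc(cname="zstd", clevel=3)},
)
```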
Hi, I have multiple datasets for training. When I build the dataloader, it seems that multiprocessing doesn't work. Here is the code:

```python
import torch.multiprocessing as mp

mp.set_start_method("spawn", force=True)
...
```
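One common cause with the "spawn" start method is creating and iterating the DataLoader at module top level; a minimal spawn-safe sketch (with a toy dataset standing in for the real ones):

```python
import torch
import torch.multiprocessing as mp
from torch.utils.data import DataLoader, TensorDataset

def main():
    dataset = TensorDataset(torch.arange(100, dtype=torch.float32).unsqueeze(1))
    loader = DataLoader(dataset, batch_size=8, num_workers=2)
    for (batch,) in loader:
        pass  # training step would go here

if __name__ == "__main__":
    # With "spawn", each worker re-imports this module, so everything that
    # creates workers must be guarded by this check.
    mp.set_start_method("spawn", force=True)
    main()
```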