scanpy
Suggestion to support VisiumHD tissue_position_list files (using parquet files)
What kind of feature would you like to request?
Additional function parameters / changed functionality / changed defaults?
Please describe your wishes
Hey
VisiumHD will now be the main data type for 10X spatial technology, and due to the much higher number of barcodes, a plain .csv file can no longer be used because of row limitations. Instead, 10X now ships .parquet files, but scanpy's read_visium
method doesn't seem to have support for that.
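In the meantime, a rough workaround is to read the counts matrix directly and attach the parquet positions by hand. This is only a sketch under assumptions: the file and column names below (tissue_positions_list.parquet, barcode, pxl_row_in_fullres, pxl_col_in_fullres) may differ between Space Ranger versions, and pandas needs a parquet engine such as pyarrow installed.

import pandas as pd
import scanpy as sc

path = 'outs'  # assumed Space Ranger output folder
adata = sc.read_10x_h5(f'{path}/filtered_feature_bc_matrix.h5')
# assumed file and column names; adjust to your Space Ranger output
positions = pd.read_parquet(f'{path}/spatial/tissue_positions_list.parquet')
positions = positions.set_index('barcode')
adata.obs = adata.obs.join(positions, how='left')
# spatial coordinates in the same convention read_visium uses (x = column, y = row)
adata.obsm['spatial'] = adata.obs[['pxl_col_in_fullres', 'pxl_row_in_fullres']].to_numpy()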
Lines of code:
if load_images:
    files = dict(
        tissue_positions_file=path / 'spatial/tissue_positions_list.csv',
        scalefactors_json_file=path / 'spatial/scalefactors_json.json',
        hires_image=path / 'spatial/tissue_hires_image.png',
        lowres_image=path / 'spatial/tissue_lowres_image.png',
    )
I would suggest something like the following, which checks whether a .csv or a .parquet file is present:
if load_images:
    files = dict(
        tissue_positions_file=next(
            (
                path / f'spatial/tissue_positions_list{suffix}'
                for suffix in ['.csv', '.parquet']
                if (path / f'spatial/tissue_positions_list{suffix}').exists()
            ),
            None,
        ),
        scalefactors_json_file=path / 'spatial/scalefactors_json.json',
        hires_image=path / 'spatial/tissue_hires_image.png',
        lowres_image=path / 'spatial/tissue_lowres_image.png',
    )

if files['tissue_positions_file'].suffix == '.csv':
    positions = pd.read_csv(files['tissue_positions_file'], header=None)
elif files['tissue_positions_file'].suffix == '.parquet':
    positions = pd.read_parquet(files['tissue_positions_file'])
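For reference, the same fallback could be packaged as a small standalone helper, which also guards against the case where neither file exists (in the snippet above tissue_positions_file would be None and .suffix would raise). This is a minimal sketch, not part of scanpy's API; the function name is made up, and pd.read_parquet additionally requires a parquet engine such as pyarrow or fastparquet to be installed.

from pathlib import Path
import pandas as pd

def read_tissue_positions(spatial_dir: Path) -> pd.DataFrame:
    """Load tissue positions from .csv (classic Visium) or .parquet (Visium HD)."""
    for suffix in ('.csv', '.parquet'):
        candidate = spatial_dir / f'tissue_positions_list{suffix}'
        if candidate.exists():
            if suffix == '.csv':
                return pd.read_csv(candidate, header=None)
            return pd.read_parquet(candidate)
    raise FileNotFoundError(f'no tissue_positions_list file found in {spatial_dir}')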
This way we can use read_visium
even when the tissue positions file is a .parquet:
AnnData object with n_obs × n_vars = 605471 × 18085
obs: 'in_tissue', 'array_row', 'array_col'
var: 'gene_ids', 'feature_types', 'genome'
uns: 'spatial'
obsm: 'spatial'
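For context, output like the above would come from a plain call such as the following (the path is a placeholder for a VisiumHD Space Ranger output folder):

import scanpy as sc

adata = sc.read_visium('/path/to/visiumhd/outs')  # placeholder path
print(adata)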
Using scanpy 1.9.6
Hello, I'd like to ask which version of scanpy you're using? The code you provided doesn't match the latest version, which is causing errors when I try to use it.
Hi, I'm using 1.9.6
Thank you very much, I've resolved the error.
Going to be implemented in future releases.