CaImAn
Bad initialization, controlling component size/shape
Hi everyone
Specifically @j-friedrich: @kushalkolar mentioned you may be able to give some insight into my problem here.
I am processing 2-photon calcium imaging recordings, but my segmentation results don't look good. I have acceptable results from the CaImAn-MATLAB pipeline, but even after trying to match parameters, my Python results are unacceptable. I'm not asking how to make the Python version behave identically to MATLAB; rather, I'd like some help tuning the initialized neurons, specifically their component shape/size.
Raw data
2 um/px @ 17 Hz, [5100 x 448 x 448] (T, x, y)
This is a recording where 2 ROIs are joined; you'll notice a seam artifact down the center of the image.
Mean Image and Movie:
https://github.com/user-attachments/assets/4f8ee820-8db5-49f3-97ae-e4f6108e686c
Parameters
CaImAn parameters (via mesmerize-core):
```python
{'main': {'K': 20, 'bas_nonneg': True, 'border_nan': 'copy', 'decay_time': 0.4,
          'dxy': [2.0, 2.0], 'fr': 17.06802148340725, 'gSig': 7.0,
          'gSig_filt': (3, 3), 'gSiz': (17.0, 17.0), 'is3D': False,
          'max_deviation_rigid': 3, 'max_shifts': [5, 5], 'merge_thr': 0.7,
          'method_init': 'greedy_roi', 'min_SNR': 1.4, 'min_mov': None,
          'nb': 3, 'niter_rig': 1, 'num_frames_split': 50,
          'num_splits_to_process_rig': None, 'overlaps': [4, 4], 'p': 2,
          'pw_rigid': True, 'rf': 18, 'rolling_sum': True, 'rval_thr': 0.8,
          'splits_els': 14, 'splits_rig': 14, 'ssub': 1, 'stride': 4,
          'strides': [32, 32], 'tsub': 1, 'upsample_factor_grid': 4,
          'use_cnn': False, 'use_cuda': False},
 'refit': True}
```
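For what it's worth, here's the back-of-envelope check I use to relate `gSig` (expected neuron half-size in pixels) to the pixel size. The soma diameter below is an assumed typical value, not something measured from my data:

```python
# Back-of-envelope check: expected neuron half-size (gSig) in pixels.
soma_diameter_um = 14.0   # hypothetical typical soma diameter (assumption)
dxy_um = 2.0              # pixel size from the parameters above

radius_px = (soma_diameter_um / dxy_um) / 2  # half-size in pixels
print(radius_px)  # -> 3.5
```

With these assumptions a soma spans roughly 7 px, so I'm unsure whether `gSig=7` / `gSiz=(17, 17)` is appropriate here.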
Results
The problem
There are a few issues with the resulting initialized/refit components above. Many of them I can filter out by eliminating all neurons that lie on the seams where the 2 ROIs were joined. My main issue is my inability to effectively filter neurons by size.
I'm mimicking some work done previously in CaImAn-MATLAB, where the function `classify_components.m` is used to filter away neurons based on their minimum/maximum size:

```matlab
function [rval_space,rval_time,max_pr,sizeA,keep] = classify_components(Y,A,C,b,f,YrA,options)
% perform several tests for component classification:
% i)   correlation test using the function classify_comp_corr
% ii)  calculate the probability that max of each temporal component is
%      due to noise through trace_fit_extreme.m
% iii) filter component based on min/max size
```
I'm having trouble finding similar functionality in the Python `classify_components_ep`.
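In the meantime, this is the kind of post-hoc size filter I have in mind — a minimal sketch, assuming a fitted CNMF object `cnm` whose spatial footprints live in `cnm.estimates.A` (pixels x components); the threshold values here are placeholders I made up:

```python
# Sketch: size-based component filtering, mimicking the min/max size
# test of MATLAB's classify_components.m.
import numpy as np
import scipy.sparse as sp

def filter_by_size(A, min_size_px=20, max_size_px=200, thresh=0.0):
    """Keep components whose spatial footprint has between
    min_size_px and max_size_px pixels above `thresh`."""
    A = sp.csc_matrix(A)
    # number of above-threshold pixels per component
    sizes = np.asarray((A > thresh).sum(axis=0)).ravel()
    keep = (sizes >= min_size_px) & (sizes <= max_size_px)
    return np.where(keep)[0], sizes

# usage (hypothetical thresholds):
# good_idx, sizes = filter_by_size(cnm.estimates.A,
#                                  min_size_px=30, max_size_px=400)
# cnm.estimates.select_components(idx_components=good_idx)
```

Is something like this the intended route in the Python pipeline, or is there a built-in equivalent I'm missing?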
Here's what the MATLAB results look like (left=accepted, right=rejected):
Effectively, I'd like to keep only neurons smaller than some maximum X but bigger than some minimum Y. I can't come even slightly close to matching results between the pipelines, and no combination of patch-size/K values has helped.
Any insights would be greatly appreciated!