
WIP: refactor contrastive learning code with virtual staining code

Open · mattersoflight opened this issue · 6 comments

This issue tracks our progress toward integration of the contrastive learning code with virtual staining code.

Our preprocessing code is currently in good shape and consists of:

  • Fluorescence deskewing/deconvolution: shrimPy
  • Phase deconvolution: recOrder
  • Registration with fluorescence: shrimPy
  • Virtual staining of nuclei and membrane: VisCy
  • Segmentation: VisCy (#108)
  • Tracking: ultrack

We are still improving the tracking to capture cell division and cells near the boundary of the FOV: @tayllatheodoro

Training

Training works well via the PyTorch Lightning CLI and configs. The dataloader also works well with the HCS data format.
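
For context, here is a minimal sketch of the Lightning CLI wiring this refers to. The `ContrastiveModule` and `HCSDataModule` classes are hypothetical stand-ins, not the actual VisCy import paths:

```python
# Minimal sketch of a PyTorch Lightning CLI entry point; the model and
# datamodule classes below are hypothetical placeholders for the VisCy ones.
from lightning.pytorch import LightningDataModule, LightningModule
from lightning.pytorch.cli import LightningCLI


class ContrastiveModule(LightningModule):  # hypothetical placeholder
    ...


class HCSDataModule(LightningDataModule):  # hypothetical placeholder
    ...


if __name__ == "__main__":
    # e.g. `python train.py fit --config contrastive.yml`
    LightningCLI(ContrastiveModule, HCSDataModule)
```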

Pending improvements

Architecture:

  • [ ] #138 @ziw-liu
  • [ ] #120 @ziw-liu
  • [ ] #139 @ziw-liu

Data loader and loss functions:

  • [ ] #136
  • [ ] #123
  • [ ] data loader that pools multiple datasets.

Prediction and evaluation

Prediction and evaluation work well via the PyTorch Lightning CLI and configs.

  • [ ] Analyze embeddings with PCA and UMAP to generate a standard report (see the sketch after this list): #140. @Soorya19Pradeep
  • [ ] Refine the napari visualization tool for annotation of cell states in latent space: https://github.com/czbiohub-sf/napari-iohub/pull/13. @ziw-liu
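
A rough sketch of the kind of embedding report meant in the first item, assuming the embeddings are available as an `(n_cells, n_features)` NumPy array; file names and inputs are illustrative only:

```python
# Sketch: PCA and UMAP projections of cell embeddings saved as a figure.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
import umap  # umap-learn

embeddings = np.load("embeddings.npy")  # placeholder path, (n_cells, n_features)

pca = PCA(n_components=2).fit(embeddings)
pca_proj = pca.transform(embeddings)
umap_proj = umap.UMAP(n_components=2).fit_transform(embeddings)

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].scatter(pca_proj[:, 0], pca_proj[:, 1], s=2)
axes[0].set_title(f"PCA ({pca.explained_variance_ratio_.sum():.0%} variance)")
axes[1].scatter(umap_proj[:, 0], umap_proj[:, 1], s=2)
axes[1].set_title("UMAP")
fig.savefig("embedding_report.png", dpi=200)
```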

In this round, we should make any changes to the code path that can affect the architecture.

mattersoflight · Jul 17, 2024

~~Currently, (Slurm) Cellpose segmentation is implemented in shrimPy, and will be moved to the new bioimage analysis repo.~~

@Soorya19Pradeep my note above is outdated! It's now in https://github.com/mehta-lab/VisCy/pull/108

ziw-liu · Jul 18, 2024

For the napari UI, I think we should first try interacting with the plugin through standardized data files, so that we don't have to maintain our own interface.

ziw-liu · Jul 18, 2024

The napari-clusters-plotter plugin does not implement readers, so it relies on what's available in the napari layer list (features are stored as an attribute of the labels layer).
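
To make "features stored as an attribute of the labels layer" concrete, here is a minimal sketch; the label image and feature values are synthetic, and the `features=` keyword is the napari Labels-layer feature table that napari-clusters-plotter reads:

```python
# Sketch: attach a per-label feature table to a napari Labels layer.
import napari
import numpy as np
import pandas as pd

labels = np.zeros((64, 64), dtype=int)
labels[10:20, 10:20] = 1
labels[30:45, 30:45] = 2

features = pd.DataFrame(
    {"label": [1, 2], "embedding_pc1": [0.3, -1.2], "embedding_pc2": [1.1, 0.4]}
)

viewer = napari.Viewer()
layer = viewer.add_labels(labels, features=features)
# equivalently: layer.features = features
napari.run()
```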

I now think a workable approach is to implement a custom reader in napari-iohub for the images and tracks so that visualization is easier (handling mixed dimensions and scales, etc.). The ultrack plugin does load the extra columns in its output CSVs as layer features, so they can be used by the cluster plotter.

As for clustering, I think dimensionality reduction should be done beforehand on all the cells, instead of on the limited number of cells in each FOV.

ziw-liu · Jul 18, 2024

> The ultrack plugin does load the extra columns in its output CSVs as layer features, so they can be used by the cluster plotter.

That's interesting. Can this work?

  • Write projected embeddings to the same table as the output of ultrack,
  • load the tracks using the ultrack plugin, and
  • use napari-clusters-plotter to visualize labels that match the projected embeddings.

If yes, we don't need to create a new widget (a rough sketch of the first step follows below).
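
A sketch of the first step: project the embeddings for all cells at once and write the projected coordinates into the same table as the ultrack output, so the plugin loads them as layer features. The column names ("umap1", "umap2") and file paths are assumptions, not ultrack's guaranteed schema:

```python
# Sketch: append projected embedding coordinates to the ultrack tracks table.
import numpy as np
import pandas as pd
import umap  # umap-learn

tracks = pd.read_csv("tracks.csv")      # ultrack output (placeholder path)
embeddings = np.load("embeddings.npy")  # assumed: one row per row of `tracks`

proj = umap.UMAP(n_components=2).fit_transform(embeddings)
tracks["umap1"], tracks["umap2"] = proj[:, 0], proj[:, 1]
tracks.to_csv("tracks_with_embeddings.csv", index=False)
```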

@ziw-liu please go ahead and decide on a useful and low-maintenance solution.

mattersoflight · Jul 19, 2024

@ziw-liu @alishbaimran Given our offline discussion, here is the prioritization of features:

  1. Update the data format and data module for efficiency and allow selection of channels to encode,
  2. define positive pairs based on temporal closeness (a minimal sampling sketch follows this list), and
  3. pool different datasets.
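
A minimal sketch of item 2: sample a positive pair as two observations of the same track at nearby timepoints. The `track_id`/`t` column names are assumptions about the tracks table, and the distance threshold is illustrative:

```python
# Sketch: sample an (anchor, positive) pair from the same track within a few frames.
import random
import pandas as pd


def sample_positive_pair(tracks: pd.DataFrame, max_dt: int = 2):
    """Return (anchor, positive) rows from the same track within `max_dt` frames."""
    track_id = random.choice(tracks["track_id"].unique().tolist())
    track = tracks[tracks["track_id"] == track_id].sort_values("t")
    anchor = track.iloc[random.randrange(len(track))]
    # candidate positives: other timepoints of the same track within max_dt frames
    nearby = track[(track["t"] - anchor["t"]).abs().between(1, max_dt)]
    positive = nearby.sample(1).iloc[0] if len(nearby) else anchor
    return anchor, positive
```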

You could partition the refactor into 3 PRs, each of which implements one of the above and is tested with the corresponding training run.

We will train contrastive phenotyping models via the Python scripts and the CLIs that wrap them. We don't have to prioritize integration with the Lightning CLI yet.

mattersoflight · Jul 23, 2024

@ziw-liu and @alishbaimran I think we can bypass the patchification step by chunking the Zarr store into C×Y×X-sized chunks. Data chunked like this can be loaded fast enough on VAST, since we only have to fetch the images within specific Z and T ranges. This will also simplify the processing pipeline and avoid the need to track one more data format.
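
A minimal sketch of that chunking, assuming a TCZYX axis order; the shapes, dtype, and path are illustrative, not the actual dataset layout:

```python
# Sketch: store a TCZYX array with one chunk per (T, Z) slice spanning the
# full C, Y, X extent, so fetching specific T and Z ranges reads whole chunks.
import numpy as np
import zarr

t, c, z, y, x = 10, 2, 16, 256, 256
arr = zarr.open(
    "rechunked.zarr",
    mode="w",
    shape=(t, c, z, y, x),
    chunks=(1, c, 1, y, x),  # full C*Y*X per chunk, single T and Z
    dtype="uint16",
)
arr[0, :, 0] = np.zeros((c, y, x), dtype="uint16")  # example write of one slice
```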

I got this idea while exploring the data we are preparing for release with the paper on mantis. Take a look at: /hpc/projects/comp.micro/mantis/mantis_paper_data_release/figure_1.zarr

mattersoflight · Jul 24, 2024

@mattersoflight I consider the refactoring completed after #153. Feel free to open new issues for specific tasks.

ziw-liu · Oct 18, 2024