
architectures and nomenclature

mattersoflight opened this issue on Aug 30 '23 · 5 comments

Now that the 2D and 2.5D UNets from our 2020 paper are implemented in PyTorch, we are exploring the space of architectures along two axes: (a) input-output tensor dimensions, and (b) SOTA convolutional layers, particularly ones inspired by ConvNeXt.

At this point, the 2.1D network combines both. It would be useful to have distinct nomenclature and models so that these two innovations can be compared independently.

I suggest:

  • 2D, 2.5D, 2.1D, 3D architectures use classical convolutional layers and activations.
  • Architectures that use ConvNeXt design principles can use the 2NeX, 2.5NeX, 2.1NeX, ... nomenclature.

mattersoflight · Aug 30 '23 12:08

On this note, since we now have many more models, the old dictionary-based initialization scheme is becoming restrictive. I have been thinking about using dependency injection with Lightning configs to initialize (and type-check!) each underlying model component.
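
A minimal sketch of the pattern, assuming Lightning 2.x and hypothetical class names (not the current VisCy API): components are passed in as typed constructor arguments, so LightningCLI/jsonargparse can instantiate and type-check them from a config file instead of a nested dictionary.

import torch
from torch import nn
from lightning.pytorch import LightningModule


class TranslationModule(LightningModule):
    """Hypothetical module whose components are injected, not built from a dict."""

    def __init__(self, encoder: nn.Module, decoder: nn.Module) -> None:
        super().__init__()
        # Typed arguments let LightningCLI build and validate each component
        # from a config file (class_path/init_args) instead of a plain dict.
        self.encoder = encoder
        self.decoder = decoder

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


# Components can be swapped without touching the module itself:
model = TranslationModule(
    encoder=nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.GELU()),
    decoder=nn.Conv3d(16, 3, 3, padding=1),
)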

ziw-liu · Aug 30 '23 16:08

We also potentially need better names to distinguish between the following two variants (see the toy sketch after the list):

  • 3D stem -> 2D encoder -> 2D decoder -> 2D head (currently called 2.1D)
  • 3D stem -> 2D encoder -> 2D decoder -> 3D head (currently called 2.2D)
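
For concreteness, a toy sketch of the difference (arbitrary layer choices, not the VisCy blocks, and the shared 2D encoder/decoder omitted): both variants use a 3D stem that absorbs the stack depth, and they differ only in whether the head re-expands to a 3D stack.

import torch
from torch import nn

x = torch.rand(1, 1, 5, 64, 64)  # (B, C, Z, Y, X) input stack

# 3D stem collapses the 5-slice stack into 2D feature maps.
stem = nn.Conv3d(1, 16, kernel_size=(5, 3, 3), padding=(0, 1, 1))
features = stem(x).squeeze(2)  # (1, 16, 64, 64)

head_2d = nn.Conv2d(16, 3, 1)                   # "2.1D": 2D output
head_3d = nn.ConvTranspose3d(16, 3, (5, 1, 1))  # "2.2D": 3D output

print(head_2d(features).shape)               # torch.Size([1, 3, 64, 64])
print(head_3d(features.unsqueeze(2)).shape)  # torch.Size([1, 3, 5, 64, 64])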

ziw-liu · Aug 30 '23 16:08

@ziw-liu @edyoshikun let's use the following names:

  • 2.5D UNet: 3D input -> 3D encoder -> 2D decoder -> 2D output
  • 3D UNet: 3D input -> 3D encoder -> 3D decoder -> 3D output
  • 2.5D LUNet: 3D input -> 2D encoder -> 2D decoder -> 2D output
  • 3D LUNet: 3D input -> 2D encoder -> 2D decoder -> 3D output
  • 2.5D UNeXt: 2.5D, but with ConvNeXt layers
  • 3D UNeXt: 3D, but with ConvNeXt layers

mattersoflight · Aug 30 '23 21:08

@ziw-liu

I have been thinking about using dependency injection with Lightning configs to initialize (and type-check!) each underlying model component.

The model design and the config file both become modular with dependency injection! Thanks for pointing out this pattern.

Please think through how sensible defaults for the modules can be set, such that succinct model-construction calls (example below) still work.

# %% 2.5D UNet
from viscy.light.engine import VSUNet  # import path may differ across VisCy versions

model = VSUNet(
    architecture="2.5D",
    model_config={
        "in_channels": 1,
        "out_channels": 3,
        "in_stack_depth": 9,
    },
)
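
One possible direction, sketched below with hypothetical defaults (this is not the actual VSUNet signature): injected components default to None and fall back to stand-in blocks, so the succinct call above keeps working while explicit injection remains available.

from typing import Optional

from torch import nn


class VSUNetSketch(nn.Module):
    """Hypothetical illustration only; not the real VSUNet."""

    def __init__(
        self,
        encoder: Optional[nn.Module] = None,
        decoder: Optional[nn.Module] = None,
        in_channels: int = 1,
        out_channels: int = 3,
    ) -> None:
        super().__init__()
        # Defaults keep terse construction working; passing modules overrides them.
        self.encoder = encoder or nn.Conv3d(in_channels, 16, 3, padding=1)
        self.decoder = decoder or nn.Conv3d(16, out_channels, 3, padding=1)

    def forward(self, x):
        return self.decoder(self.encoder(x))


# Succinct call still works:
default_model = VSUNetSketch(in_channels=1, out_channels=3)
# Full dependency injection is still possible:
custom_model = VSUNetSketch(encoder=nn.Conv3d(1, 32, 3, padding=1),
                            decoder=nn.Conv3d(32, 3, 3, padding=1))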

If your thought experiment is successful, let's start using this pattern to write new models (3D LUNet) and to refactor recent models (2.5D UNeXt, 3D UNeXt).

mattersoflight · Aug 31 '23 13:08

2.5D UNeXt: 2.5D, but with ConvNeXt layers
3D UNeXt: 3D, but with ConvNeXt layers

Can we call these nD LUNeXt for consistency?

ziw-liu · Aug 31 '23 18:08

#84

ziw-liu · Jun 28 '24 16:06