
[QUESTION] Comparison to CC-Metrics

Open ogencoglu opened this issue 10 months ago • 10 comments

Are there any docs regarding the differences, similarities, and feature comparison to CC-Metrics?

https://github.com/alexanderjaus/CC-Metrics

ogencoglu avatar Jun 04 '25 07:06 ogencoglu

Hey, thanks for the great question. I have not looked over their code in detail, but as far as I can tell they do not support multi-class segmentation, whereas they do support tensors (we only support CPU-based calculations, at least for now). The biggest difference is that they can only compute region-wise (CC) metrics, while we can compute global and instance-wise metrics. We cannot compute region-wise (CC) metrics yet, but we are actually working on this and should be done within the next month or so. For a more detailed answer, I would need to dig into their code more.
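To make the global vs. instance-wise distinction concrete, here is a small sketch (plain NumPy/SciPy, not panoptica's API, toy numbers of my own): a global Dice score can look acceptable even when one ground-truth lesion is missed entirely, which instance-wise scoring exposes.

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Plain global Dice between two boolean masks."""
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy ground truth with two separate lesions; the prediction finds only one.
gt = np.zeros(12, dtype=bool)
gt[1:3] = True    # lesion A (2 voxels)
gt[6:9] = True    # lesion B (3 voxels)
pred = np.zeros_like(gt)
pred[1:3] = True  # lesion A found, lesion B missed entirely

print(f"global Dice: {dice(pred, gt):.2f}")  # 0.57 -- looks half decent

# Instance-wise: label GT connected components, score each lesion separately.
labels, n = ndimage.label(gt)
for i in range(1, n + 1):
    inst = labels == i
    print(f"lesion {i} Dice: {dice(np.logical_and(pred, inst), inst):.2f}")
# lesion 1: 1.00, lesion 2: 0.00 -- the complete miss is no longer hidden
```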

Hendrik-code avatar Jun 04 '25 19:06 Hendrik-code

Thanks for the swift overview. Sounds complementary in some sense, and possibly an opportunity for collaboration (or at least inspiration). I asked the same question in their repo, so let's see how they see it.

ogencoglu avatar Jun 04 '25 20:06 ogencoglu

they support tensors

I think this is quite important. One could use these metrics for checkpointing and early stopping (or possibly even direct optimization, e.g., via autograd tools) without having to leave the "tensor world".
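As a sketch of the checkpointing/early-stopping use case (plain Python; the per-epoch scores are made up, standing in for a metric that would normally be computed from GPU tensors after a device-to-host copy):

```python
class EarlyStopping:
    """Stop training when a validation metric stops improving.

    With a tensor-native metric library this check could stay on-device;
    with a NumPy-only one, each call forces a GPU->CPU copy first.
    """
    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("-inf")
        self.bad_epochs = 0

    def step(self, metric):
        """Record one epoch's metric; return True when training should stop."""
        if metric > self.best + self.min_delta:
            self.best = metric
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

# Simulated per-epoch validation scores (e.g. panoptic quality).
stopper = EarlyStopping(patience=2)
for epoch, pq in enumerate([0.40, 0.55, 0.60, 0.59, 0.58]):
    if stopper.step(pq):
        print(f"stopping at epoch {epoch}, best PQ = {stopper.best:.2f}")
        break
```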

ogencoglu avatar Jun 04 '25 20:06 ogencoglu

We also support hierarchical segmentation problems (such as BraTS) and are currently implementing part-wise panoptic segmentation. Another difference is probably panoptica's modular approach: You can easily configure different combinations of instance approximators, matchers, and evaluators depending on your preferences and use case.
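To illustrate the modular idea (the class and function names below are made up for illustration, not panoptica's actual components): instances are first approximated from the raw masks, then matched one-to-one, and only then evaluated, so each stage can be swapped independently.

```python
class Pipeline:
    """Hypothetical modular pipeline: approximator -> matcher -> evaluator."""
    def __init__(self, approximator, matcher, evaluator):
        self.approximator = approximator
        self.matcher = matcher
        self.evaluator = evaluator

    def run(self, pred, gt):
        pairs = self.matcher(self.approximator(pred), self.approximator(gt))
        return self.evaluator(pairs)

def split_runs(mask):
    """Toy instance approximator: maximal runs of truthy values in a 1-D mask."""
    instances, current = [], set()
    for i, v in enumerate(mask):
        if v:
            current.add(i)
        elif current:
            instances.append(current)
            current = set()
    if current:
        instances.append(current)
    return instances

def iou_matcher(preds, gts, thresh=0.5):
    """Greedy one-to-one matching of predicted to GT instances by IoU."""
    pairs, used = [], set()
    for g in gts:
        best, best_iou = None, thresh
        for j, p in enumerate(preds):
            if j not in used and len(p & g) / len(p | g) > best_iou:
                best, best_iou = j, len(p & g) / len(p | g)
        if best is not None:
            used.add(best)
            pairs.append((preds[best], g))
    return pairs

def mean_iou(pairs):
    """Toy evaluator: mean IoU over matched pairs."""
    return sum(len(p & g) / len(p | g) for p, g in pairs) / len(pairs) if pairs else 0.0

pipe = Pipeline(split_runs, iou_matcher, mean_iou)
gt   = [0, 1, 1, 0, 0, 1, 1, 1, 0]
pred = [0, 1, 1, 0, 0, 1, 1, 0, 0]
print(f"mean matched IoU: {pipe.run(pred, gt):.2f}")  # 0.83
```

Swapping, say, the greedy matcher for a Hungarian one would touch only that one argument, which is the point of the modular design.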

neuronflow avatar Jun 04 '25 21:06 neuronflow

they support tensors

I think this is quite important. One could use these metrics for checkpointing and early stopping (or possibly even direct optimization, e.g., via autograd tools) without having to leave the "tensor world".

Do you mind coming up with a plan to incorporate this nicely? Then we can do a PR for it.

Hendrik-code avatar Jun 11 '25 07:06 Hendrik-code

Panoptic Quality is inherently non-differentiable; how would you go about doing this?

aymuos15 avatar Jun 11 '25 08:06 aymuos15

I believe one needs to distinguish between a metric and a loss here. We could compute all metrics by casting tensors to NumPy arrays and keep most of the codebase intact (without exploiting the benefits of tensor computing).

Direct optimization is another matter, though, and depends, as @aymuos15 points out, on the differentiability of the metrics. Many of the distance-based metrics we employ are non-differentiable.
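A quick numerical illustration of the metric-versus-loss point (NumPy only, toy numbers of my own): the hard Dice metric thresholds the probabilities, so a tiny perturbation leaves it unchanged (zero gradient almost everywhere), while a soft Dice surrogate responds smoothly and can serve as a loss.

```python
import numpy as np

gt = np.array([1.0, 1.0, 0.0, 0.0])    # ground-truth labels
prob = np.array([0.9, 0.6, 0.4, 0.1])  # network probabilities

def soft_dice(p, g):
    """Differentiable surrogate: operates on the raw probabilities."""
    return 2 * (p * g).sum() / (p.sum() + g.sum())

def hard_dice(p, g, t=0.5):
    """Metric: thresholding makes it piecewise constant, hence non-differentiable."""
    b = (p > t).astype(float)
    return 2 * (b * g).sum() / (b.sum() + g.sum())

print(hard_dice(prob, gt))         # 1.0
# A tiny perturbation leaves the hard metric unchanged (zero gradient)...
print(hard_dice(prob + 1e-4, gt))  # still 1.0
# ...but moves the soft surrogate, which is what a loss needs.
print(soft_dice(prob, gt), soft_dice(prob + 1e-4, gt))
```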

neuronflow avatar Jun 11 '25 09:06 neuronflow

In the context of the repo (which is for metrics, not losses), I think (after discussing with Tom) that the bigger benefit would be in porting everything to CuPy instead of tensors (PyTorch). That should give the benefit of exploiting the GPU with minimal change.

Then again, I am not really aware of the performance differences between general CuPy and Torch computations. I imagine they are minimal as far as this repo is concerned, and giving users the option of CuPy is definitely the easier (and less time-consuming) route.
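A minimal sketch of what the CuPy option could look like (my assumption, not an existing panoptica feature): because CuPy mirrors the NumPy API, the array module can be selected once at import time and the metric code left untouched, falling back to NumPy when CuPy is not installed.

```python
import numpy as np

try:
    import cupy as xp  # GPU arrays with a NumPy-compatible API
    on_gpu = True
except ImportError:
    xp = np            # CPU fallback; the metric code below is identical either way
    on_gpu = False

def dice(pred, gt):
    """Same source works for NumPy and CuPy arrays via the shared API."""
    inter = xp.logical_and(pred, gt).sum()
    return float(2 * inter / (pred.sum() + gt.sum()))

pred = xp.asarray([1, 1, 0, 0], dtype=bool)
gt = xp.asarray([1, 0, 1, 0], dtype=bool)
print(f"Dice = {dice(pred, gt):.2f} (GPU: {on_gpu})")
```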

Regarding direct optimisation, my assumption is that this repo is not the place to look into that?

aymuos15 avatar Jun 11 '25 09:06 aymuos15

Regarding direct optimisation, my assumption is that this repo is not the one to look into that?

I believe we are open to exploring it. As long as it is not in conflict with our core mission, I don't see why not.

I (after discussing with Tom) think the bigger benefit would be on porting everything to cupy instead of tensors (pytorch).

Do we actually have use cases where our (from my naive pov already quick) computation times become a concern/roadblock?

So far, the aim was not to produce fast code, but code that is not slow, which means we also prioritize readability, maintainability, and extensibility.

neuronflow avatar Jun 11 '25 09:06 neuronflow

I believe we are open to exploring. As long as it is not in conflict with our core mission I don't see why not.

Right. I guess that opens a totally different conversation then.

Do we actually have use cases where our (from my naive pov already quick) computation times become a concern/roadblock?

Computing the connected components, even with cc3d, is the main computational bottleneck. Just running that step on the GPU is much faster. Ref: https://github.com/aymuos15/GPU-Connected-Components/blob/master/connected_components_comparison_line_graph.png
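For context, the step in question is the component-labeling pass itself; a SciPy sketch of it on a synthetic volume (cc3d plays the same role on CPU, faster, and a GPU implementation removes the bottleneck):

```python
import numpy as np
from scipy import ndimage

# Synthetic 3-D volume with scattered foreground voxels.
rng = np.random.default_rng(0)
vol = rng.random((64, 64, 64)) > 0.7

# This labeling pass is the step that dominates instance-metric runtime on CPU;
# swapping it for a GPU connected-components kernel is the proposed speed-up.
labels, n_components = ndimage.label(vol)
print(f"found {n_components} components")
```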

So far, the aim was not to produce fast code, but code that is not slow, which means we also prioritize readability, maintainability, and extendability.

I am definitely in favour of this. However, in my opinion, adding the option of CuPy with minimal change would not affect those priorities. I am not sure how minimal the change will actually be, and I may be wrong; I can try looking into it at a later date.

aymuos15 avatar Jun 11 '25 09:06 aymuos15