Median and maximum width
Hi @jni,
thanks for building Skan; it works very well. I have a question: is it possible to extract the median and maximum width from the medial axis of the skeleton?
Thanks in advance
Hello @snowformatics and thanks for the kind words!
Do you mean the width from the original, pre-skeletonisation binary image?
For the maximum, it's definitely doable: you can use a NumPy ufunc reduceat to compute the maximum value on each branch, i.e. np.maximum.reduceat instead of np.add.reduceat, in code similar to path_means (source here: https://jni.github.io/skan/_modules/skan/csr.html#Skeleton.path_means). But how to determine the width? You can use scipy.ndimage.distance_transform_edt (https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.distance_transform_edt.html) to create a distance map from the binary image edges, then multiply that by the binary skeleton and use the result as the input skeleton_image to skan.Skeleton. Then you can compute the mean width and, as mentioned, the maximum width.
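To make that concrete, here is a minimal sketch of the pipeline. The binary array below is a tiny synthetic placeholder for your real mask, and for the per-branch maximum it loops over the documented path_with_data method rather than modifying the reduceat code inside path_means, so it doesn't depend on Skeleton internals:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import skeletonize
from skan import Skeleton

# Tiny synthetic stand-in for the real binary image: a thick horizontal bar.
binary = np.zeros((20, 40), dtype=bool)
binary[8:13, 5:35] = True

# Distance of every foreground pixel to the nearest background pixel.
distance = ndi.distance_transform_edt(binary)

# Keep the distance values only on skeleton pixels: the nonzero pixels still
# define the skeleton, but each pixel now carries a local width estimate.
skeleton = skeletonize(binary)
skel = Skeleton(distance * skeleton)

# Mean width per branch (mean pixel value along each path).
mean_widths = skel.path_means()

# Maximum width per branch, via the public path_with_data method
# (path_with_data(i) returns the path and its pixel values).
max_widths = np.array(
    [skel.path_with_data(i)[1].max() for i in range(skel.n_paths)]
)
```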
For the median, it'll be a bit trickier. You can do it reasonably easily in pure Python by getting the path data from the same skeleton as above: use [np.median(skeleton.path_with_data(i)[1]) for i in range(skeleton.n_paths)] (see the docs for path_with_data).
All of these things will return the values in the same order as summarize, so you can easily add them as columns to your final output table.
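Continuing the sketch above, the per-branch statistics can be attached to the summarize output as extra columns; the column names max-width and median-width are just placeholders:

```python
import numpy as np
from skan import summarize

# `skel` is the Skeleton built from the distance map in the sketch above.
table = summarize(skel)

# These lists come out in the same path order as the rows of the table,
# so they can be attached directly as new columns.
table['max-width'] = [
    skel.path_with_data(i)[1].max() for i in range(skel.n_paths)
]
table['median-width'] = [
    np.median(skel.path_with_data(i)[1]) for i in range(skel.n_paths)
]
```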
If you would contribute these things as documentation examples or new methods on the Skeleton object, that would be amazing! But in the meantime I hope they get you unstuck!
Hi Juan,
Thanks a lot for your answer, which is very helpful to me. If I implement one of these features, I would love to contribute it to Skan. I will contact you again. Cheers, Stefanie
@jni I saw the "big image sprint at SciPy" call today. Would this be a good option to contribute during the sprint? I have to admit it would be my first contribution, and I cannot promise 100% to join because of my little son (who is often ill).
Hi @snowformatics! Absolutely this would be a great sprint contribution! However, you're in Europe, aren't you? I think for the live sprint the hours for SciPy Japan (9am-5pm Tokyo time) will be challenging! You can of course do a "relay" in which you overlap as much as you can, and I will do my best to support you! But in any case this is open source! No one expects a specific time commitment from you! 😊 Just come for as much or as little as you like/can, and we'll do our best to help make it a good and fun contribution!
@jni Sounds great! The timing will be challenging, but I could be available from 3-5pm Tokyo time (7-9am in Germany) on both days. Let's see!