227 — AxonDeepSeg: automatic axon and myelin segmentation from microscopy data using convolutional neural networks

Zaimi & Wabartha et al (1711.01004)

Read on 04 April 2018
#neuroscience  #electron-microscopy  #TEM  #SEM  #spinal-cord  #splenium  #corpus-callosum  #U-net  #macaque  #rat  #imagery  #deep-learning  #connectome  #neural-network  #myelin  #segmentation 

As I’ve discussed before, segmentation is a serious challenge when converting electron microscopy imagery into a connectome. This research looks at the challenge of segmenting large axons and myelin in zoomed-out (micrometer-scale) imagery, and applies a deep neural net to the problem in order to better understand properties such as axon density, the directionality of myelinated neural processes, and more.

The net is based on the U-Net architecture made famous by Ronneberger et al., but uses the classic 2D architecture rather than the 3D adaptations some researchers have used recently. This probably means the segmentation is less consistent between z-slices, but because myelin and large processes are so easy to segment, this is likely a non-issue.
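For orientation, here is a minimal 2D U-Net sketch in PyTorch. This is my own illustration, not the authors’ implementation (AxonDeepSeg is written in TensorFlow with a deeper network); the depth, channel counts, and patch size are placeholders, but the three output classes follow the paper’s setup of background, myelin, and axon.

```python
# Minimal 2D U-Net sketch (illustrative only; not the AxonDeepSeg code).
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, as in the original U-Net.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class UNet2D(nn.Module):
    def __init__(self, n_classes=3, base=16):
        super().__init__()
        self.enc1 = conv_block(1, base)
        self.enc2 = conv_block(base, base * 2)
        self.enc3 = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, n_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                      # full resolution
        e2 = self.enc2(self.pool(e1))          # 1/2 resolution
        e3 = self.enc3(self.pool(e2))          # 1/4 resolution (bottleneck)
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)                   # per-pixel class logits


# Each 2D patch (or z-slice) is segmented independently of its neighbours:
model = UNet2D()
patch = torch.randn(1, 1, 256, 256)            # one grayscale EM patch
labels = model(patch).argmax(dim=1)            # 0=background, 1=myelin, 2=axon
```

Because every slice is processed in isolation, nothing ties the predictions together along z, which is where the consistency concern above comes from.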

This technique was applied to both SEM and TEM data, and performs well on brain tissue as well as spinal cord. I imagine this work could be very helpful for guiding further analysis of subsets of larger images: whereas this method can run on lower-resolution scans covering an entire mouse brain, that obviously isn’t currently possible with nanometer-scale scans. But a nanometer-scale dataset, subsampled and segmented using this approach, could be enriched with the resulting axon/myelin labels (especially since many state-of-the-art segmentation methods perform poorly around larger structures and myelin).
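A rough sketch of what that enrichment workflow might look like, under my own assumptions (reusing the hypothetical UNet2D above, with an arbitrary downsampling factor); the point is simply that labels computed at coarse resolution can be mapped back onto the full-resolution grid:

```python
# Hypothetical workflow: label a high-resolution EM tile by segmenting a
# downsampled copy, then upsampling the coarse labels back to the full grid.
import torch
import torch.nn.functional as F


def coarse_labels_for_tile(tile, model, factor=8):
    """tile: (1, 1, H, W) high-resolution grayscale patch.
    Returns per-pixel class labels at the original resolution,
    computed from a copy downsampled by `factor`."""
    small = F.interpolate(tile, scale_factor=1 / factor,
                          mode="bilinear", align_corners=False)
    with torch.no_grad():
        logits = model(small)                          # (1, 3, H/f, W/f)
    # Nearest-neighbour upsampling keeps the labels categorical.
    labels = logits.argmax(dim=1, keepdim=True).float()
    labels = F.interpolate(labels, size=tile.shape[-2:], mode="nearest")
    return labels.long().squeeze(1)                    # (1, H, W)


tile = torch.randn(1, 1, 2048, 2048)                   # nanometer-scale tile
mask = coarse_labels_for_tile(tile, UNet2D())          # coarse myelin/axon mask
```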

The AxonDeepSeg code is available on GitHub.