31 — A Deep Structured Learning Approach Towards Automating Connectome Reconstruction from 3D Electron Micrographs

Funke et al. (arXiv:1709.02974)

Read on 21 September 2017
#segmentation  #EM  #neuroscience  #computer-vision  #deep-learning  #connectome  #group:janelia 

This is a really exciting paper because it’s ① really exciting, and ② written by friends!

This paper demonstrates a new deep-learning algorithm for segmenting neurons in electron-microscopy data. Its results dramatically improve upon the current state of the art on several existing datasets.

The algorithm works by predicting affinities between neighboring voxels, using a 3D extension of the (usually 2D) U-net architecture. Thresholding these affinities yields an oversegmentation. The authors also contribute a novel agglomeration algorithm that merges these oversegmented fragments while better preserving the topology of the resulting segmentation.
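(A quick sketch to fix the idea in my own head, not the authors’ code: the general affinities → fragments → agglomeration flavour in plain numpy. The thresholds and the mean-affinity merge score here are mine; the paper’s actual agglomeration uses a more careful merge scoring.)

```python
import numpy as np
from scipy import ndimage
from collections import defaultdict

def oversegment(affs, threshold=0.9):
    """affs: (3, z, y, x) predicted affinities in [0, 1] to the previous voxel
    along z, y, x. Threshold conservatively and take connected components to
    get an oversegmentation (fragments). Threshold value is made up."""
    foreground = (affs > threshold).any(axis=0)
    fragments, _ = ndimage.label(foreground)
    return fragments

def agglomerate(fragments, affs, stop_below=0.3):
    """Score each pair of touching fragments by the mean affinity across their
    shared boundary, then greedily merge pairs from best to worst until the
    score drops below `stop_below`. A simplification: scores are not updated
    after merges, unlike a proper hierarchical agglomeration."""
    sums, counts = defaultdict(float), defaultdict(int)
    for axis in range(3):
        shifted = np.roll(fragments, 1, axis=axis)
        valid = (fragments > 0) & (shifted > 0) & (fragments != shifted)
        # drop the wrap-around face introduced by np.roll
        valid[tuple(slice(0, 1) if d == axis else slice(None) for d in range(3))] = False
        a, b = fragments[valid], shifted[valid]
        keys = np.stack([np.minimum(a, b), np.maximum(a, b)], axis=1)
        for (u, v), aff in zip(map(tuple, keys), affs[axis][valid]):
            sums[(u, v)] += aff
            counts[(u, v)] += 1

    # union-find bookkeeping for the merges
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    edges = sorted(sums, key=lambda e: sums[e] / counts[e], reverse=True)
    for u, v in edges:
        if sums[(u, v)] / counts[(u, v)] < stop_below:
            break
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv

    # relabel the fragment volume with the merged ids
    lookup = np.arange(fragments.max() + 1)
    for frag_id in np.unique(fragments):
        if frag_id > 0:
            lookup[frag_id] = find(int(frag_id))
    return lookup[fragments]
```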

This new method requires “[almost] no tuning to operate on datasets of different characteristics,” meaning that essentially the same pipeline can be trained and run on very different types of data. The only per-dataset change is to the “locality” of the U-net’s receptive field.
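(Back-of-the-envelope on what “locality” means here, mine and not the paper’s: the per-axis receptive field of a U-net is mostly set by how much you downsample along each axis, so for anisotropic serial-section data you downsample less along z. The kernel sizes and downsample factors below are hypothetical.)

```python
def unet_receptive_field(downsample_factors, kernel=3, convs_per_level=2):
    """Rough per-axis receptive field of a U-net's contracting path: each conv
    adds (kernel - 1) voxels at the current step size, and each downsampling
    level multiplies the step. Ignores the expanding path, so treat it as a
    lower bound / illustration only."""
    ndim = len(downsample_factors[0])
    rf = [1] * ndim
    step = [1] * ndim
    for level_factors in list(downsample_factors) + [(1,) * ndim]:
        for _ in range(convs_per_level):
            rf = [r + (kernel - 1) * s for r, s in zip(rf, step)]
        step = [s * f for s, f in zip(step, level_factors)]
    return rf

# Downsampling less along z keeps the field of view closer to isotropic in
# physical units for anisotropic voxels (e.g. ~40 x 4 x 4 nm section data).
print(unet_receptive_field([(1, 3, 3), (1, 3, 3), (3, 3, 3)]))  # [25, 161, 161]
```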

Though this segmentation algorithm dramatically outperforms the competition, it is not yet easily parallelized, because the agglomeration requires image context. I could imagine this being solved fairly easily by running overlapping blocks with enough “seam” to facilitate merges; a sketch of what I mean follows below. The algorithm’s runtime increases linearly with data size, so it is feasible to run larger blocks on higher-throughput compute resources and merge them with smaller blocks (the seams) run on less powerful resources.
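(This is my speculation rather than anything in the paper: segment overlapping blocks independently, then merge any pair of labels that share enough voxels in the overlap. The hypothetical `segment` in the comments stands in for the whole affinity-prediction + agglomeration pipeline, and the block sizes and thresholds are made up.)

```python
import numpy as np

def seam_merges(seg_a, seg_b, min_overlap=100):
    """seg_a, seg_b: the two blocks' label volumes cropped to their shared
    'seam' (overlap) region. Returns (label_a, label_b) pairs that cover
    enough of the same voxels to be merged in a later global pass."""
    a, b = seg_a.ravel(), seg_b.ravel()
    valid = (a > 0) & (b > 0)  # ignore background in either block
    pairs, counts = np.unique(np.stack([a[valid], b[valid]]),
                              axis=1, return_counts=True)
    return [tuple(p) for p, c in zip(pairs.T, counts) if c >= min_overlap]

# e.g. two blocks processed independently with a 64-voxel overlap along x
# (`segment` is the hypothetical per-block pipeline):
#   block_a = segment(volume[:, :, :1064])
#   block_b = segment(volume[:, :, 1000:])
#   merges = seam_merges(block_a[:, :, 1000:1064], block_b[:, :, :64])
```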

This is really exciting news for connectomics, and I’m looking forward to getting my hands on their code as soon as it’s available so I can start playing around.