12 — DeepEM3D: approaching human-level performance on 3D anisotropic EM image segmentation

Zeng et al. (doi:10.1093/bioinformatics/btx188)

Read on 02 September 2017
#electron-microscopy  #EM  #connectomics  #neuroscience  #computer-vision  #deep-learning  #image-segmentation 

Image segmentation, grouping pixels by what they semantically represent in a scene, is a huge challenge in neuroscience: microscopy imagery is generally very dense and very complex, two factors that make conventional computer-vision algorithms a poor fit for the field.

This paper breaks the challenge into two stages: first, it computes a neurite boundary map; second, it generates a segmentation from that map which, the authors claim, competes with the state of the art (though they only demonstrate its performance on a relatively small dataset).
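To make the shape of that pipeline concrete, here is a minimal sketch in Python (not the authors' MATLAB implementation). `predict_boundary_map` is a hypothetical stand-in for the DeepEM3D network, crudely faked here with a gradient filter so the sketch runs end to end, and I'm assuming the second stage can be as simple as thresholding the boundary map and labeling connected components:

```python
import numpy as np
from scipy import ndimage

def predict_boundary_map(em_volume):
    """Hypothetical stand-in for the DeepEM3D network. Here it's just a
    normalized gradient-magnitude proxy so the sketch runs; in practice
    this is where the trained 3D CNN would go."""
    grad = ndimage.generic_gradient_magnitude(em_volume.astype(float), ndimage.sobel)
    return grad / (grad.max() + 1e-8)  # per-voxel boundary score in [0, 1]

def segment_from_boundaries(em_volume, boundary_threshold=0.5):
    # Stage 1: per-voxel probability that a voxel lies on a neurite boundary.
    boundary_map = predict_boundary_map(em_volume)

    # Stage 2: binarize the map and label the connected non-boundary
    # regions; each connected component becomes one candidate neurite.
    interior = boundary_map < boundary_threshold
    segmentation, n_segments = ndimage.label(interior)
    return segmentation, n_segments

# Usage on a toy volume:
seg, n = segment_from_boundaries(np.random.rand(64, 64, 64))
```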

If this work scales well, that is, if it can be parallelized across massive datasets, then it could lead to a paradigm shift in EM neuroscience. But I don’t see much evidence of seam-stitching in their code, which is in MATLAB…which leads me to believe that it is optimized for small-scope tasks like the SNEMI3D dataset on which it was benchmarked for this paper.
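For the record, the kind of scaling I’m talking about looks roughly like this: tile the volume into overlapping blocks, run the boundary network on each block, and average predictions in the overlaps so block edges don’t leave seams. A hypothetical sketch, reusing the `predict_boundary_map` stand-in from above:

```python
import numpy as np

def blockwise_boundary_map(volume, block=256, overlap=32):
    """Sketch of seam-free block-wise inference: process overlapping
    blocks and average predictions where they overlap, so adjacent
    blocks agree at their shared voxels instead of leaving seams."""
    out = np.zeros(volume.shape, dtype=float)
    weight = np.zeros(volume.shape, dtype=float)
    step = block - overlap
    for z in range(0, volume.shape[0], step):
        for y in range(0, volume.shape[1], step):
            for x in range(0, volume.shape[2], step):
                # numpy clips slices at the array edge, so partial
                # blocks at the boundary are handled automatically.
                sl = (slice(z, z + block), slice(y, y + block), slice(x, x + block))
                out[sl] += predict_boundary_map(volume[sl])
                weight[sl] += 1.0
    return out / np.maximum(weight, 1.0)  # average overlapping predictions
```

Even this only stitches the boundary map; merging segment label IDs across block faces is the genuinely hard part, and that’s the machinery I don’t see in the release.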

I realize I’m wearing my big-data-elitist hat right now because I work so much with massive EM datasets, so let me take a step back and note that for most neuroimagery studies, this algorithm could be exceptionally useful, provided the developers come up with a way for it to run without a full MATLAB runtime.

And, to be very clear: I am very excited about this approach, which regards neurite boundaries as semantically important (whereas many existing algos treat all pixels equally and dump everything into a massive net).