103 — High-Precision Automated Reconstruction of Neurons with Flood-filling Networks

Januszewski et al. (10.1101/200675)

Read on 01 December 2017
#segmentation  #EM  #neuroscience  #neural-network  #CNN  #FFN  #connectome 

In order to generate reliable connectomes from a large volume of electron microscopy data, it is important to have robust image segmentation algorithms that can determine the bounds and shapes of each neuron and synapse in the volume. Many segmentation algorithms exist to do this, but they are commonly error-prone; even a small segmentation mistake, such as a single erroneous merge or split of a neurite, can lead to catastrophic failures in the resulting connectome graph.

In this paper, Google researchers demonstrate a convolutional network that is applied recurrently to the image volume, and show a dramatic improvement over state-of-the-art systems. The approach, dubbed “flood-filling networks,” traces error-free neurite paths 1.1 mm long on average, an order of magnitude better than previously explored systems.

Flood-filling networks, or FFNs, take two inputs: the image data itself, and the “predicted object map,” or POM, which stores for each voxel a value in [0, 1] representing the model’s confidence that the voxel belongs to the object currently being segmented.
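Concretely, you can picture the FFN’s input as a two-channel 3D patch. A minimal numpy sketch (the 33³ field-of-view size and the values are illustrative, not taken from the paper’s code):

```python
import numpy as np

# One field-of-view (FoV) worth of input for the FFN: the raw EM data and the
# current POM for the same patch, stacked as two channels.
fov_shape = (33, 33, 33)                                   # illustrative FoV size
em_patch = np.random.rand(*fov_shape).astype(np.float32)   # stand-in EM intensities
pom_patch = np.full(fov_shape, 0.05, dtype=np.float32)     # current POM values

ffn_input = np.stack([em_patch, pom_patch], axis=0)        # shape (2, 33, 33, 33)
```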

First, “seeds” are placed at locations far from detected cell boundaries. For each seed, the POM is initialized to 0.95 at the seed voxel and 0.05 everywhere else. (These soft values are used instead of the binary 0 and 1 to prevent overfitting.)
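A minimal sketch of that initialization (the volume size and seed coordinates are made up):

```python
import numpy as np

# POM initialization as summarized above: a soft "not this object" prior
# everywhere, and a soft "this object" label at the seed voxel.
volume_shape = (250, 250, 250)                       # illustrative volume size
pom = np.full(volume_shape, 0.05, dtype=np.float32)

seed = (120, 80, 200)    # hypothetical seed, placed far from cell boundaries
pom[seed] = 0.95
```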

In each iteration of the FFN, the network evaluates a small field of view, updates the POM within it, and then shifts its field of view toward voxels where the POM has become confident; the population of voxels ‘assigned’ to the segmentation thus grows outward from the seed (hence the “flood-filling” name).
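Roughly, inference looks like a breadth-first traversal driven by the network’s own confidence. Below is a rough Python sketch (my own reconstruction, not the paper’s code): `model` stands in for the trained 3D CNN, bounds checks are omitted, and the movement/threshold policy is simplified relative to the paper.

```python
import numpy as np
from collections import deque

def flood_fill_segment(em, pom, seed, model, fov=(33, 33, 33), t_move=0.9):
    """Sketch of FFN inference, not the authors' implementation.

    `model` maps a two-channel (EM, POM) patch to updated logits for that
    patch. Bounds checks and the exact movement policy are omitted.
    """
    radius = tuple(f // 2 for f in fov)
    queue, visited = deque([seed]), set()
    while queue:
        center = queue.popleft()
        if center in visited:
            continue
        visited.add(center)
        patch = tuple(slice(c - r, c + r + 1) for c, r in zip(center, radius))
        # Two-channel input: raw EM data plus the current POM for this patch.
        logits = model(np.stack([em[patch], pom[patch]], axis=0))
        pom[patch] = 1.0 / (1.0 + np.exp(-logits))   # network rewrites the POM
        # Queue new FoV centers where the POM has become confident, so the
        # segmented region grows outward from the seed.
        for axis in range(3):
            for step in (-radius[axis], radius[axis]):
                nxt = list(center)
                nxt[axis] += step
                if pom[tuple(nxt)] > t_move:
                    queue.append(tuple(nxt))
    return pom
```

In the paper, the field of view moves by fixed steps based on POM values at the faces of the current field of view; the neighbor-offset scheme above is just a stand-in for that policy.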

The FFNs still perform poorly at cutting or alignment artifacts, because these are rare enough that the model cannot learn the correct behavior for such corner cases. They also perform poorly near blood vessels and other large objects. To prevent this from affecting the results, another network was trained to detect cell bodies, neuropil, vessels, and other corner cases; this coarse segmentation was used to mask the regions fed into the FFN.
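One simple way to apply such a mask (the class ids, stand-in data, and the zeroing-out step are all illustrative; the paper’s exact masking mechanics may differ):

```python
import numpy as np

# A separate classifier assigns each voxel a coarse tissue class; only
# neuropil is handed to the FFN. Class ids and random data are stand-ins.
NEUROPIL, CELL_BODY, BLOOD_VESSEL, ARTIFACT = 0, 1, 2, 3

em_volume = np.random.rand(100, 100, 100).astype(np.float32)   # stand-in EM data
tissue_map = np.random.randint(0, 4, size=em_volume.shape)     # stand-in classifier output

ffn_mask = tissue_map == NEUROPIL
masked_em = np.where(ffn_mask, em_volume, 0.0)   # non-neuropil regions zeroed out
```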

Despite their advantages, FFNs are still dramatically more computationally expensive than conventional segmentation methods, which means that, for now, only groups with substantial compute resources can realistically apply this technology.