333 — Learning cellular morphology with neural networks

Schubert et al (10.1101/364034)

Read on 19 July 2018
#neuroscience  #machine-learning  #neural-network  #elektronn  #glia  #neurons  #segmentation  #proofreading  #electron-microscopy  #reconstruction  #connectome  #cell-type  #morphology  #group:MPI  #group:google 

This work from MPI and Google looks at the challenge of (computationally) understanding the morphology of cells in electron microscopy neuro data.

Though other work has looked at understanding the overall structure of a neuron, using data sources such as SWC files from NeuroMorpho, we know that a lot of information is encoded in the fine structure of a cell's membrane: using only a cell's morphology, it is possible to classify it as glial or neuronal, and probably even to determine subtypes within each class.

But encoding 3D data for a neural network to understand is pretty challenging ([#237] [#221]). How do you feed 3D data into a net?

One strategy is to convert the data into an occupancy matrix — but that’s expensive in both compute and storage, and it also loses information such as manifold continuity. Instead, the authors simply rendered the 3D data from multiple perspectives, creating “multiview projections.” These renders, although 2D, provide enough 3D structural information to the network when provided in aggregate. And — almost paradoxically — this technique tends to outperform alternatives (such as passing a mesh to the net), probably because of the resolution loss required to make 3D data net-friendly.
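To make the idea concrete, here is a minimal numpy sketch of rendering a 3D point cloud from several viewpoints into 2D occupancy images. This is not the authors' rendering pipeline (they render actual cell meshes); the view angles, image size, and orthographic rasterization below are all illustrative assumptions.

```python
import numpy as np

def rotation_matrix(angles):
    """Build a rotation matrix from (yaw, pitch) angles in radians."""
    yaw, pitch = angles
    rz = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                   [np.sin(yaw),  np.cos(yaw), 0],
                   [0, 0, 1]])
    ry = np.array([[np.cos(pitch), 0, np.sin(pitch)],
                   [0, 1, 0],
                   [-np.sin(pitch), 0, np.cos(pitch)]])
    return rz @ ry

def project_views(points, views, size=128):
    """Orthographically rasterize a point cloud from several viewpoints.

    points: (N, 3) array of coordinates (e.g. sampled from a cell mesh)
    views:  list of (yaw, pitch) tuples
    Returns an array of shape (len(views), size, size) of binary images.
    """
    # center the fragment and scale it to fit inside the unit sphere,
    # so every rotated coordinate stays within the image bounds
    pts = points - points.mean(axis=0)
    pts = pts / (np.linalg.norm(pts, axis=1).max() + 1e-9)

    images = np.zeros((len(views), size, size), dtype=np.float32)
    for i, angles in enumerate(views):
        rotated = pts @ rotation_matrix(angles).T
        # drop the depth axis: orthographic projection onto the x-y plane
        xy = ((rotated[:, :2] + 1) / 2 * (size - 1)).astype(int)
        images[i, xy[:, 1], xy[:, 0]] = 1.0
    return images

# e.g. four views spaced around the fragment (angles chosen arbitrarily)
views = [(0, 0), (np.pi / 2, 0), (np.pi, 0), (0, np.pi / 3)]
projections = project_views(np.random.randn(5000, 3), views)  # (4, 128, 128)
```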

The main shortcoming of this 2D projection technique is self-occlusion: neuron geometry is complex, and the randomly selected perspective for a 2D projection is very likely to contain overlaps. To mitigate this, the researchers render only small subcomponents of the 3D data in each projection, which reduces (though doesn’t theoretically eliminate) the possibility of overlap.
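A rough sketch of that idea, assuming the cell is available as a point cloud: sample anchor locations and render only the points within a small radius of each anchor. The random anchors and fixed radius here are arbitrary illustrative choices, not the paper's sampling scheme.

```python
import numpy as np

def split_into_fragments(points, n_fragments=20, radius=5.0):
    """Partition a cell's point cloud into small local fragments.

    Anchor points are sampled at random; each fragment contains the points
    within `radius` of its anchor. Rendering each small fragment separately
    keeps the geometry simple enough that self-occlusion is unlikely.
    """
    rng = np.random.default_rng(0)
    n = min(n_fragments, len(points))
    anchors = points[rng.choice(len(points), size=n, replace=False)]
    fragments = []
    for anchor in anchors:
        mask = np.linalg.norm(points - anchor, axis=1) < radius
        fragments.append(points[mask])
    return fragments

# each fragment would then be passed to project_views() from above,
# giving a stack of multiview images per fragment
```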

The result is a “neuron2vec” input to a “Cellular Morphology Network,” or CMN, which takes as input a set of projected images of a neuron fragment and can be used to learn functions of the morphology of a neuron as a whole.
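As a sketch of how a network might consume those projections, here is a toy multiview encoder in PyTorch: a shared 2D CNN applied per view, pooled across views into a single fragment embedding. This is not the authors' CMN architecture; the layer sizes, pooling choice, and embedding dimension are all assumptions.

```python
import torch
import torch.nn as nn

class MultiViewEncoder(nn.Module):
    """Toy multiview CNN: shared 2D encoder per view, max-pooled across views."""

    def __init__(self, embed_dim=32):
        super().__init__()
        self.view_encoder = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, embed_dim)

    def forward(self, x):
        # x: (batch, n_views, H, W) -> encode each view, then pool over views
        b, v, h, w = x.shape
        feats = self.view_encoder(x.reshape(b * v, 1, h, w)).reshape(b, v, -1)
        return self.head(feats.max(dim=1).values)

encoder = MultiViewEncoder()
embedding = encoder(torch.randn(8, 4, 128, 128))  # (8, 32) fragment embeddings
```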

As an example use of neuron2vec and CMNs, the authors first show how to identify merge errors that incorrectly join neurons with their astrocyte neighbors. Because the morphologies of these two cell types differ so dramatically (0.979 F1 in classifying a cell fragment as glial or neuronal), it is feasible to use CMNs to identify candidate merge locations.
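A minimal sketch of how per-fragment glia/neuron probabilities could be turned into merge-error candidates, assuming fragments are ordered along the reconstructed object; the thresholding and flip-detection logic here are my own illustration, not the paper's exact procedure.

```python
import numpy as np

def flag_merge_candidates(fragment_glia_prob, threshold=0.5):
    """Flag locations where the predicted class flips along a reconstructed cell.

    fragment_glia_prob: per-fragment P(glia) from the classifier, ordered along
    the cell. A neuron wrongly merged with an astrocyte should show a run of
    neuron-like fragments followed by a run of glia-like fragments; the
    transition points are candidate merge errors.
    """
    labels = (np.asarray(fragment_glia_prob) > threshold).astype(int)
    return np.flatnonzero(np.diff(labels) != 0)  # boundaries to send to a proofreader

# e.g. probabilities along a merged object: neuron-like, then glia-like
probs = [0.05, 0.10, 0.08, 0.20, 0.85, 0.90, 0.88]
print(flag_merge_candidates(probs))  # -> [3]
```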

Determining neuron type from purely anatomical data (i.e. with no ability to genetically label neurons) is difficult. But because different neuron types can often be identified by their morphology, CMNs are also a very powerful tool for determining neuron type in the absence of any other information.
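One simple way to turn per-fragment predictions into a single cell-type call is to average the classifier outputs over all of a cell's fragments; the type labels and the averaging rule below are illustrative assumptions, not the paper's method.

```python
import numpy as np

CELL_TYPES = ["excitatory", "inhibitory", "astrocyte"]  # illustrative labels only

def assign_cell_type(fragment_logits):
    """Aggregate per-fragment predictions into one call for the whole cell.

    fragment_logits: (n_fragments, n_types) array of classifier outputs.
    Averaging over fragments before the argmax lets the whole cell's
    morphology vote on its type.
    """
    mean_logits = np.asarray(fragment_logits).mean(axis=0)
    return CELL_TYPES[int(np.argmax(mean_logits))]
```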

I could see this being a hugely useful tool for offline validation of neuron segmentation, or perhaps even being incorporated into the initial segmentation pipeline itself.