300 — A Connectome Based Hexagonal Lattice Convolutional Network Model of the Drosophila Visual System

Tschopp, Reiser, & Turaga (1806.04793)

Read on 16 June 2018
#neuroscience  #connectome  #connectomics  #drosophila  #fly  #flyEM  #fruit-fly  #simulation  #hexagonal  #lamina  #medulla  #CNN  #visual-system  #retina  #ommatidia  #caffe  #vision  #group:janelia 

Today’s my 300th paper! Just over 82% done, but it’s a nice round number, so I’m going to celebrate by reading about something really remarkable: using the neural network architecture of the fruit fly visual system in order to build a better machine learning algorithm.

As I’ve written about before, hexagons are a fundamental building block of the brain in just about every species with a complex enough brain to start tiling (#252 #140 #75 #15). In the fruit fly visual system, the first few “layers” of computation are likewise hexagonally tiled. The researchers virtually isolated the first two of those layers, the lamina and medulla, and digitized the synaptic connectome of 700 reconstructed hexagonal columns (roughly the number of ommatidia in a Drosophila eye), “tied together” by medullary neurons to compute higher-level outputs.
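To make the hexagonal tiling concrete, here’s a rough sketch of what a hexagonal-lattice “convolution” step can look like: the same small set of cell-type-to-cell-type weights is applied at every column, over that column and its six neighbours. This is my own illustration, not code from the paper; the axial coordinates, the `hex_conv` name, and the ReLU are all assumptions.

```python
import numpy as np

# Minimal sketch (my own illustration, not the authors' code) of one
# hexagonal "convolution" step: every column applies the same
# cell-type-to-cell-type weights to itself and its six neighbours,
# the way a lamina/medulla column pools from neighbouring columns.
# Columns are indexed by axial hex coordinates (q, r).

HEX_OFFSETS = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def hex_conv(activity, weights):
    """activity: dict mapping (q, r) -> vector of per-cell-type activations.
    weights: array of shape (7, n_output_types, n_input_types), one weight
    matrix per hex offset, shared by every column (i.e. convolutional)."""
    out = {}
    for (q, r) in activity:
        acc = np.zeros(weights.shape[1])
        for k, (dq, dr) in enumerate(HEX_OFFSETS):
            neighbour = activity.get((q + dq, r + dr))
            if neighbour is not None:  # edge columns simply see fewer inputs
                acc += weights[k] @ neighbour
        out[(q, r)] = np.maximum(acc, 0.0)  # placeholder ReLU nonlinearity
    return out

# Toy usage: 3 columns, 2 input cell types, 2 output cell types.
cols = {(0, 0): np.array([1.0, 0.5]),
        (1, 0): np.array([0.2, 0.1]),
        (0, 1): np.array([0.0, 0.3])}
w = np.random.randn(7, 2, 2) * 0.1
print(hex_conv(cols, w))
```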

In this graph connectome, the synaptic multigraph was reduced to a weighted directed graph. Morphologies were replaced with physiology-based synaptic delays that approximated the time offset of a signal propagating to its downstream (or upstream) target.
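As a toy version of that reduction (the cell names, the 10 ms delay, and the data layout below are placeholders of mine, not values from the paper), collapsing parallel synapses into a single weighted, delayed edge looks something like this:

```python
from collections import defaultdict

# Hedged sketch of the reduction described above: collapse a synaptic
# multigraph (many individual synapses between the same pair of cells)
# into one weighted directed edge per cell pair, and attach a single
# physiology-style delay standing in for the morphology that was dropped.

synapses = [                      # one entry per reconstructed synapse
    ("L1_col0", "Mi1_col0"),
    ("L1_col0", "Mi1_col0"),
    ("L1_col0", "Tm3_col0"),
]

edges = defaultdict(lambda: {"weight": 0, "delay_ms": 10.0})  # placeholder delay
for pre, post in synapses:
    edges[(pre, post)]["weight"] += 1    # synapse count becomes the edge weight

for (pre, post), attrs in edges.items():
    print(f"{pre} -> {post}: {attrs}")
```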

This isn’t a great simulation of the visual system, but the point of this paper is that it doesn’t have to be: even though the model’s function isn’t identical to endogenous conditions, the network still appears to “learn” the appropriate responses for certain cells (all T4 cells, for instance, “learned” their expected biological responses), though not for all of them; some ON and OFF responses were flipped, for example.
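One way to picture the “flipped” result is a simple polarity probe; the sketch below is my own illustration with made-up numbers, not the authors’ analysis code.

```python
# Toy polarity check (my own illustration): drive a trained unit with a
# brightness increment ("ON") and a decrement ("OFF") and ask which it
# prefers; a "flipped" unit is one whose preference disagrees with its
# biological counterpart.

def preferred_polarity(response_to_increment, response_to_decrement):
    return "ON" if response_to_increment > response_to_decrement else "OFF"

# e.g. a model unit standing in for an ON-pathway cell type:
learned = preferred_polarity(response_to_increment=0.08, response_to_decrement=0.41)
expected = "ON"
print(f"learned={learned}, expected={expected}, flipped={learned != expected}")
```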

This means that the challenge of generating a functional connectome simulation is far easier than we expected: even though we might not be able to simulate the biology exactly (so learning about the underlying biology will still be a great challenge), the byproducts of this research will be applicable in other machine learning contexts long before we fully understand what’s happening under the hood.