21 — Hybrid computing using a neural network with dynamic external memory

Graves et al. (doi:10.1038/nature20101)

Read on 11 September 2017
#neural-network  #deep-learning  #computation  #memory-access 

This paper introduces the DNC, or differentiable neural computer, which differs from conventional neural networks in that it possesses an external memory, akin to a key-value store, that acts as “long-term memory” the network can access much as a CPU accesses RAM. The authors demonstrate that this addition to the architecture makes it possible to structure data in a meaningful way on “disk”, which grants the network a more powerful memory for patterns.
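To make the RAM analogy concrete, here is a minimal sketch of one memory access. The shapes and the stand-in interface values are my own, not the authors’; the point is that the “address” is a soft weighting over rows rather than an integer index, so every read and write stays differentiable.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

N, W = 16, 8                      # number of memory rows x word size (arbitrary)
memory = np.zeros((N, W))         # the "RAM": held outside the network weights

# Pretend the controller network produced these for the current timestep.
write_weighting = softmax(np.random.randn(N))   # where to write (soft address)
write_vector    = np.random.randn(W)            # what to write
erase_vector    = np.full(W, 0.5)               # how strongly to overwrite
read_weighting  = softmax(np.random.randn(N))   # where to read

# Write: erase then add, both weighted by the soft address.
memory = memory * (1 - np.outer(write_weighting, erase_vector)) \
                + np.outer(write_weighting, write_vector)

# Read: a convex combination of rows, fed back to the controller next step.
read_vector = read_weighting @ memory
print(read_vector.shape)          # (8,)
```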

This memory can store more information than the network’s weights can hold on their own, meaning relationships can be encoded that otherwise couldn’t be established within the network. I could see applications for this in text generation, which nets are currently pretty bad at, or in image refinement, which currently seems to get “stuck” on complex textures or large scenes, because the context of text or images can change so dramatically in the wild (i.e. outside a controlled setting).

In particular, the memory is a large matrix of real-valued rows; at each step the controller emits read and write weightings, soft distributions over those rows, so every memory access is differentiable and can be trained end to end with the rest of the network.
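As a rough illustration of the content-based lookup the paper describes (the temporal-link and allocation mechanisms are omitted, and the dimensions here are arbitrary), a key emitted by the controller is compared against every row by cosine similarity and sharpened into a weighting:

```python
import numpy as np

def content_weighting(memory, key, beta):
    """Cosine similarity between the key and each memory row, sharpened by
    the key strength beta, then normalised into a soft weighting."""
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    scores = beta * (memory @ key / norms)
    e = np.exp(scores - scores.max())
    return e / e.sum()

N, W = 16, 8
memory = np.random.randn(N, W)
key = memory[3] + 0.1 * np.random.randn(W)    # a noisy copy of row 3
weights = content_weighting(memory, key, beta=10.0)
print(weights.argmax())                        # ~3: attention peaks on the match
read_vector = weights @ memory                 # differentiable read
```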

The paper mentions hippocampal CA1 and CA3 synapses as a biomimetic model for long short-term memory (LSTM), though I’m not convinced a genuinely neuro-inspired memory model is feasible yet, since neuron-level memory through network connectivity isn’t well enough characterized to draw a neuron-by-neuron map. Still, this work shows promise for beginning to learn how networks might store information in a “less-accessible” place for later reference.

The work was published in Nature, with an accompanying write-up at DeepMind.com.