290 — Playing Atari with Six Neurons

Cuccu, Togelius, & Cudré-Mauroux (1806.01363)

Read on 06 June 2018
#neural-network  #game  #machine-learning  #atari  #videogames  #optimization 

There have been a lot of papers recently describing neural networks that play various basic video games (and a few not-so-basic ones); the defining constraint of these networks was that they had no concept of gameplay beyond the pixels visible on the screen at any given time. This meant the networks responsible for playing the game had to be very large, and they only understood the game score by learning to notice the pixels that typically represented lives remaining or other HUD metrics.

But that’s not really how playing video games works: usually it’s easy enough to tell what’s going on, and while designing a strategy may be difficult, enumerating the possible moves is not.

This paper reduces the required size of an Atari-game-playing neural net from thousands of neurons to only a handful, by providing the network with a representation of the game state rather than making it infer that state from scratch.

The system is simple: a compressor converts the raw pixel input into a compact representation of the game state, and that representation is fed into a tiny network whose only job is to learn a game-winning strategy.
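To make the split concrete, here is a minimal sketch of that compressor-then-tiny-policy pipeline. It is not the paper's actual method: the compressor below is a naive stand-in, the weights are random placeholders (the paper evolves them), and the dimensions (84×84 frames, a 16-entry code, 18 actions) are assumptions for illustration only.

```python
import numpy as np

# Toy sketch: compressor -> compact state -> six-neuron recurrent policy.
# Everything here is a placeholder; only the overall data flow mirrors the idea.

N_CODES = 16        # length of the compressed state vector (assumed)
N_NEURONS = 6       # the "six neurons" forming the recurrent policy
N_ACTIONS = 18      # full Atari action set

rng = np.random.default_rng(0)
dictionary = rng.random((N_CODES, 84 * 84))        # hypothetical code book
W_in = rng.standard_normal((N_NEURONS, N_CODES))   # in the paper these weights
W_rec = rng.standard_normal((N_NEURONS, N_NEURONS))  # would be evolved, not random
W_out = rng.standard_normal((N_ACTIONS, N_NEURONS))

def compress(frame):
    """Map a grayscale 84x84 frame to a short binary code (toy compressor)."""
    flat = frame.reshape(-1) / 255.0
    similarity = dictionary @ flat
    return (similarity > similarity.mean()).astype(float)

def act(code, hidden):
    """Six-neuron recurrent policy: compressed code -> action index."""
    hidden = np.tanh(W_in @ code + W_rec @ hidden)
    return int(np.argmax(W_out @ hidden)), hidden

# Usage: feed one frame per step, carrying the tiny hidden state along.
hidden = np.zeros(N_NEURONS)
frame = rng.integers(0, 256, size=(84, 84))        # stand-in for a real frame
action, hidden = act(compress(frame), hidden)
```

The point of the sketch is the division of labour: all the heavy lifting of "understanding the screen" lives in `compress`, so the policy itself can stay absurdly small.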

This technique is so effective that a six-neuron network is enough to play certain Atari games, and more complicated ones still require only 18 neurons to play with state-of-the-art performance.