309 — World Models
Read on 25 June 2018

This research focuses on an agent model based upon one interpretation of how the human brain works: a Vision Model (V) forwards a compressed summary of what it sees to a Memory Model (M), which integrates it with previous vision-input “memories” in order to provide a final Controller Model (C) with the information required to select a good action to take in the world.
In implementation, the V model is a variational autoencoder that compresses each frame into a small latent vector (z), which greatly reduces the amount of information (and processing) that has to be passed along to the M model. The M model, a recurrent network, is in turn tasked with predicting a probability distribution over the V model’s output (the z-vector) at the next time step.
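To make that data flow concrete, here is a minimal PyTorch sketch of the three components. It is not the paper’s implementation: the layer sizes, the flat (non-convolutional) encoder, and names like `VisionVAE`, `MemoryRNN`, and `Controller` are illustrative assumptions; the paper’s actual V is a convolutional VAE and its M is a mixture-density RNN.

```python
import torch
import torch.nn as nn

Z_DIM, H_DIM, A_DIM, N_MIX = 32, 256, 3, 5   # illustrative sizes, not the paper's exact setup

class VisionVAE(nn.Module):
    """V: compresses an observed frame into a small latent vector z."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 512), nn.ReLU())
        self.mu = nn.Linear(512, Z_DIM)
        self.logvar = nn.Linear(512, Z_DIM)

    def encode(self, frame):
        h = self.enc(frame)
        mu, logvar = self.mu(h), self.logvar(h)
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterised sample

class MemoryRNN(nn.Module):
    """M: a recurrent model that outputs a mixture-of-Gaussians over the next z."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTMCell(Z_DIM + A_DIM, H_DIM)
        self.mdn = nn.Linear(H_DIM, N_MIX * (2 * Z_DIM + 1))  # means, log-sigmas, mixture logits

    def forward(self, z, action, state):
        h, c = self.rnn(torch.cat([z, action], dim=-1), state)
        return self.mdn(h), (h, c)

class Controller(nn.Module):
    """C: a small linear policy that acts on the latent z and M's hidden state h."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(Z_DIM + H_DIM, A_DIM)

    def forward(self, z, h):
        return torch.tanh(self.fc(torch.cat([z, h], dim=-1)))

# One step of the V -> M -> C loop on a fake observation.
V, M, C = VisionVAE(), MemoryRNN(), Controller()
frame = torch.rand(1, 3, 64, 64)
state = (torch.zeros(1, H_DIM), torch.zeros(1, H_DIM))
z = V.encode(frame)                      # V compresses the frame
action = C(z, state[0])                  # C picks an action from [z, h]
mdn_params, state = M(z, action, state)  # M predicts a distribution over the next z
```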
The authors then test this tripartite model in a couple of simulated environments: a Doom “take cover” scenario, in which enemy “monsters” throw fireballs and the agent must dodge them, and the OpenAI Gym car-racing task.
But the most interesting part of this paper is the way in which the authors leverage the fact that the M component is essentially capable of predicting all future V outputs: if the researchers ask the M model to predict the t+1 output of V and then feed that prediction back in as the ersatz output of V, the agent, for lack of a better term, begins playing a game that it has fully dreamt up. The trained agent can now imagine the consequences of its own actions within an environment it merely believes to exist.
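A rough sketch of what that closed loop looks like, reusing the hypothetical `VisionVAE` / `MemoryRNN` / `Controller` classes and constants from the snippet above: after the first real frame, V is never consulted again, and each subsequent z is sampled from M’s own predicted mixture.

```python
import torch
import torch.nn.functional as F

def sample_next_z(mdn_params):
    """Sample z_{t+1} from the mixture of Gaussians that M predicted."""
    logits, mu, log_sigma = torch.split(
        mdn_params, [N_MIX, N_MIX * Z_DIM, N_MIX * Z_DIM], dim=-1)
    mu = mu.view(-1, N_MIX, Z_DIM)
    sigma = log_sigma.view(-1, N_MIX, Z_DIM).exp()
    k = torch.multinomial(F.softmax(logits, dim=-1), 1)      # pick a mixture component
    idx = k.view(-1, 1, 1).expand(-1, 1, Z_DIM)
    eps = torch.randn(mu.size(0), 1, Z_DIM)
    return (mu.gather(1, idx) + sigma.gather(1, idx) * eps).squeeze(1)

def dream_rollout(V, M, C, first_frame, steps=100):
    """Roll the agent forward entirely inside M's imagined environment."""
    state = (torch.zeros(1, H_DIM), torch.zeros(1, H_DIM))
    z = V.encode(first_frame)            # the only real observation used
    for _ in range(steps):
        action = C(z, state[0])          # C acts on the imagined latent state
        mdn_params, state = M(z, action, state)
        z = sample_next_z(mdn_params)    # M's prediction replaces V's output
    return z
```

Sampling from the mixture (rather than taking its mean) is what keeps the dreamed environment stochastic; the paper additionally exposes a temperature parameter to tune how uncertain those dreams are.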
What was most interesting to me about this alternate use of the memory model is that it enabled the agent to ‘cheat’ in its hallucinations: it swims through its latent space in such a way that its actions (C model outputs) prevent it from ever encountering fireballs. Even though this makes the model less useful in reality than in its own dreams, it’s a fascinating byproduct of the way the model learned this latent space. And beyond that, having superpowers in its own ‘dreams’ is also very human.