234 — Discovering Hidden Factors of Variation in Deep Networks
Cheung, Livezey, et al. (arXiv:1412.6583)
Read on 11 April 2018

"what happens when a NN trained on 0/1 emotion labels tries to extrapolate & generate faces with labels from -5...+5. the funniest figure I've ever seen c/o @thisismyhat, @JesseLivezey & co" pic.twitter.com/IwSkAuirLj
— Alexander Huth (@alex_ander) April 10, 2018
One goal of this paper is to demonstrate that deep neural networks encode not only the information required for a classification or categorization task, but also the context of that information in a larger space. For example, a classifier that learns to distinguish ripe from underripe fruit probably also encodes the information needed to judge whether a fruit is inedibly underripe, or perhaps starting to go bad.
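As a quick illustration of that claim (a toy sketch of my own with synthetic data, not an experiment from the paper), we can train a classifier on purely binary labels derived from a hidden continuous variable, then check whether its continuous output still tracks that variable:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# A hidden continuous "ripeness" variable drives five noisy, made-up features.
r = rng.uniform(-2, 2, size=2000)
X = np.column_stack([r + 0.3 * rng.normal(size=r.shape) for _ in range(5)])
y = (r > 0).astype(int)  # the network only ever sees this binary label

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X, y)

# If the net stored only the label, P(ripe) would be a hard 0/1 step;
# instead its graded output tracks the underlying continuum.
p = clf.predict_proba(X)[:, 1]
print("corr(P(ripe), true ripeness):", np.corrcoef(p, r)[0, 1])
```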
To demonstrate this, the authors took several approaches. One produced the image above: they trained a neural network to distinguish between different levels of emotion, then asked it to generate faces with emotion levels far beyond anything it had encountered in the training data. The results are funny, but also telling: the net produces imagery that looks very much like what we'd expect at those extreme levels.
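The paper's actual setup, as I read it, is an autoencoder whose code is split into an observed label unit and unobserved latent units, with a cross-covariance penalty keeping the two decorrelated. The sketch below (PyTorch, made-up layer sizes, training loop omitted) is only meant to show where the extrapolation happens: because the emotion label is a real-valued input to the decoder, nothing stops you from feeding it -5 or +5 at generation time, even though training only ever used 0 and 1.

```python
import torch
import torch.nn as nn

IMG_DIM, LATENT_DIM = 64 * 64, 32  # hypothetical sizes

class CondAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.ReLU(),
                                     nn.Linear(256, LATENT_DIM))
        # The decoder sees the latent code plus one scalar "emotion" unit.
        self.decoder = nn.Sequential(nn.Linear(LATENT_DIM + 1, 256), nn.ReLU(),
                                     nn.Linear(256, IMG_DIM), nn.Sigmoid())

    def forward(self, x, emotion):
        z = self.encoder(x)
        return self.decoder(torch.cat([z, emotion], dim=1))

model = CondAutoencoder()
# ... training on faces with emotion labels in {0, 1} would go here ...

# Extrapolation: decode one face while sweeping the emotion unit far
# outside the training range, as in the figure above.
x = torch.rand(1, IMG_DIM)  # stand-in for a real face image
with torch.no_grad():
    for level in range(-5, 6):
        emotion = torch.tensor([[float(level)]])
        face = model(x, emotion)  # image at an unseen emotion level
```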
The presence of this information, which is only indirectly related to the net's performance on its assigned task, is strong evidence that at least some networks store world-model-like information corresponding to higher-order factors of continuous variation, even when they are trained only on a classification task. In other words, these networks are probably storing a continuum but outputting only a label.