335 — Adversarial Reprogramming of Neural Networks
Elsayed, Goodfellow, & Sohl-Dickstein (1806.11146)
Read on 21 July 2018

Adversarial attacks are Turing-complete, which is an interesting sentence to say.
These Google Brain researchers identified a strategy for designing adversarial inputs that cause a neural network to perform computation on behalf of an adversary. In particular, suppose a neural net's author intends it to compute a "native" function $f$ on inputs $x$, while an adversary separately wishes to compute some other function $g$ on inputs $y$. This paper demonstrates that it is possible to convince the net that estimates $f(x)$ to, as a byproduct, compute $g(y)$.
Specifically, this requires that there exist some function $h$ where $h(y)$ lies in the input space of $f$, and a complementary function $h'$ where $h'(f(h(y))) = g(y)$. In other words, in order to perform this adversarial computation, the adversary must be able to convert inputs of the adversarial function into the network's endogenous inputs, and convert the endogenous outputs back into outputs of the adversarial function.
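In code, the whole construction is just a composition. A minimal sketch (the names `h`, `h_prime`, `f`, and `g` here are illustrative, not from the paper):

```python
def adversarial_compute(y, f, h, h_prime):
    """Compute the adversary's g(y) using only the victim network f.

    h       -- maps the adversary's input y into f's input space
    h_prime -- maps f's native output back into g's output space
    """
    x = h(y)                       # disguise the adversarial input as a native input
    native_output = f(x)           # the victim network does the actual work
    return h_prime(native_output)  # reinterpret the native output as g(y)
```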
A concrete example:
A network is trained on the classic ImageNet labeling task (identify the contents of an image). The input here ($x$) is any image of the appropriate shape, and the function ($f$) maps pixel arrays of that shape to a text label. The adversarial task is to count the number of squares in a small black-and-white grid: the input ($y$) is the grid image, and the function $g$ returns an integer count of squares. $h$ embeds the small grid inside a full-sized ImageNet input, overlaid with a learned adversarial perturbation; $h'$ interprets the resulting label, deciding (per Figure 1 in the paper) that "tiger shark" as an output from $f$ 'means' that there are four squares in the input image.
And so on.
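To make the plumbing concrete, here is a sketch of $h$ and $h'$ for the counting task, assuming a torchvision ImageNet classifier as the victim (the paper attacks several ImageNet models; the specific model, sizes, and class-to-count mapping below are illustrative). The adversarial program `W` is shown untrained; training it is the actual attack.

```python
import torch
import torchvision.models as models

# Victim network f: any pretrained ImageNet classifier will do for this
# sketch (torchvision >= 0.13 weights API assumed).
f = models.resnet50(weights="IMAGENET1K_V1").eval()

# The adversarial "program": a learned perturbation the size of f's input.
# Left at zero here; in the paper it is optimized by gradient descent.
W = torch.zeros(1, 3, 224, 224, requires_grad=True)

def h(grid):
    """Embed the small counting grid in the center of an ImageNet-sized
    input, then add the (bounded) adversarial program."""
    x = torch.zeros(1, 3, 224, 224)
    side = grid.size(-1)
    lo = (224 - side) // 2
    x[:, :, lo:lo + side, lo:lo + side] = grid
    return x + torch.tanh(W)   # tanh keeps the program's pixels bounded

def h_prime(logits, max_count=10):
    """Reinterpret the first ImageNet classes as counts: class 0 ("tench")
    means one square, class 3 ("tiger shark") means four, and so on."""
    return logits[:, :max_count].argmax(dim=1) + 1

grid = torch.rand(1, 3, 36, 36).round()   # a toy black-and-white grid (y)
count = h_prime(f(h(grid)))               # g(y), once W has been trained
```

Training `W` is then ordinary gradient descent, with the victim's weights frozen, on the loss between $f$'s output and the label that encodes the true square count.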
The applications of this type of attack are endless and fascinating: given adequate access, an adversary can hijack a large amount of compute power. In cases where these networks are public-facing (say, for example, Google's photo identification system built into most Android phones now), there is the potential for someone to use an unprotected service to conduct extremely inefficient ersatz computations on someone else's dime, without their knowledge.