165 — Deep Predictive Models in Interactive Music
Martin, Ellefsen & Torresen (1801.10492)
Read on 01 February 2018
I’ve read before about cooperative music generation; this paper considers musical prediction in the context of neural sequence models, tangentially analogous to optical flow (#128).
The authors suggest that playing music together depends on each player making accurate predictions about the upcoming behavior of the other musicians in the ensemble. (This is simple when everyone is reading from sheet music, but very challenging when improvising.)
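A toy illustration of that prediction-driven view (my own sketch, not the authors' system): an accompanying agent keeps a running model of its partner's note stream and commits to a response based on what it anticipates, rather than waiting to react. The `MarkovPredictor` class and the note names below are purely hypothetical stand-ins for a learned model.

```python
from collections import Counter, defaultdict


class MarkovPredictor:
    """Toy stand-in for a learned model: predicts a partner's next note
    from bigram counts observed so far (hypothetical, not from the paper)."""

    def __init__(self):
        self.counts = defaultdict(Counter)
        self.prev = None

    def observe(self, note):
        if self.prev is not None:
            self.counts[self.prev][note] += 1
        self.prev = note

    def predict(self):
        # Most likely continuation given the last observed note.
        if self.prev is None or not self.counts[self.prev]:
            return None
        return self.counts[self.prev].most_common(1)[0][0]


# An improvising agent has to act on its prediction *before* hearing the
# partner's next note -- accurate anticipation, not reaction, is what
# keeps the ensemble together.
partner_stream = ["C4", "E4", "G4", "C4", "E4", "G4", "C4"]
predictor = MarkovPredictor()
for note in partner_stream:
    guess = predictor.predict()
    if guess is not None:
        print(f"agent anticipates {guess}, partner plays {note}")
    predictor.observe(note)
```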
How does a machine interact with human musicians? What feedback loop enables high-speed musical interpretation? The authors show that neural networks such as WaveNet, used by systems like NSynth, are useful for musical sequence representation and prediction, though this alone is perhaps not adequate for a fully functioning interactive agent.
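The prediction machinery here is autoregressive: given the sequence so far, model a distribution over the next event. A minimal sketch of the symbolic case, assuming a MIDI-pitch vocabulary and PyTorch (the model name and hyperparameters are illustrative, not the authors'), might look like this:

```python
import torch
import torch.nn as nn


class NextNoteRNN(nn.Module):
    """Minimal autoregressive model: given a history of MIDI pitches,
    predict a distribution over the next pitch (illustrative only)."""

    def __init__(self, vocab_size=128, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, pitches, state=None):
        # pitches: (batch, time) integer tensor of MIDI note numbers
        x = self.embed(pitches)
        out, state = self.lstm(x, state)
        return self.head(out), state  # logits over the next pitch at each step


model = NextNoteRNN()
history = torch.tensor([[60, 64, 67, 60, 64]])  # C4 E4 G4 C4 E4
logits, _ = model(history)
next_pitch = logits[0, -1].argmax().item()  # arbitrary until the model is trained
print(f"predicted next MIDI pitch: {next_pitch}")
```

For an interactive agent the harder part is not the model but the loop around it: predictions have to arrive, and be turned into sound, fast enough to stay in time with a human player.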
There are several very cool systems that the authors cite as prototypes of a fully cooperative musical agent: PiaF, or “Piano Follower”, uses hand signals captured via Kinect to control auxiliary sounds while a pianist plays; GenJam generates jazz accompaniment using genetic algorithms. It’s really exciting to see the abundance of musical agents that are only now becoming technologically feasible.