283 — Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation

Liu & Schulte (1805.11088)

Read on 30 May 2018
#deep-learning  #machine-learning  #neural-network  #LSTM  #Canada  #hockey  #evaluation  #performance  #prediction  #behavior  #Markov  #reinforcement-learning 

This is an extremely Canada paper.

Liu and Schulte developed a deep reinforcement learning network that learns to gauge a hockey player’s performance and their influence on the outcome of a game. The resulting Goal Impact Metric (GIM) correlates well with future career success and remains relatively stable within a single game, which suggests it is both a useful metric for assessing a player overall and a fairly reliable one at any given point during a game.

To build their model, all of a player’s actions (drawn from the roughly three million events coded in the SPORTLOGiQ dataset) are encoded as inputs to the network, in contrast with some previous approaches that only used actions taken while a player was in possession of the puck, or only actions that directly resulted in a goal. From this, a $Q$ function is learned that predicts the probability that, given the current event and the ones leading up to it, the current team scores the next goal. (Understandably, previous approaches have used a Markov model to represent this relationship.)
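The training signal for such a $Q$ function can be sketched without any deep learning machinery: label each event with whether the acting team goes on to score the next goal. This is a toy illustration, not the paper's actual pipeline; the event representation here is an assumption.

```python
# Sketch (assumptions): a toy event stream where each event records only the
# acting team and whether the event itself is a goal. The target for Q at
# each event is 1 if the acting team scores the next goal, else 0.
def next_goal_targets(events):
    """events: list of (team, is_goal) tuples in game order.
    Returns a 0/1 target per event: 1 if the acting team scores the
    next goal at or after this event, 0 if the opponent does (or no
    goal remains in the game)."""
    targets = [0] * len(events)
    next_scorer = None  # team scoring the next goal, found by scanning backwards
    for i in range(len(events) - 1, -1, -1):
        team, is_goal = events[i]
        if is_goal:
            next_scorer = team
        targets[i] = 1 if team == next_scorer else 0
    return targets

events = [("home", False), ("away", False), ("home", True), ("away", False)]
print(next_goal_targets(events))  # → [1, 0, 1, 0]
```

A network (the paper uses an LSTM over event sequences) would then be trained to predict these targets from the current event plus its history.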

This function can then be used to estimate the value of individual actions, and those values can be attributed to the players who performed them, yielding an estimate of an individual’s contribution to the outcome of a game or season. GIM correlates well, though not perfectly, with salary and total contract amount.
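The attribution step above can be sketched as follows: credit each player with the change in their team's next-goal probability across each of their actions, then sum over the game. The `(player, q_before, q_after)` representation is an assumption made for illustration, not the paper's interface.

```python
from collections import defaultdict

# Sketch (assumptions): each action is (player, q_before, q_after), where
# q_before/q_after are the acting team's estimated next-goal probabilities
# immediately before and after the action. A player's total impact is the
# sum of the Q changes their actions produced — a toy version of GIM.
def game_impact(actions):
    """Return total impact per player over a list of actions."""
    gim = defaultdict(float)
    for player, q_before, q_after in actions:
        gim[player] += q_after - q_before
    return dict(gim)

actions = [("p1", 0.40, 0.55), ("p2", 0.55, 0.50), ("p1", 0.50, 0.70)]
print(game_impact(actions))
```

Note that a turnover or failed play yields a negative increment, so GIM naturally penalizes mistakes as well as rewarding good plays.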