278 — Fast animal pose estimation using deep neural networks

Pereira & Aldarondo et al (10.1101/331181)

Read on 25 May 2018
#neuroscience  #pose-estimation  #3D  #deep-learning  #posture  #postural-dynamics  #neural-network  #animal-tracking 

LEAP (LEAP Estimates Animal Poses) is a recursive acronym for a new technology that tracks animal poses with very little training data (the paper reports cases where only a few labeled frames suffice for robust pose estimation). The input is a 2D video stream and the output is real-space coordinates of user-specified body parts.

LEAP comprises three steps:

Registration and alignment, in which the animal's bounding box is identified and each frame is cropped to this box and rotated so that the animal faces along its primary axis;

Labeling and training, in which the user manually labels the positions of body parts along a rigid skeleton via a GUI, and a neural network is trained on these labels;

Pose estimation, in which the trained network estimates the configuration and orientation of that same skeleton in every frame of the 2D video.
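Two of these steps can be sketched concisely. The following is a minimal illustration, not the authors' implementation: it assumes alignment is done by finding the principal axis of the animal's foreground pixels via PCA, and that the network's output is one confidence map per body part, read out by taking each map's peak location (a common keypoint-detection readout).

```python
import numpy as np

def principal_axis_angle(mask):
    """Registration/alignment sketch: estimate the rotation angle (radians)
    of the animal's primary axis via PCA on foreground pixel coordinates."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)
    # Leading eigenvector of the covariance matrix is the primary axis.
    cov = pts.T @ pts / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]
    return np.arctan2(major[1], major[0])

def peaks_to_coords(confmaps):
    """Pose-estimation readout sketch: reduce per-part confidence maps
    of shape (K, H, W) to (K, 2) pixel coordinates by locating each
    map's maximum."""
    K, H, W = confmaps.shape
    flat_idx = confmaps.reshape(K, -1).argmax(axis=1)
    return np.stack([flat_idx % W, flat_idx // W], axis=1)  # (x, y) per part
```

For example, a horizontal streak of foreground pixels yields an angle of roughly 0 (or π, since the eigenvector sign is arbitrary), and a confidence map peaking at pixel (row 3, column 5) maps to the coordinate (5, 3).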

This technology enables far higher fidelity in tracking specific behaviors over time in freely behaving animals. As an example use case, the authors used this technology to analyze the gait dynamics of Drosophila, even on complex backgrounds and with multiple animals in the field of view of the camera.