280 — Mode-Adaptive Neural Networks for Quadruped Motion Control

Zhang & Starke et al. (doi: 10.1145/3197517.3201366)

Read on 27 May 2018
#animation  #deep-learning  #machine-learning  #neural-network  #quadruped  #gait-estimation  #pose-estimation  #walking  #3D  #postural-dynamics  #locomotion 

One of my favorite veg-out things to do is to watch SIGGRAPH demo videos on YouTube. I know, I know.

But that’s how I stumbled upon this research, which has a really incredible demo video available online.

Walking with more than two legs is surprisingly complex; just a few days ago I wrote about a system designed to automate the motion-capture process for it. Synthesizing this behavior from scratch is outrageously tricky, and most video games cheat by playing a canned walk-cycle animation on top of a moving character. Laaaaaaaame. (Even high-budget films are known to avoid showing a complex animated character’s walk cycle on screen when they can.) But this isn’t just about entertainment: robotic walk cycles are just as tricky to design.

This deep learning approach combines motion-capture input from recordings of real quadrupeds (mostly dogs) with a learned handling of “corner case” behavior: What does it look like for a quadruped to drop from a fast pace to a slower one, and how do the skeleton and muscles adjust in real time? How does the body go from sitting to walking? How does it handle turns so sharp that normal walking and steering break down?

This system is “modal,” meaning it keeps a high-level understanding of the “mode” the animal is in (walking, running, sitting, turning, and so on). That lets the system figure out when it needs to switch modes and blend between them, which is far richer than the common technique in video games today of simply swapping in a different canned walk-cycle animation.
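
Just to make the “blend between modes” idea concrete for myself, here’s a rough PyTorch sketch of a gating network that mixes several expert weight sets into a single motion-prediction network. The layer sizes, feature splits, and expert count here are my own placeholders, not the paper’s exact configuration.

```python
import torch
import torch.nn as nn

class BlendedLinear(nn.Module):
    """Linear layer whose weights are a per-sample blend of K expert weight sets."""
    def __init__(self, num_experts, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_experts, out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(num_experts, out_features))

    def forward(self, x, blend):
        # blend: (batch, num_experts) coefficients produced by the gating network
        w = torch.einsum('bk,koi->boi', blend, self.weight)  # per-sample blended weights
        b = torch.einsum('bk,ko->bo', blend, self.bias)      # per-sample blended biases
        return torch.einsum('boi,bi->bo', w, x) + b

class ModeAdaptiveNet(nn.Module):
    """Gating network picks a soft mix of experts; the motion network uses the blended weights."""
    def __init__(self, gate_in, motion_in, motion_out, num_experts=8, hidden=512):
        super().__init__()
        # Gating network: a small feature subset -> softmax blend over experts ("modes").
        self.gate = nn.Sequential(
            nn.Linear(gate_in, 32), nn.ELU(),
            nn.Linear(32, num_experts), nn.Softmax(dim=-1),
        )
        # Motion prediction network built from blended layers.
        self.l1 = BlendedLinear(num_experts, motion_in, hidden)
        self.l2 = BlendedLinear(num_experts, hidden, hidden)
        self.l3 = BlendedLinear(num_experts, hidden, motion_out)
        self.act = nn.ELU()

    def forward(self, gate_features, motion_features):
        blend = self.gate(gate_features)
        h = self.act(self.l1(motion_features, blend))
        h = self.act(self.l2(h, blend))
        return self.l3(h, blend)

# Usage: one frame of (placeholder-sized) gait features and full pose/trajectory features.
net = ModeAdaptiveNet(gate_in=19, motion_in=480, motion_out=363)
next_pose = net(torch.randn(1, 19), torch.randn(1, 480))
```

The nice part is that the gating output changes every frame, so the effective weights of the motion network shift smoothly as the animal moves between gaits, instead of hard-switching between separate animation clips.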