Team Members

Nate Nichols
Jiahui Liu
Forrest Sondahl

Project GestureMap

When humans converse, they don't just use their voices.  They use their whole bodies.  Pick a random person sometime and watch their hands closely while they talk.  You will find a constant flow of movement coordinated with their speech.  Although it is not obvious how each gesture relates to the individual words being spoken, the overall effect is natural.  So natural, in fact, that unless you are purposefully thinking about it, you probably don't notice how much gesturing goes on!  Although these gesticulations come easily to humans, simulating such natural patterns in computer animation is a challenging task.

This is what Project GestureMap is all about.  Suppose that a virtual person (avatar) is given some text to speak; we want a way to choose appropriate gesture animations to accompany it.  If we knew ahead of time what the text would be, we could (as many professional animators do) hand-pick the animations to match.  But we don't know the text beforehand, so we need an automated system to assign animations for us.  To this end, we propose a machine learning approach to gesture assignment.  The key idea is to take a collection of scenes (containing both text and animation) that were designed by professional animators and use them as training data for a machine learning technique.
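To make the idea concrete, here is a minimal sketch of this kind of text-to-gesture learning, not the project's actual implementation.  It assumes each training example pairs a phrase of dialogue with the gesture label an animator chose for it; the gesture names and example phrases below are hypothetical, and a simple off-the-shelf text classifier stands in for whatever learning technique is ultimately used.

    # Minimal sketch: learn a mapping from spoken text to gesture labels.
    # Training pairs (phrase, gesture) would come from professionally
    # animated scenes; the data and label names here are hypothetical.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    phrases = [
        "the storm is moving toward the coast",
        "on the other hand, critics disagree",
        "welcome back to the show",
        "prices rose by three percent",
    ]
    gestures = ["point", "contrast", "open_palm", "beat"]

    # Turn each phrase into text features, then fit a classifier
    # that predicts which gesture should accompany it.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(phrases, gestures)

    # Given new, unseen dialogue, propose a gesture for each phrase.
    print(model.predict(["the senator, on the other hand, voted no"]))

In practice the features could include more than the raw words (for example, part-of-speech or prosody cues), and the output could be a ranked list of candidate animations rather than a single label, but the training setup follows the same pattern.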

Background

Project GestureMap is the result of a final project for EECS 395-22 Machine Learning at Northwestern University, taught by Professor Bryan Pardo.

Project GestureMap was motivated by a larger project called News At Seven, an automatic system that crafts daily news shows with customizable content, using a team of virtual actors in a virtual studio.

One goal of Project GestureMap is to improve the realism of the gestures used by the virtual newscasters when they read news stories aloud.