Constrained Synthesis of Textural Motion for Animation

Shmuel Moradoff and Dani Lischinski
 

Abstract

Obtaining high quality, realistic motions of articulated characters is both time-consuming and expensive, necessitating the development of easy-to-use and effective tools for motion editing and reuse. We propose a new simple technique for generating constrained variations of different lengths from an existing captured or otherwise animated motion. Our technique is applicable to textural motions, such as walking or dancing, where the motion sequence can be decomposed into shorter motion segments without an obvious temporal ordering among them. Inspired by previous work on texture synthesis and video textures, our method essentially produces a re-ordering of these shorter segments. Discontinuities are eliminated by carefully choosing the transition points and applying local adaptive smoothing in their vicinity, if necessary. The user is able to control the synthesis process by specifying a small number of simple constraints.
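The segment re-ordering idea can be illustrated with a minimal sketch in the spirit of video textures: compute pairwise transition costs between poses, then synthesize a new sequence that mostly plays frames in order but occasionally jumps where the cost is low. This is only an illustration under simplifying assumptions (raw pose vectors, L2 cost, random jumps); the actual method operates on motion segments, chooses transition points more carefully, applies local adaptive smoothing, and honors user constraints.

```python
import numpy as np

def transition_costs(frames):
    """Pairwise L2 distances between pose frames; cost[i, j] is the
    penalty for jumping from frame i to frame j (video-textures style)."""
    diff = frames[:, None, :] - frames[None, :, :]
    return np.linalg.norm(diff, axis=2)

def synthesize(frames, length, threshold, rng=None):
    """Re-order frame indices into a sequence of the requested length,
    jumping only where the transition cost is below `threshold`."""
    rng = np.random.default_rng() if rng is None else rng
    cost = transition_costs(frames)
    n = len(frames)
    out, i = [0], 0
    while len(out) < length:
        nxt = i + 1
        if nxt >= n or rng.random() < 0.1:  # occasionally attempt a jump
            candidates = np.flatnonzero(cost[i] < threshold)
            candidates = candidates[candidates != i]  # no self-transition
            nxt = int(rng.choice(candidates)) if candidates.size else (i + 1) % n
        out.append(nxt)
        i = nxt
    return np.array(out)
```

For a cyclic motion such as a walk, nearby phases of the cycle yield low transition costs, so the synthesized index sequence stays continuous while producing a variation of arbitrary length.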


 

Results

 

Drunk Walk - the middle figure is the original data; synthesis times were 37 and 43 sec. with 7 and 8 constraints, respectively.

drunk.mpg (5.84 MB)

drunk-path.mpg (5.61 MB)

High Wire - original data is in the back. Synthesis time 32 sec. with 6 constraints.

highwire.mpg (5.85 MB)

Cool Walk - original data is the left character. Synthesis time 30 sec. with 8 constraints.

cool.mpg (5.9 MB)

cool-path.mpg (5.6 MB)

Ballet Walk - original data is on the top row; the synthesized data on the bottom row is a loop along a figure-eight path. Synthesis time 81 sec. with 9 constraints.

ballet-original.mpg (1.46 MB)

ballet-eight.mpg (5.84 MB)