MIT researchers have introduced a new technique giving animation artists more control over their 2D and 3D characters. The method uses mathematical functions, known as barycentric coordinates, which determine how shapes can move, bend, and stretch in space. This allows artists to shape the movements of an animated character according to their vision.
Traditionally, artists have been limited to a single set of functions for a given character, which might not suit the particular animation, forcing them to start from scratch each time they want to experiment with a different look. The new approach gives artists the flexibility to design or choose among different smoothness energies for any shape. Once an artist has previewed the resulting deformation, they can pick the smoothness energy that matches their taste.
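As a toy illustration of what a smoothness energy measures, the following sketch (hypothetical Python/NumPy code, not from the paper) scores two candidate weight fields with a Dirichlet-style energy, the sum of squared gradients. A smoother field scores lower, which is one way an artist or a tool could rank candidate coordinate functions.

```python
import numpy as np

def dirichlet_energy(w: np.ndarray, h: float = 1.0) -> float:
    """Sum of squared finite-difference gradients of a sampled weight
    field. Lower energy means a smoother deformation. This is one
    common smoothness energy, used here purely for illustration."""
    gy, gx = np.gradient(w, h)
    return float(np.sum(gx**2 + gy**2))

x = np.linspace(0.0, 1.0, 64)
smooth_field = np.tile(x, (64, 1))                 # gentle linear ramp
sharp_field = (smooth_field > 0.5).astype(float)   # hard step

# The ramp scores far lower than the step, matching visual intuition.
print(dirichlet_energy(smooth_field), dirichlet_energy(sharp_field))
```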
To deform a character, the animator surrounds its complex shape with a 'cage', a simpler set of points connected by line segments or triangles. Each movement of the cage's vertices is translated into movement of the character, and how that translation behaves is determined by the design of a specific barycentric coordinate function.
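To make the idea concrete, here is a minimal sketch (in Python with NumPy, not the researchers' code) of how barycentric coordinates drive a cage deformation: each point of the character carries one weight per cage vertex, the weights are nonnegative and sum to one, and the deformed point is simply the weighted average of the posed cage vertices.

```python
import numpy as np

# A triangular cage around one point of the character (2D example).
cage_rest = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

# Barycentric coordinates of the point: nonnegative, summing to 1.
weights = np.array([0.2, 0.5, 0.3])

# The artist drags one cage vertex; the point follows automatically.
cage_posed = cage_rest + np.array([[0.0, 0.0], [0.3, 0.1], [0.0, 0.0]])

point_rest = weights @ cage_rest    # reproduces the rest position
point_posed = weights @ cage_posed  # new position after the drag
print(point_rest, point_posed)
```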
The researchers used a special type of neural network to model the unknown barycentric coordinate functions. Because the mathematical constraints are built directly into the network's architecture, every function it outputs is guaranteed to be valid, letting artists design interesting barycentric coordinates without wrestling with the underlying mathematics.
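As an illustration of constraints being built into a network's architecture, the sketch below (a hypothetical PyTorch model, not the published one) ends with a softmax layer, so every output is automatically nonnegative and sums to one, two of the defining properties of barycentric coordinates. The paper's actual construction is more elaborate; this only shows the "valid by construction" idea.

```python
import torch
import torch.nn as nn

class CageCoordinateNet(nn.Module):
    """Illustrative sketch: an MLP mapping a query point to one weight
    per cage vertex. The final softmax guarantees, by construction,
    that the weights are positive and sum to one at every point."""

    def __init__(self, n_cage_vertices: int, dim: int = 2, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_cage_vertices),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # softmax => nonnegative weights summing to one, always valid
        return torch.softmax(self.mlp(points), dim=-1)

net = CageCoordinateNet(n_cage_vertices=4)
pts = torch.rand(8, 2)   # query points inside the cage
w = net(pts)             # (8, 4) weight field, valid by construction
assert torch.allclose(w.sum(-1), torch.ones(8))
```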
The MIT team showed that their method can produce more natural-looking animations, for example a cat's tail that bends smoothly instead of folding rigidly near the cage vertices. Beyond artistic applications, the technique could be useful in fields such as medical imaging, virtual reality, architecture, and computer vision, for instance helping robots understand how objects move in the real world.
Looking forward, the team wants to speed up the neural network and incorporate the method into an interactive interface so that artists can iterate on animations in real time. The project was presented at SIGGRAPH Asia and funded in part by organizations including the U.S. Army Research Office, the U.S. National Science Foundation, and the MIT-IBM Watson AI Lab.