
A new technique developed by researchers at MIT promises to revolutionize how artists animate characters in video games and animated films. By using mathematical functions called barycentric coordinates, which define how 2D and 3D shapes can move, bend, and stretch in space, the technique gives animators greater control over the motion of their characters.

Traditional animation methods often provide only a single choice of barycentric coordinate function for a character, limiting the flexibility of the final look. This new approach lets animators choose or design smoothness energies for any shape, preview the resulting transformation, and select the option that best fits their artistic vision. Such flexibility could extend beyond animation, with potential applications in medical imaging, architecture, virtual reality, and even computer vision, where it could help robots understand how objects move in the real world.

Barycentric coordinates were introduced by the German mathematician August Möbius in 1827 and define how much influence each corner of a shape exerts over points in its interior. These calculations become complicated for non-triangular or complex shapes, so the MIT team used a special type of neural network to model the unknown barycentric coordinate functions. This kind of network processes input through layers of interconnected nodes and can generate barycentric coordinate functions that satisfy all the required constraints. Because the constraints are built directly into the network, every solution it produces is valid, a much-needed mathematical convenience for artists.
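For context, the classical case is a triangle, where the coordinates come from solving a small linear system. The sketch below is standard textbook math, not the MIT method: it computes the weights of a 2D point inside a triangle and illustrates the two constraints any barycentric coordinate function must satisfy, namely that the weights sum to one and reproduce the point as a weighted average of the corners.

```python
import numpy as np

def triangle_barycentric(p, a, b, c):
    """Classical barycentric coordinates (wa, wb, wc) of point p
    in triangle (a, b, c), all given as 2D numpy arrays.

    The weights satisfy the defining constraints:
      wa + wb + wc == 1           (partition of unity)
      wa*a + wb*b + wc*c == p     (reproduces the point)
    """
    # Express p - a in the basis (b - a, c - a) by solving a
    # 2x2 system of dot products (Cramer's rule).
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = np.dot(v0, v0), np.dot(v0, v1), np.dot(v1, v1)
    d20, d21 = np.dot(v2, v0), np.dot(v2, v1)
    denom = d00 * d11 - d01 * d01
    wb = (d11 * d20 - d01 * d21) / denom
    wc = (d00 * d21 - d01 * d20) / denom
    wa = 1.0 - wb - wc
    return wa, wb, wc
```

This is also why the coordinates are useful for animation: the weights are computed once for the rest pose, and when an artist moves the corners, each interior point is rebuilt from the same weights applied to the new corner positions.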

To make it easy to apply to complex shapes, they created overlapping virtual triangles connecting triplets of points on the outside of the shape. The neural network then predicts how to combine these triangular coordinates into a more complex, smooth function.
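The article does not describe the network's exact architecture, but the blending idea can be illustrated with a hedged sketch: if the network outputs non-negative mixing weights that sum to one (for instance via a softmax), then any convex combination of valid per-triangle barycentric weights is itself valid, so the constraints hold by construction. The function name and array shapes below are illustrative assumptions, not the researchers' API.

```python
import numpy as np

def combine_triangle_coordinates(per_triangle_weights, mixing_logits):
    """Convex combination of per-triangle barycentric weights.

    per_triangle_weights: (T, V) array; row t holds the barycentric
        weights that virtual triangle t assigns to the V cage points
        (zero for points not in that triangle); each row sums to 1.
    mixing_logits: (T,) scores, e.g. produced by a neural network.

    Because the mixing weights are non-negative and sum to 1, the
    combined weights also sum to 1, preserving the barycentric
    constraints by construction.
    """
    # Numerically stable softmax over the T triangles.
    m = np.exp(mixing_logits - mixing_logits.max())
    m /= m.sum()
    return m @ per_triangle_weights  # (V,) combined coordinates
```

In the MIT approach the mixing is learned so that the combined function is also smooth; the sketch only shows why the hard constraints survive the combination step.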

An artist can then try a function, see the resulting animation, and adjust the coordinates until the desired motion is achieved. The tool has already demonstrated its ability to produce more natural-looking animations; in one example, a cat's tail bent smoothly instead of folding rigidly.

Looking ahead, the researchers plan to speed up the neural network and integrate the method into an interactive interface, enabling artists to iterate on animations in real time. The project was funded, in part, by the U.S. Army Research Office, the U.S. Air Force Office of Scientific Research, the National Science Foundation, the MIT-IBM Watson AI Lab, and the Amazon Science Hub.
