Researchers from MIT have developed a method that could give animators greater control over their animations. The technique generates mathematical functions known as barycentric coordinates, which define how 2D and 3D shapes can bend, stretch, and move through space, letting the artist dictate the motion of animated objects according to their own vision. Traditional techniques offer only a single set of barycentric coordinate functions for a given character, so an artist who wanted to change the look would have to start over with a different approach each time.
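In cage-based animation, each point of a shape is expressed as a weighted combination of the vertices of a surrounding "cage"; the barycentric coordinates are those weights, so when the artist moves the cage, the shape follows. The sketch below is a minimal illustration of that idea only (the function and variable names are illustrative, not taken from the researchers' code):

```python
import numpy as np

def deform(weights, deformed_cage):
    """Cage-based deformation: each interior point p is reconstructed as
    sum_i w_i(p) * c_i, where the c_i are the (possibly moved) cage vertices
    and the w_i(p) are p's barycentric coordinates with respect to the cage."""
    # weights:       (num_points, num_cage_vertices) barycentric weights
    # deformed_cage: (num_cage_vertices, 2) new cage vertex positions
    return weights @ deformed_cage

# Illustrative example: a square cage whose top-right corner is dragged outward.
rest_cage = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
moved_cage = rest_cage.copy()
moved_cage[2] += [0.5, 0.5]          # the artist edits the cage, not the shape itself

# Weights for the cage's center point (nonnegative and summing to one).
weights = np.array([[0.25, 0.25, 0.25, 0.25]])
print(deform(weights, moved_cage))   # the interior point follows the cage edit
```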
This new method could have applications beyond animation, such as medical imaging, architecture, virtual reality, and computer vision, where it could serve as a tool to help robots understand how objects move in the real world.
The technique uses a neural network to model the barycentric coordinate functions. The researchers designed a network architecture whose outputs exactly satisfy the constraints that barycentric coordinates must obey, so artists can design interesting coordinate functions without worrying about the underlying mathematics.
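One way to picture "constraints satisfied by construction" is to bake them into the last layer of the network. The toy model below is not the researchers' architecture, just a simplified sketch: a softmax output guarantees two of the defining properties of barycentric coordinates (nonnegativity and summing to one) no matter what the hidden layers compute; the actual method, described next, also guarantees that the weights reproduce the query point.

```python
import torch
import torch.nn as nn

class ConstrainedWeightNet(nn.Module):
    """Toy illustration only: the softmax output is nonnegative and sums to one
    for any input, so those constraints hold exactly by construction.
    Layer sizes and the input dimension (2D points) are arbitrary choices."""
    def __init__(self, num_cage_vertices, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_cage_vertices),
        )

    def forward(self, points):              # points: (batch, 2)
        return torch.softmax(self.mlp(points), dim=-1)

net = ConstrainedWeightNet(num_cage_vertices=8)
w = net(torch.rand(5, 2))
print(w.min().item() >= 0, torch.allclose(w.sum(-1), torch.ones(5)))
```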
A key ingredient of the method reaches back to the work of the German mathematician August Möbius, who introduced the concept of barycentric coordinates in 1827. These coordinates dictate how much each corner of a shape influences points in the shape's interior. Because the cages used in modern animation are far more complex than triangles, the researchers cover the cage with overlapping virtual triangles that connect triples of points on its exterior. The neural network then determines how to blend these virtual triangles' barycentric coordinates into a more intricate, yet smooth, function, as in the sketch below.
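The reason this construction works is that each virtual triangle already yields valid barycentric coordinates for points it covers (Möbius' classical triangle formula), and any convex combination of valid coordinate sets remains valid: the result still sums to one and still reproduces the query point. The sketch below illustrates that combination under simplifying assumptions; the blending coefficients are fixed numbers here, whereas in the actual method they are produced by the neural network and vary smoothly with the query point, and the triangle selection is far simpler than in the paper.

```python
import numpy as np

def triangle_barycentric(p, a, b, c):
    """Classical barycentric coordinates of point p in triangle (a, b, c),
    computed from ratios of signed sub-triangle areas."""
    def area(u, v, w):
        return 0.5 * ((v[0] - u[0]) * (w[1] - u[1]) - (w[0] - u[0]) * (v[1] - u[1]))
    total = area(a, b, c)
    return np.array([area(p, b, c), area(a, p, c), area(a, b, p)]) / total

def combined_coordinates(p, cage, triangles, blend):
    """Blend per-triangle coordinates into one weight per cage vertex.
    `triangles` lists index triples into `cage`; `blend` holds nonnegative
    coefficients (one per triangle) that sum to one -- a stand-in for the
    network's output. Each triangle's coordinates sum to one and reproduce p,
    so the convex combination does as well."""
    weights = np.zeros(len(cage))
    for (i, j, k), coeff in zip(triangles, blend):
        bary = triangle_barycentric(p, cage[i], cage[j], cage[k])
        weights[[i, j, k]] += coeff * bary
    return weights

cage = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 2.0], [0.0, 2.0]])  # simple square cage
triangles = [(0, 1, 2), (0, 2, 3)]        # two overlapping virtual triangles
blend = [0.5, 0.5]                        # stand-in for the network's output
p = np.array([1.0, 1.0])

w = combined_coordinates(p, cage, triangles, blend)
print(w, w.sum(), w @ cage)               # the weights sum to 1 and reproduce p
```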
This technique enables artists to experiment with different functions, view the resulting animation, and then adjust the coordinates to produce different motions until they achieve the desired look. The research team demonstrated how the method can yield more natural-looking animations. Future plans include developing strategies to accelerate the neural network and incorporating the method into an interactive interface so artists can iterate on animations in real time. The research was supported by several institutions, including the U.S. Army Research Office and the U.S. National Science Foundation.