Researchers at MIT have developed a new technique that gives animators greater control over their work. It lets artists tailor the movements and appearance of characters in 2D and 3D animations to their individual requirements through barycentric coordinates: mathematical functions that determine how shapes flex, bend, and move in space. This stands in contrast to most existing techniques, which limit flexibility by offering only a single, predetermined function for computing barycentric coordinates.
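To make the idea concrete, here is a minimal Python sketch (our own illustration, not the researchers' code) of barycentric coordinates on a single triangle: a point inside the triangle is expressed as weights over the corners, and moving a corner deforms the point through those same weights.

```python
import numpy as np

def barycentric_coords(p, tri):
    """Barycentric coordinates of point p with respect to triangle tri (3x2)."""
    a, b, c = tri
    # Solve p = u*a + v*b + w*c with u + v + w = 1 via a 2x2 linear system.
    m = np.column_stack((b - a, c - a))        # edge vectors as matrix columns
    v, w = np.linalg.solve(m, p - a)
    return np.array([1.0 - v - w, v, w])

# A point inside a triangle, expressed as weights over the corners.
tri  = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
p    = np.array([0.25, 0.25])
bary = barycentric_coords(p, tri)              # weights are >= 0 and sum to 1

# Deform: move one corner, then re-synthesize the point from the same weights.
tri_deformed = tri + np.array([[0.0, 0.0], [0.2, 0.1], [0.0, 0.0]])
p_deformed   = bary @ tri_deformed
print(bary, p_deformed)
```

For general shapes with many control points there are infinitely many valid weighting functions; the question the new work addresses is which one to use.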
The MIT researchers' technique takes a more versatile approach, letting artists choose among smoothness energies that shape the look and feel of the animation. An artist can preview the effect of a particular smoothness energy, pick the one that matches their preferences, and so achieve the look they envision for the character.
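As a rough illustration of what "choosing a smoothness energy" can mean, the sketch below evaluates two standard energies from the geometry-processing literature, Dirichlet energy and total variation, on a sampled weight field. Whether these exact energies are among the paper's options is our assumption; the role they play here, a selectable objective that shapes the deformation's character, matches the description above.

```python
import numpy as np

# Two interchangeable smoothness energies, evaluated on one barycentric weight
# function sampled on a grid (finite differences approximate the gradient).
def dirichlet_energy(w, h=1.0):
    gy, gx = np.gradient(w, h)
    return np.sum(gx**2 + gy**2)            # penalizes all variation: very smooth look

def total_variation_energy(w, h=1.0):
    gy, gx = np.gradient(w, h)
    return np.sum(np.sqrt(gx**2 + gy**2))   # tolerates kinks: piecewise-flat look

# An artist would preview results under each energy and keep the one they prefer.
w = np.random.rand(64, 64)                  # stand-in for one sampled weight field
print(dirichlet_energy(w), total_variation_energy(w))
```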
The new method also uses neural networks to model the otherwise unknown barycentric coordinate functions. These networks, loosely inspired by the structure of the human brain, output coordinate functions that obey the mathematical rules barycentric coordinates must satisfy while producing deformations that match the look the artist wants.
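A minimal sketch of this idea, assuming a small PyTorch MLP (the architecture and names are ours): a softmax output layer makes two of the required rules, non-negative weights that sum to one, hold by construction. The remaining rule, that the weights reproduce the query point, would have to be enforced separately, for instance through the triangle construction described below.

```python
import torch
import torch.nn as nn

class BarycentricNet(nn.Module):
    """Hypothetical MLP mapping a 2D query point to one weight per cage vertex.
    The softmax output guarantees non-negativity and summing to one."""
    def __init__(self, num_cage_vertices, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_cage_vertices),
        )

    def forward(self, points):
        return torch.softmax(self.mlp(points), dim=-1)

cage = torch.tensor([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])  # square control cage
net = BarycentricNet(num_cage_vertices=len(cage))
weights = net(torch.rand(8, 2))       # (8, 4): each row is >= 0 and sums to 1
deformed = weights @ cage             # points re-synthesized from the cage corners
```

Training would then adjust the network to minimize the artist's chosen smoothness energy over the shape.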
Despite its modernity, the new method rests on principles formulated in 1827 by German mathematician August Möbius, who worked out how the corners of a shape influence its interior. To keep the equations tractable for complex shapes, the researchers covered shapes with overlapping virtual triangles connecting triplets of points on the outside of the shape. The neural network then determines how best to combine these virtual triangles into a smooth yet complex function.
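The sketch below mirrors that construction on a square cage (the cage, the triangle choice, and the blending weights are ours for illustration): each virtual triangle containing a point yields a simple, valid coordinate function, and any convex combination of these functions is again valid, so a network is free to pick whichever combination minimizes the artist's chosen energy.

```python
import numpy as np

def tri_bary(p, a, b, c):
    """Barycentric coordinates of p in the triangle (a, b, c)."""
    m = np.column_stack((b - a, c - a))
    v, w = np.linalg.solve(m, p - a)
    return np.array([1.0 - v - w, v, w])

# Square cage; two "virtual triangles" over its corners, both containing p.
cage = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
tris = [(0, 1, 2), (0, 1, 3)]   # indices of cage corners forming each triangle
p = np.array([0.4, 0.3])

# Each triangle yields a simple, valid coordinate function; scatter its three
# weights into a full-length vector over all cage vertices.
per_tri = np.zeros((len(tris), len(cage)))
for k, (i, j, l) in enumerate(tris):
    per_tri[k, [i, j, l]] = tri_bary(p, cage[i], cage[j], cage[l])

# Any convex combination of these rows is again a valid set of coordinates.
# Here the combination is fixed; in the paper's setting a network would learn
# it, point by point, to minimize the selected smoothness energy.
alpha = np.array([0.5, 0.5])
weights = alpha @ per_tri
print(weights, weights @ cage)   # weights sum to 1 and reproduce p exactly
```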
The benefits of this new method extend beyond animation, with potential uses in sectors such as medical imaging, virtual reality, architecture, and computer vision. Future plans include speeding up the neural network and integrating the method into an interactive platform for real-time editing. The research was recently presented at SIGGRAPH Asia and was funded in part by the U.S. Army Research Office, the U.S. National Science Foundation, and the CSAIL Systems that Learn Program, among others.