Researchers at MIT have developed a technique that gives animators greater control over the movements of their 2D and 3D characters. Their method generates barycentric coordinates, mathematical functions that dictate how shapes bend, stretch, and move through space. Unlike other methods, this approach gives animators the flexibility to choose the functions best suited to the movement they are after.
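To make the idea concrete, here is a minimal sketch (not code from the researchers) of the simplest case: a point inside a triangle written as a weighted average of the triangle's corners, with non-negative weights that sum to one. Moving the corners then moves the point automatically, which is what makes these coordinates useful for deforming a shape by dragging a surrounding cage. The numbers below are arbitrary.

```python
import numpy as np

# Corners of a triangular "cage" and barycentric weights for one point.
corners = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
weights = np.array([0.5, 0.25, 0.25])   # non-negative and summing to 1

p = weights @ corners                    # the point they describe: (0.5, 0.5)

# Deform the cage by moving one corner; the point follows automatically,
# because it is always the same weighted average of the current corners.
deformed = corners + np.array([[0.0, 0.0], [1.0, 0.5], [0.0, 0.0]])
p_deformed = weights @ deformed
print(p, p_deformed)                     # (0.5, 0.5) -> (0.75, 0.625)
```

Generalized barycentric coordinates extend this idea from a single triangle to the more complicated cages used in animation, and how the weights are chosen is exactly the freedom the new method exposes to artists.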
The authors of this research are Ana Dodik, a graduate student in electrical engineering and computer science (EECS) at MIT; Oded Stein, an assistant professor at the University of Southern California’s Viterbi School of Engineering; Vincent Sitzmann, an assistant professor of EECS who leads the Scene Representation Group in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Justin Solomon, an associate professor of EECS who leads the CSAIL Geometric Data Processing Group.
The researchers designed a generalized approach that offers flexibility in choosing smoothness energies for any shape, letting artists visualize how a character deforms under each energy and select the one with the aesthetic they want. They used a specialized neural network to model the unknown barycentric coordinate functions, sidestepping the mathematical complexity by building the constraints directly into the network. Because the constraints are baked in, the solutions the network produces are always valid, so artists never have to worry about the mathematical side of the problem.
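As a loose illustration of what "building constraints into the network" can look like (a minimal sketch under assumed names and sizes, not the architecture from the paper), a network can end in a softmax layer, so its outputs are automatically non-negative and sum to one, two of the defining properties of barycentric coordinates.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

class TinyCoordinateNet:
    """Toy MLP mapping a query point to one weight per cage vertex.

    The softmax at the end guarantees two constraints by construction:
    the weights are non-negative and sum to one, so every output is a
    valid convex combination, even before any training.
    """

    def __init__(self, dim, n_cage_vertices, hidden=32):
        self.W1 = rng.normal(size=(hidden, dim)) * 0.5
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(size=(n_cage_vertices, hidden)) * 0.5
        self.b2 = np.zeros(n_cage_vertices)

    def __call__(self, p):
        h = np.tanh(self.W1 @ p + self.b1)
        return softmax(self.W2 @ h + self.b2)

net = TinyCoordinateNet(dim=2, n_cage_vertices=6)
w = net(np.array([0.3, -0.1]))
print(w, w.sum())   # six non-negative weights that sum to 1.0
```

This toy enforces only those two constraints; the approach described in the article guarantees that every output is a fully valid set of barycentric coordinates.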
The researchers drew on the concept of barycentric coordinates introduced by German mathematician August Möbius in 1827. In tests, their technique produced more natural-looking animations than other methods; for instance, a cat’s tail curved smoothly as it moved rather than folding rigidly.
Their approach also uses overlapping virtual triangles, which connect triplets of points on the exterior of a 2D or 3D cage, to construct the barycentric coordinates. The neural network predicts how to combine these to generate a smooth but more complex function, as sketched below.
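To give a flavor of how such a combination can work (a rough sketch under simplifying assumptions, not the paper's implementation): each virtual triangle already writes a query point as a combination of three cage points, and blending several triangles' weights with mixing coefficients that sum to one again yields weights that sum to one and reproduce the point. In the sketch below, the square cage is hypothetical and the mixing coefficients are uniform, where the actual method would have the network predict them.

```python
import numpy as np
from itertools import combinations

def triangle_barycentric(p, tri):
    """Barycentric coordinates of point p with respect to triangle tri (3x2)."""
    a, b, c = tri
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    wb = (d11 * d20 - d01 * d21) / denom
    wc = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - wb - wc, wb, wc])

def combined_coordinates(p, cage, mixing, triangles):
    """Blend per-triangle coordinates into one weight per cage vertex.

    Each virtual triangle writes p as an affine combination of three cage
    vertices, so any convex combination of those per-triangle weights again
    sums to one and reproduces p. In the method described above, a neural
    network chooses the mixing coefficients; here they are a plain argument.
    """
    w = np.zeros(len(cage))
    for lam, idx in zip(mixing, triangles):
        w[list(idx)] += lam * triangle_barycentric(p, cage[list(idx)])
    return w

# Hypothetical square cage, all virtual triangles, uniform mixing for the demo.
cage = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
triangles = list(combinations(range(len(cage)), 3))
mixing = np.full(len(triangles), 1.0 / len(triangles))
p = np.array([0.25, 0.6])

w = combined_coordinates(p, cage, mixing, triangles)
print(w, w.sum(), w @ cage)   # weights sum to 1 and w @ cage reproduces p
```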
The researchers plan to speed up the neural network and pair the method with an interactive interface that would let artists iterate on animations in real time. Beyond animation, the technique has potential applications in medical imaging, architecture, virtual reality, and computer vision, and could even help robots understand how objects move in the real world. The research was presented at SIGGRAPH Asia and received funding from sources including the U.S. Army Research Office, the National Science Foundation, and the Toyota-CSAIL Joint Research Center.