A new technique developed by MIT researchers aims to give artists more control over their animations. The approach generates barycentric coordinates, which define how 2D and 3D shapes bend, stretch, and move in space, letting artists produce results that better fit their vision. This is a departure from existing techniques, which offer only a single barycentric coordinate function.
Ana Dodik, the technique’s lead researcher and an electrical engineering and computer science (EECS) graduate student, emphasizes that artists prioritize flexibility and the final aesthetic of their work, rather than efficient algorithms. This innovative approach could also find applications in fields such as medical imaging, virtual reality, architecture, and robotics.
Typically, when animating a complex 2D or 3D character, artists surround it with a simpler set of points bound by lines or triangles, called a cage. The shape inside the cage moves and morphs according to the animated motions of the cage. The barycentric coordinate function determines how each point of the character moves as the cage is deformed. However, different interpretations of mathematical “smoothness” can lead to very different barycentric coordinate functions.
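To make the cage idea concrete, here is a minimal sketch of cage-based deformation: each interior point carries a fixed set of barycentric weights over the cage vertices (summing to 1), and its deformed position is the weighted average of the moved cage vertices. The cage, weights, and function names are illustrative, not the researchers' actual system.

```python
import numpy as np

def deform(point_weights, cage_vertices):
    """Map interior points through the cage using barycentric weights.

    point_weights: (n_points, n_cage) array; each row sums to 1.
    cage_vertices: (n_cage, 2) array of current cage vertex positions.
    Returns the deformed (n_points, 2) positions.
    """
    return point_weights @ cage_vertices

# A unit-square cage and a point at its center (equal weights on all corners).
cage = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
weights = np.array([[0.25, 0.25, 0.25, 0.25]])

# Stretch the cage horizontally; the interior point follows automatically.
stretched = cage * np.array([2.0, 1.0])
print(deform(weights, stretched))  # center moves to [1.0, 0.5]
```

The key property is that the weights are computed once; afterwards, any motion of the cage propagates to the character by this simple weighted average.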
The MIT researchers aimed to develop a general-purpose tool that lets artists choose the notion of smoothness they prefer. The team adopted a special type of neural network to model the barycentric coordinate functions. The power of this network is that it builds the required mathematical constraints directly into the solutions it produces, guaranteeing their validity. Consequently, artists can focus on creativity without concerning themselves with the underlying mathematics.
Using the principle of overlapping virtual triangles, building on the barycentric coordinates introduced by German mathematician August Möbius almost 200 years ago, the researchers constructed valid barycentric coordinate functions. The neural network then combines these functions. With this method, artists can refine the barycentric coordinates until they achieve the desired animation.
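A brief sketch of the classical building block: on a single triangle, Möbius's barycentric coordinates can be computed from ratios of signed sub-triangle areas, and they always sum to 1 and reconstruct the query point. Convex combinations of such per-triangle coordinates again satisfy these properties, which is the kind of structure the researchers' network can exploit. This is an illustrative construction, not the paper's actual formulation.

```python
import numpy as np

def signed_area(a, b, c):
    """Signed area of triangle (a, b, c) via the 2D cross product."""
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

def triangle_barycentric(p, tri):
    """Barycentric coordinates of point p with respect to triangle tri.

    Each weight is the area of the sub-triangle opposite that vertex,
    normalized by the full triangle's area.
    """
    a, b, c = tri
    total = signed_area(a, b, c)
    return np.array([
        signed_area(p, b, c),
        signed_area(a, p, c),
        signed_area(a, b, p),
    ]) / total

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
p = np.array([0.25, 0.25])
w = triangle_barycentric(p, tri)
print(w.sum())   # weights sum to 1
print(w @ tri)   # reconstructs p
```

Because these two properties (partition of unity and reproduction of the point) are preserved under convex combination, blending many such virtual-triangle coordinates yields a whole family of valid barycentric coordinate functions to choose among.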
The researchers showcased the tool’s efficacy with a variety of natural-looking animations, such as the smooth motion of a cat’s tail. Plans are underway to accelerate the neural network and to incorporate the method into an interactive interface that further streamlines the artistic process. The research was funded in part by the U.S. Army Research Office, the U.S. National Science Foundation, and the MIT-IBM Watson AI Lab.