A novel technique unveiled by researchers at MIT could give artists more flexibility when animating characters in movies and video games. The approach produces mathematical functions called barycentric coordinates, which determine how 2D and 3D shapes bend, stretch, and manoeuvre through space. Artists are given multiple barycentric coordinate functions to choose from, so they can craft animations that match their vision. The technique could also find applications in fields such as virtual reality, medical imaging, architecture, and computer vision, where understanding how objects move is essential.
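To make the idea concrete, the sketch below (written in Python with NumPy, and not taken from the researchers' code) shows the basic cage-based deformation recipe that barycentric coordinates enable: each point of the shape is a weighted average of the cage's handle vertices, so moving the handles moves the whole shape. The cage, weights, and point values are purely illustrative.

```python
import numpy as np

# Cage-based deformation in a nutshell: every interior point is written as a
# weighted average of the cage's handle vertices (the weights are the point's
# barycentric coordinates). To animate, the artist moves the handles and the
# same weights are reused to recompute every point.
def deform(weights, cage_vertices):
    # weights: (n_points, n_handles), each row non-negative and summing to 1
    # cage_vertices: (n_handles, 2) current handle positions
    return weights @ cage_vertices

cage_rest = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
cage_posed = cage_rest.copy()
cage_posed[2] = [1.5, 1.5]                  # drag one corner of the cage

w = np.array([[0.25, 0.25, 0.25, 0.25]])    # a point at the cage's centre
print(deform(w, cage_rest))                 # [[0.5   0.5  ]]
print(deform(w, cage_posed))                # [[0.625 0.625]] -- the point follows the cage
```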
Existing techniques offer only one barycentric coordinate function for animating a character, so artists would need an entirely new method for every minor change to the character’s look. “Artists care about flexibility and the ‘look’ of their final product,” said Ana Dodik, lead author of the research paper, noting that artists want a variety of barycentric coordinate functions to choose from. The researchers therefore developed a method that lets artists control how smoothly a shape deforms, preview the result, and adjust it to their liking.
The method uses a neural network to model the unknown barycentric coordinate functions. The constraints that valid coordinates must satisfy are built directly into the network, so its outputs always meet them. Artists can thus concentrate on designing appealing barycentric coordinates without worrying about the mathematical details.
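The paper's architecture is more sophisticated than this, but the following toy sketch (illustrative Python, with made-up layer sizes and random weights) shows the general idea of building a constraint into a network: a softmax output layer guarantees that the predicted weights are non-negative and sum to one, two defining properties of barycentric coordinates, no matter what the earlier layers compute.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network mapping a 2D point to 4 candidate weights.
# The softmax at the end is the "constraint built into the network":
# whatever the hidden layer produces, the output weights are non-negative
# and sum to 1 by construction. (Layer sizes and weights are illustrative.)
W1, b1 = rng.normal(size=(2, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 4)), np.zeros(4)   # 4 cage handles

def barycentric_net(p):
    h = np.tanh(p @ W1 + b1)
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())
    return e / e.sum()                            # softmax: >= 0, sums to 1

w = barycentric_net(np.array([0.3, 0.7]))
print(w, w.sum())                                 # valid weights by construction
```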
Traditional barycentric coordinates, introduced by August Möbius in 1827, are simple to calculate, but only for triangles. Modern animation cages, however, are far more complicated than triangles. To bridge this gap, the MIT team covers the interior of the shape with overlapping virtual triangles that connect points on the cage’s boundary. The neural network then combines the simple barycentric coordinates of these triangles into a more complex but smooth function.
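For a single triangle, these classical coordinates can be computed directly from sub-triangle areas; the short sketch below (illustrative Python, not the team's implementation) shows that computation, which is the simple building block the network blends across many overlapping virtual triangles.

```python
import numpy as np

# Classic (Mobius) barycentric coordinates for one triangle: each weight is
# proportional to the area of the sub-triangle opposite its vertex.
def triangle_barycentric(p, a, b, c):
    def signed_area(p1, p2, p3):
        # half the 2D cross product of the edge vectors
        return 0.5 * ((p2[0] - p1[0]) * (p3[1] - p1[1])
                      - (p2[1] - p1[1]) * (p3[0] - p1[0]))
    total = signed_area(a, b, c)
    wa = signed_area(p, b, c) / total
    wb = signed_area(a, p, c) / total
    wc = signed_area(a, b, p) / total
    return np.array([wa, wb, wc])   # non-negative inside the triangle, sums to 1

a, b, c = np.array([0., 0.]), np.array([1., 0.]), np.array([0., 1.])
print(triangle_barycentric(np.array([0.25, 0.25]), a, b, c))  # [0.5 0.25 0.25]
```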
Using this method, artists can experiment with different functions, preview the final animation, and adjust the coordinates until the animation matches their vision. The method was shown to produce more realistic-looking animations than existing approaches. In the future, the researchers plan to speed up the neural network and incorporate the method into an interactive interface so artists can iterate on animations in real time. The research was funded by several science and technology agencies, research centres, and companies.