In 2021, while studying differential equations, MIT PhD student Behrooz Tahmasebi noticed a possible connection between Weyl’s law, a mathematical formula for measuring the complexity of the spectral information, or data, contained within frequencies such as those of a drumhead or guitar string, and computer science. He hypothesized that a suitably adapted version of the law could simplify the input data fed to a neural network, making machine learning faster and easier.

Weyl’s law dates back more than a century and was originally applied to physical settings such as the vibrations of a string or the spectrum of electromagnetic radiation emitted by a heated object. Tahmasebi believed, however, that the law could be adapted to address current challenges in machine learning, an idea supported by his advisor, Stefanie Jegelka, an associate professor in the MIT Department of Electrical Engineering and Computer Science and an affiliate of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT Institute for Data, Systems, and Society.
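
In its classical form, Weyl’s law describes how the number of vibration frequencies of a domain grows: for the Laplacian on a bounded domain $\Omega \subset \mathbb{R}^d$ (with, say, Dirichlet boundary conditions), the eigenvalue counting function satisfies

\[
N(\lambda) = \#\{\, j : \lambda_j \le \lambda \,\} \;\sim\; \frac{\omega_d}{(2\pi)^d}\,\mathrm{vol}(\Omega)\,\lambda^{d/2} \qquad \text{as } \lambda \to \infty,
\]

where $\omega_d$ is the volume of the unit ball in $\mathbb{R}^d$. The detail that matters below is the exponent $d/2$: spectral complexity grows with the dimension of the underlying space.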

Together, the two researchers succeeded in revising Weyl’s law so that it accounts for symmetry when assessing a dataset’s complexity, something that had never been done before. Through this work, they demonstrated that symmetries in a dataset can enhance machine learning. Their research was presented at the December 2023 Neural Information Processing Systems (NeurIPS) conference, where it received a “Spotlight” designation.

The duo’s paper explored how symmetries, or “invariances,” can benefit machine learning: recognizing the characteristics of an object that stay constant regardless of its position or rotation, for instance identifying a handwritten digit wherever it appears in an image, can simplify and speed up the learning process. The same principle extends to identifying any object, whether numerals, dogs, or cats, in images.
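
One standard way to build such invariance into a model, shown here as a minimal sketch rather than as the method of the paper, is group averaging: any feature can be made invariant to a finite group of transformations by averaging it over the group’s elements. The toy feature below and the use of NumPy’s rot90 for 90-degree rotations are illustrative assumptions, not details from the research.

```python
# Minimal sketch of group averaging over 90-degree rotations.
import numpy as np

def feature(img):
    # A toy, non-invariant feature: position-weighted sum of pixels.
    weights = np.arange(img.size, dtype=float).reshape(img.shape)
    return float((img * weights).sum())

def invariant_feature(img):
    # Average the feature over the cyclic group of four rotations,
    # which makes the result invariant to rotating the input.
    return np.mean([feature(np.rot90(img, k)) for k in range(4)])

img = np.random.default_rng(1).random((8, 8))

# The raw feature changes under rotation; the averaged one does not.
print(feature(img), feature(np.rot90(img)))                      # differ
print(invariant_feature(img), invariant_feature(np.rot90(img)))  # equal
```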

The objective of their research was to exploit the inherent symmetries within a dataset to reduce the complexity of machine learning tasks and the amount of data required for learning, and to answer a concrete question: how much less data is needed to train a model if the data contains symmetries?

They found two main ways of benefiting from the symmetries in a dataset. The first relates to sample size: symmetry lets a model learn from a subset of the data rather than all of it; for a mirror-symmetric image, for instance, analyzing one half suffices. The second benefit is an exponential gain, achieved when symmetries act across many dimensions. This gain is particularly valuable because the complexity of a learning task grows exponentially with the dimensionality of the data space.
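
The sample-size benefit can be made concrete with a toy experiment. The sketch below, which is illustrative and not the paper’s method, fits a one-nearest-neighbor regressor to a reflection-invariant target on [-1, 1]; folding every input onto the fundamental domain [0, 1], the analogue of keeping one half of the mirror image, makes the same n samples behave roughly like 2n. The target function and constants are made up for illustration.

```python
# Toy experiment: exploiting a mirror symmetry by folding inputs
# onto a fundamental domain before nearest-neighbor regression.
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # A reflection-invariant target: depends on x only through |x|.
    return np.sin(3.0 * np.abs(x))

def one_nn_predict(x_train, y_train, x_test):
    # Brute-force 1-nearest-neighbor prediction in one dimension.
    idx = np.abs(x_test[:, None] - x_train[None, :]).argmin(axis=1)
    return y_train[idx]

n = 50
x_train = rng.uniform(-1.0, 1.0, size=n)
y_train = target(x_train)
x_test = rng.uniform(-1.0, 1.0, size=2000)
y_test = target(x_test)

# Baseline: fit on the raw inputs.
err_raw = np.mean((one_nn_predict(x_train, y_train, x_test) - y_test) ** 2)

# Symmetry-aware: fold every input into the fundamental domain [0, 1],
# so the n training points cover it twice as densely.
err_folded = np.mean(
    (one_nn_predict(np.abs(x_train), y_train, np.abs(x_test)) - y_test) ** 2
)

print(f"1-NN MSE, raw inputs:    {err_raw:.5f}")
print(f"1-NN MSE, folded inputs: {err_folded:.5f}")  # typically smaller
```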

Their paper includes two mathematically proven theorems; the combined result is a general formula that predicts the gain obtained from a given symmetry in a given application. Importantly, the formula also applies to symmetries yet to be discovered, so its usefulness should grow as new symmetries are found.
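
The precise statements of the theorems are in the paper itself; the following is only a heuristic sketch of the shape such a result takes, under assumed standard nonparametric-regression scaling with smoothness $s$ and data dimension $d$, and should not be read as the paper’s exact formula. For a symmetry group $G$ whose orbits have dimension $k$, the two benefits above combine schematically as

\[
n^{-\frac{2s}{2s+d}} \quad\longrightarrow\quad \bigl(|G|\,n\bigr)^{-\frac{2s}{2s+(d-k)}},
\]

where the factor $|G|$ (finite groups, $k = 0$) captures the sample-size gain, and the reduction of the exponent’s $d$ to $d - k$ captures the exponential gain in dimension.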

Computer scientist Haggai Maron called the approach taken in the paper groundbreaking, noting that it offers a unique geometric perspective, builds a theoretical foundation for the emerging field of geometric deep learning, and will help guide future developments in this rapidly growing research area.
