
Neural networks have proved immensely useful in the design of robot controllers, boosting the adaptability and efficiency of these machines. However, their complexity makes it difficult to verify that they will carry out assigned tasks safely. Traditionally, safety and stability are verified with Lyapunov functions: if a Lyapunov function whose value consistently decreases can be found, then unsafe or unstable states can never be reached. These verification methods, however, do not scale well to complex robotic systems.
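In textbook terms (a standard formulation given here for orientation; the paper's exact conditions for neural certificates differ in the details), a Lyapunov function V for a system with dynamics ẋ = f(x) and equilibrium x* must satisfy:

```latex
V(x^{*}) = 0, \qquad
V(x) > 0 \;\; \forall x \neq x^{*}, \qquad
\dot{V}(x) = \nabla V(x)^{\top} f(x) < 0 \;\; \forall x \neq x^{*}
```

When these conditions hold over a region, every trajectory starting in that region flows downhill in V toward the equilibrium, which is precisely the guarantee the verification step has to establish.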

To surmount this hurdle, researchers from the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory (MIT CSAIL), in collaboration with others, have developed new techniques that rigorously certify Lyapunov calculations in complex systems. Their algorithm efficiently searches for a Lyapunov function and then formally verifies it, yielding a stability guarantee for the system. This advance promises to enable safer deployment of robots and autonomous vehicles, including aircraft and spacecraft.
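To make the search-and-verify structure concrete, here is a minimal, self-contained sketch of such a loop on a toy system. Everything below, including the damped-pendulum dynamics, the network architecture, the margin, and all hyperparameters, is an illustrative assumption rather than the paper's actual method:

```python
import torch
import torch.nn as nn

MARGIN = 0.1  # required decrease rate: we ask for dV/dt <= -MARGIN

def f(x):
    # Toy closed-loop dynamics: a damped pendulum, x = (angle, angular velocity).
    theta, omega = x[:, 0], x[:, 1]
    return torch.stack([omega, -torch.sin(theta) - 0.5 * omega], dim=1)

class LyapunovNet(nn.Module):
    # V(x) = |phi(x) - phi(0)|^2 + eps*|x|^2, so V(0) = 0 and V(x) > 0 elsewhere.
    def __init__(self):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))

    def forward(self, x):
        zero = torch.zeros_like(x)
        return ((self.phi(x) - self.phi(zero)) ** 2).sum(-1) + 1e-3 * (x ** 2).sum(-1)

def vdot(V, x):
    # Time derivative of V along trajectories: dV/dt = grad V(x) . f(x).
    # x must have requires_grad=True.
    grad = torch.autograd.grad(V(x).sum(), x, create_graph=True)[0]
    return (grad * f(x)).sum(-1)

def find_counterexamples(V, n=8192, k=64):
    # Naive search: sample many states and keep the k worst violators of the
    # decrease condition. (A cheaper, gradient-based variant appears below.)
    x = ((torch.rand(n, 2) - 0.5) * 4.0).requires_grad_(True)
    violation = vdot(V, x) + MARGIN
    return x[violation.topk(k).indices].detach()

V = LyapunovNet()
opt = torch.optim.Adam(V.parameters(), lr=1e-3)
data = (torch.rand(4096, 2) - 0.5) * 4.0  # training states in [-2, 2]^2

for _ in range(20):           # alternate: fit V, then attack it
    for _ in range(200):      # fit: penalize states where V fails to decrease
        xs = data.detach().requires_grad_(True)
        loss = torch.relu(vdot(V, xs) + MARGIN).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    data = torch.cat([data, find_counterexamples(V)])  # fold violators back in
```

In the actual paper this loop is paired with a formal verification stage; the sketch above only captures the learn-from-counterexamples rhythm.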

To outperform previous algorithms, the researchers took a more economical approach to training and verification. They generated substantially cheaper counterexamples and then optimized the robot system to account for them. Understanding these edge cases helped the machines learn to handle challenging circumstances, which in turn expanded their operating range beyond what was previously possible. The team also devised a novel verification formulation that employs a scalable neural network verifier to deliver rigorous guarantees in worst-case scenarios beyond the counterexamples.
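One plausible reading of "cheaper counterexamples" (an illustration, not necessarily the authors' exact procedure) is to replace expensive exhaustive or solver-based searches with a few steps of gradient ascent on the decrease violation, essentially an adversarial attack aimed at the stability certificate rather than at a classifier. A drop-in replacement for find_counterexamples in the sketch above:

```python
def find_counterexamples(V, n=256, steps=30, lr=0.05):
    # Cheaper search: a few steps of gradient ascent on the violation
    # dV/dt + MARGIN. Reuses f, vdot, and MARGIN from the sketch above;
    # all hyperparameters here are illustrative guesses.
    x = ((torch.rand(n, 2) - 0.5) * 4.0).requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        (-(vdot(V, x) + MARGIN)).sum().backward()  # minimizing the negation ascends
        opt.step()
        with torch.no_grad():
            x.clamp_(-2.0, 2.0)  # stay inside the region being certified
    keep = (vdot(V, x) + MARGIN) > 0  # keep only genuine violators
    return x[keep].detach()
```

The rigorous worst-case guarantee then comes from the separate neural network verifier mentioned above, which checks the decrease condition over the entire region rather than at sampled points.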

The researchers demonstrated their model's effectiveness by simulating a quadrotor drone that stabilizes itself in a 2D environment using only environmental data from lidar sensors. They also showed stable operation of other simulated robotic systems, including an inverted pendulum and a path-tracking vehicle, at a scale of complexity beyond the reach of earlier methods.

Sicun Gao, an associate professor of computer science and engineering at the University of California at San Diego, commended the innovative approach as a boost for control and robotics, noting that it paves the way for further algorithmic improvements. The method might eventually be applied across various sectors, including biomedicine and industrial processing.

While this approach improves scalability, the researchers are looking to enhance its performance in higher-dimensional systems and intend to incorporate more diverse forms of data. They also aim to extend these safety guarantees to systems operating amid uncertainty and disturbances, for instance ensuring that a drone can maintain stable flight and carry out its assigned tasks even in severe wind conditions.

Looking ahead, the team plans to apply their method to optimization problems, with the goal of minimizing the time and distance a robot needs to complete a task while remaining stable. They also hope to extend their techniques to humanoids and other real-world machines, where stability while interacting with the environment is crucial.

The research received support from Amazon, the National Science Foundation, the Office of Naval Research, and the AI2050 program at Schmidt Sciences. The researchers will present their paper at the 2024 International Conference on Machine Learning.
