Safety

Developing and validating robust systems controlled by artificial intelligence in a systematic and adaptable manner.

Neural networks have been of immense benefit in the design of robot controllers, improving the adaptability and effectiveness of these machines. However, their complexity makes it challenging to confirm that they will execute assigned tasks safely. Traditionally, verification of safety and stability is done using Lyapunov functions. If a Lyapunov function that consistently…
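
For reference, the standard Lyapunov conditions the excerpt alludes to (textbook definitions, not taken from the truncated article) are: a continuously differentiable function $V$ certifies asymptotic stability of the equilibrium $x = 0$ of the dynamics $\dot{x} = f(x)$ when

$$V(0) = 0, \qquad V(x) > 0 \ \text{for } x \neq 0, \qquad \dot{V}(x) = \nabla V(x)^{\top} f(x) < 0 \ \text{for } x \neq 0.$$

Finding a single $V$ that satisfies these conditions over the whole operating region is what becomes difficult when the closed-loop dynamics include a neural network.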

Read More

Developing and verifying robust AI-operated systems using rigorous and adaptable methods.

Researchers from the Massachusetts Institute of Technology's (MIT) Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed an algorithm to mitigate the risks of using neural networks in robots. While the complexity of neural networks gives robots greater capability, it also makes their behavior harder to predict. Current safety and stability verification techniques, which rely on Lyapunov functions, do not…
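
To make the verification problem concrete, here is a minimal, self-contained sketch in Python. The pendulum dynamics, the quadratic candidate V(x) = xᵀPx, and the sampling loop are all hypothetical illustrations of the general problem, not the CSAIL algorithm described in the article; a sample-based check like this can only falsify a candidate Lyapunov function, whereas sound verification must cover the entire operating region.

```python
import numpy as np

# Hypothetical closed-loop dynamics x' = f(x) for a damped pendulum,
# an illustrative stand-in for a neural-network-controlled system.
def f(x):
    theta, omega = x
    return np.array([omega, -np.sin(theta) - 0.5 * omega])

# Candidate Lyapunov function V(x) = x^T P x with a hand-picked P
# (in practice, V itself may be a learned neural network).
P = np.array([[1.0, 0.1],
              [0.1, 1.0]])

def V(x):
    return x @ P @ x

def V_dot(x, eps=1e-5):
    # Finite-difference gradient of V, dotted with the dynamics:
    # V_dot(x) = grad V(x) . f(x)
    grad = np.array([
        (V(x + eps * np.eye(2)[i]) - V(x - eps * np.eye(2)[i])) / (2 * eps)
        for i in range(2)
    ])
    return grad @ f(x)

# Sample-based falsification: search for states where the decrease
# condition V_dot(x) < 0 fails. Finding no violations is evidence,
# not proof -- sound verification must certify the whole region.
rng = np.random.default_rng(0)
samples = rng.uniform(-1.0, 1.0, size=(10_000, 2))
violations = [x for x in samples
              if V_dot(x) >= 0 and np.linalg.norm(x) > 1e-3]
print(f"decrease condition violated at {len(violations)} of {len(samples)} samples")
```

The gap between this kind of spot-checking and a guarantee that holds everywhere is exactly the scalability problem the article describes.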

Read More