
Robotics

Julie Shah named head of the Department of Aeronautics and Astronautics.

Julie Shah, a distinguished scholar and academic leader, will assume the role of head of the Department of Aeronautics and Astronautics (AeroAstro) at the Massachusetts Institute of Technology (MIT), effective May 1. According to Anantha Chandrakasan, MIT’s chief innovation and strategy officer, Shah has a remarkable record of interdisciplinary leadership and visionary contributions…

Read More

Developing and validating robust systems controlled by artificial intelligence in a systematic and adaptable manner.

Neural networks have been of immense benefit in the design of robot controllers, boosting the adaptability and effectiveness of these machines. However, their complex nature makes it challenging to confirm that they will execute assigned tasks safely. Traditionally, the verification of safety and stability is done using Lyapunov functions. If a Lyapunov function that consistently…
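The Lyapunov condition mentioned above can be illustrated for a simple linear closed-loop system. This is a minimal sketch of the classical check, not the researchers' method: a quadratic function V(x) = xᵀPx certifies stability of x' = Ax when P is positive definite and AᵀP + PA is negative definite.

```python
# Illustrative sketch: checking a quadratic Lyapunov function V(x) = x^T P x
# for a linear closed-loop system x' = A x. V certifies stability when P is
# positive definite and dV/dt = x^T (A^T P + P A) x is negative definite.
import numpy as np

def is_lyapunov_certificate(A, P, tol=1e-9):
    """Return True if V(x) = x^T P x certifies stability of x' = A x."""
    # P must be positive definite: all eigenvalues strictly positive.
    if np.min(np.linalg.eigvalsh(P)) <= tol:
        return False
    # A^T P + P A must be negative definite: all eigenvalues strictly negative.
    Q = A.T @ P + P @ A
    return np.max(np.linalg.eigvalsh(Q)) < -tol

# Stable toy system with V(x) = x^T x.
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
P = np.eye(2)
print(is_lyapunov_certificate(A, P))  # True
```

For a neural network controller the closed-loop dynamics are nonlinear, which is precisely why finding and verifying such a function becomes hard.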

Read More


Researchers from Google DeepMind present Mobility VLA: a multimodal instruction navigation method combining long-context VLMs and topological graphs.

Advancements in sensors, artificial intelligence (AI), and processing power have opened new possibilities in robot navigation. Many studies propose extending the natural-language instruction space of ObjNav and VLN to a multimodal space, allowing robots to follow text and image-based instructions simultaneously. This approach is called Multimodal Instruction Navigation (MIN). MIN encapsulates…
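The topological-graph component can be sketched in miniature. In a Mobility VLA-style pipeline, a long-context VLM selects a goal frame from an exploration video, and a low-level policy follows a path through a topological graph of those frames; the graph, node names, and goal below are illustrative, not DeepMind's code:

```python
# Illustrative sketch (not DeepMind's implementation): navigating a
# topological graph of video frames once a VLM has picked the goal node.
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search over a topological graph {node: [neighbors]}."""
    queue, visited = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable

# Toy graph: frames captured during a walkthrough of an office.
graph = {"lobby": ["hallway"],
         "hallway": ["lobby", "kitchen", "desk"],
         "kitchen": ["hallway"],
         "desk": ["hallway"]}
goal = "kitchen"  # e.g. what a VLM might return for "Where can I get coffee?"
print(shortest_path(graph, "lobby", goal))  # ['lobby', 'hallway', 'kitchen']
```

The division of labor is the key idea: the expensive VLM call resolves the instruction to a graph node once, while path following over the graph stays cheap.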

Read More


Hyperion: An Innovative, Modular Framework for High-Performance Optimization Tailored for Both Discrete and Continuous-Time SLAM Applications

Positioning and tracking a sensor suite within its environment is a critical capability in robotics. Traditional Simultaneous Localization and Mapping (SLAM) methods struggle with unsynchronized sensor data and demand heavy computation, since they estimate the pose only at discrete time steps, which complicates fusing asynchronous measurements from multiple sensors. Despite…
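The continuous-time idea can be sketched with a toy example. This is an assumed illustration, not Hyperion's API: representing the trajectory as a function of time (here, linear interpolation between pose knots) lets each sensor measurement be evaluated at its exact timestamp, however unsynchronized the sensors are.

```python
# Illustrative sketch of continuous-time trajectory queries: interpolate a
# 2D pose at an arbitrary timestamp from a sparse set of estimated knots,
# so asynchronous sensor measurements can be fused at their own timestamps.
import bisect

def pose_at(times, poses, t):
    """Linearly interpolate an (x, y) pose at time t from sorted knots."""
    i = bisect.bisect_right(times, t)
    i = min(max(i, 1), len(times) - 1)  # clamp to a valid segment
    t0, t1 = times[i - 1], times[i]
    w = (t - t0) / (t1 - t0)
    x = (1 - w) * poses[i - 1][0] + w * poses[i][0]
    y = (1 - w) * poses[i - 1][1] + w * poses[i][1]
    return (x, y)

times = [0.0, 1.0, 2.0]
poses = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
print(pose_at(times, poses, 0.25))  # (0.25, 0.0)
```

Real continuous-time SLAM systems use smoother representations such as splines or Gaussian processes, but the query pattern is the same: any timestamp maps to a pose.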

Read More

A Simple, Model-Free Open-Loop Baseline for Reinforcement Learning Mobility Tasks That Requires Neither Sophisticated Models Nor Heavy Computational Resources

Deep Reinforcement Learning (DRL) is advancing robotic control capabilities, but algorithmic complexity keeps rising. This complexity brings intricate implementation details that hurt the reproducibility of sophisticated algorithms, motivating simpler, less computationally demanding baselines. A team of international researchers from the German Aerospace…
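What an open-loop baseline for locomotion can look like is easy to sketch. This is a hypothetical illustration of the general idea, not the team's code: each joint tracks a fixed sinusoid with a few tunable parameters, with no state feedback and no neural network at all.

```python
# Hypothetical sketch of an open-loop locomotion baseline: each joint follows
# a sinusoid parameterized by amplitude, frequency, and phase. No observation
# is ever read, so the controller is model-free AND feedback-free.
import math

def open_loop_action(t, amplitudes, frequency, phases):
    """Return target joint positions at time t from pure sinusoids."""
    return [a * math.sin(2 * math.pi * frequency * t + p)
            for a, p in zip(amplitudes, phases)]

# Two-joint toy gait: equal amplitude, half-period phase offset.
amps, freq, phases = [0.5, 0.5], 1.0, [0.0, math.pi]
trajectory = [open_loop_action(t / 10, amps, freq, phases) for t in range(10)]
print(trajectory[2])  # both joints oscillate, pi out of phase
```

Because only a handful of scalars are optimized, such a baseline is trivially reproducible and gives a meaningful floor against which complex DRL pipelines can be judged.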

Read More

Designing household robots with a little common sense.

As robots are increasingly deployed for complex household tasks, engineers at MIT are trying to equip them with common-sense knowledge, allowing them to adapt swiftly when faced with disruptions. A newly developed method from the researchers merges robot motion data with common-sense knowledge from large language models (LLMs). The new approach allows a robot to…

Read More