
DeepMind trains robotic soccer players to perform actions like kicking, blocking, and defending.

Google’s DeepMind researchers have reached a notable milestone in robotics by successfully training humanoid robots to play soccer without manual programming. The research, detailed in a study published in Science Robotics, used deep reinforcement learning (RL) to teach commercially available, 20-inch-tall Robotis OP3 robots intricate movement and gameplay skills. Through trial and error in simulated environments, the robots learned to run, kick, block, recover from falls, and score goals.
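To make the trial-and-error idea concrete, here is a minimal sketch of a reward-driven training loop over a simulated soccer environment. The environment, observation size, joint count, and policy form are placeholder assumptions for illustration; the study itself uses a physics simulator and a distributed actor-critic RL algorithm rather than this toy loop.

```python
import numpy as np

# Hypothetical stand-ins for illustration only: this just shows the
# reward-driven trial-and-error structure of RL training in simulation.
class SoccerSimEnv:
    def reset(self):
        return np.zeros(64)                       # observation vector
    def step(self, action):
        obs = np.random.randn(64)                 # next observation
        reward = float(np.random.rand())          # e.g. speed toward ball, goals scored
        done = np.random.rand() < 0.01            # fall, goal, or timeout ends the episode
        return obs, reward, done

def policy(obs, params):
    return np.tanh(params @ obs)                  # joint-position targets in [-1, 1]

params = np.random.randn(20, 64) * 0.01           # 20 actuated joints on the OP3
env = SoccerSimEnv()
for episode in range(1000):
    obs, done, total_reward = env.reset(), False, 0.0
    while not done:
        obs, reward, done = env.step(policy(obs, params))
        total_reward += reward
    # A real agent would now update `params` from the collected experience
    # (e.g. a policy-gradient step); omitted here for brevity.
```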

The training approach began with teaching individual foundational skills, such as walking, kicking, and getting up, through neural networks known as “skill policies”. Each skill was mastered in a focused environment where the robot earned rewards for competency. These individual skill policies were then merged into a single master policy network via a method called policy distillation, so the master policy could determine which skill a given situation required.
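As a rough illustration of policy distillation, the sketch below trains a single “student” network to reproduce the actions of several frozen, pretrained skill policies. The dimensions, linear policies, and squared-error loss are simplifying assumptions, not the study’s actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, ACT_DIM = 64, 20                         # placeholder dimensions

# Pretend these are frozen, pretrained skill policies (e.g. get-up, walk, kick).
skill_policies = [rng.normal(size=(ACT_DIM, OBS_DIM)) * 0.1 for _ in range(3)]

def teacher_action(obs, skill_id):
    return np.tanh(skill_policies[skill_id] @ obs)

# Student (master) policy: a single network trained by regression to
# imitate whichever teacher is appropriate for the sampled situation.
student = np.zeros((ACT_DIM, OBS_DIM))
lr = 1e-2
for step in range(5000):
    skill_id = rng.integers(3)                    # which skill this situation calls for
    obs = rng.normal(size=OBS_DIM)
    target = teacher_action(obs, skill_id)
    pred = np.tanh(student @ obs)
    # Squared-error distillation loss; gradient flows through the tanh.
    grad = np.outer((pred - target) * (1 - pred**2), obs)
    student -= lr * grad
```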

In further refinement, the robots engaged in self-play, with agents facing earlier versions of themselves in simulated soccer matches. This iterative process led to continuous improvements in strategy and gameplay. To bring the simulation closer to real-world physical conditions, factors such as friction and the robots’ mass distribution were randomized during training. As a result, the learned policy became robust to physical variation and was ultimately transferred to real OP3 robots, which played soccer matches without needing additional adjustments.
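Domain randomization of this kind is commonly implemented by sampling physics parameters at the start of each training episode. The parameter names and ranges below are illustrative assumptions, not values from the study.

```python
import random
from dataclasses import dataclass

@dataclass
class SimPhysicsParams:
    floor_friction: float
    torso_mass_offset_kg: float   # shifts the robot's mass distribution
    motor_torque_scale: float
    sensor_latency_ms: float

def sample_randomized_physics() -> SimPhysicsParams:
    """Sample fresh physics parameters for one training episode.
    Ranges are placeholders for illustration only."""
    return SimPhysicsParams(
        floor_friction=random.uniform(0.5, 1.0),
        torso_mass_offset_kg=random.uniform(-0.1, 0.1),
        motor_torque_scale=random.uniform(0.8, 1.2),
        sensor_latency_ms=random.uniform(10, 40),
    )

# Rebuilding the simulated environment with newly sampled parameters each
# episode keeps the policy from overfitting to one exact physical setup,
# which helps it transfer to the real robot.
for episode in range(3):
    print(f"episode {episode}: {sample_randomized_physics()}")
```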

Watching the robots in action makes the results easier to appreciate: they move quickly and dynamically, spin to change direction, and coordinate their limbs to balance and kick at the same time. Analysis of the trained neural networks suggested that the agents had developed an emergent understanding of soccer strategy, including the value of ball possession and the need to defend the goal as an opponent approaches.

A striking result is that the RL-trained policy significantly outperformed a traditional, manually scripted baseline: the robots walked 181% faster, turned 302% faster, recovered from falls 63% quicker, and kicked the ball 34% harder.

Looking beyond its robot footballers, DeepMind is also bringing AI into the sport itself through an AI-driven football-coaching collaboration with Liverpool FC. The foreseeable future may even see the emergence of a competitive robot league, in which custom-built robots face each other in high-stakes matches.
