
Researchers at MIT, Harvard, and the University of Washington have shunned traditional reinforcement learning approaches, using crowdsourced feedback to teach artificial intelligence (AI) new skills instead. Traditional methods for teaching AI tasks often required a reward function that was updated and managed by a human expert. This limited scalability and was often time-consuming, particularly when the task was complex and multi-layered.

Under the new approach, nonexpert feedback can be crowdsourced to incentivise the AI and guide it towards its eventual goal. This way, the AI can learn faster, despite caveats such as the errors that are often synonymous with crowdsourced data. By allowing crowdsourcing from multiple nonexpert users, more feedback can be gathered asynchronously worldwide. The method could conceivably facilitate faster, more independent learning in AI, with robots able to learn specific functions within a user’s home by using crowdsourced feedback to guide their exploration.

This notion of autonomous learning was trialled both in simulation and in real-world scenarios with the assistance of a newly developed reinforcement learning method known as HuGE (Human Guided Exploration). The real-world trials included robotic arms learning to draw letters and pick up objects. The researchers suggested the HuGE system could learn from other forms of communication, notably natural language and physical interaction, as the system evolves. The team also highlighted the need for AI agents in any learning mechanism to align with human values. Future expansions of the technology could include using HuGE to teach multiple AI agents simultaneously.
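To make the idea concrete, the core mechanism of crowd-guided exploration can be sketched as majority voting over noisy nonexpert labels: annotators judge which visited states look closer to the goal, and the agent explores from the highest-rated state rather than following a hand-engineered reward function. This is a minimal illustrative sketch, not the HuGE implementation; all function names and data below are hypothetical.

```python
def aggregate_votes(feedback):
    """Tally noisy crowd votes. `feedback` maps a visited state to a
    list of 0/1 labels from nonexpert annotators answering 'does this
    state look closer to the goal?'. Averaging across many annotators
    smooths out individual labelling errors."""
    return {state: sum(labels) / len(labels)
            for state, labels in feedback.items()}

def pick_frontier_state(feedback):
    """Pick the visited state the crowd rates as most promising; an
    agent would then continue exploring from there, so even imperfect
    asynchronous feedback steers exploration towards the goal."""
    scores = aggregate_votes(feedback)
    return max(scores, key=scores.get)

# Hypothetical feedback from three nonexpert annotators on three states
# visited by a robot arm during exploration.
feedback = {
    "near_shelf": [1, 1, 0],  # mostly judged closer to the goal
    "at_door":    [0, 0, 1],
    "mid_room":   [1, 0, 0],
}
print(pick_frontier_state(feedback))  # prints "near_shelf"
```

Because votes only rank candidate states rather than define a precise reward, occasional wrong labels shift the scores slightly without derailing learning, which is why nonexpert, asynchronous feedback can substitute for an expert-maintained reward function.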
