Researchers from Carnegie Mellon University are examining the challenge of a mediator coordinating a group of strategic agents without knowledge of their underlying utility functions, a setting referred to as multi-agent imitation learning (MAIL). The problem is complex because it involves providing personalised, strategic guidance to each agent without a comprehensive understanding of their circumstances or…
Researchers from MIT and the MIT-IBM Watson AI Lab have developed a hardware solution that enhances the security of machine-learning applications on smartphones. Current health-monitoring apps require large amounts of data to be transferred back and forth between the phone and a central server, which can create security vulnerabilities and inefficiencies. To counter this, the…
Researchers from MIT and the MIT-IBM Watson AI Lab have created a machine-learning accelerator that is resistant to the most common types of cyberattacks. The chip can hold users' sensitive data, such as health records and financial information, enabling large AI models to run efficiently on devices while maintaining privacy. The accelerator maintains strong security,…
Roboticists and researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) are working to develop a system that can effectively train robots to perform tasks in specific environments. The ongoing research aims to help robots deal with disturbances, distractions, and changes in their operational environments. To this end, they have proposed a method to create…
Relational databases are fundamental to many digital systems, playing a critical role in data management across a variety of sectors, including e-commerce, healthcare, and social media. Through their table-based structure, they efficiently organize and retrieve the data that is crucial to operations in these fields. Yet the full potential of the valuable relational information within these…
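To make the table-based structure concrete, here is a minimal, self-contained sketch using Python's built-in sqlite3 module. The users and orders tables, their columns, and the sample rows are illustrative assumptions, not details taken from the article.

```python
import sqlite3

# In-memory SQLite database with two hypothetical tables linked by a foreign key.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute(
    "CREATE TABLE orders ("
    "id INTEGER PRIMARY KEY, user_id INTEGER REFERENCES users(id), total REAL)"
)

cur.executemany("INSERT INTO users VALUES (?, ?)", [(1, "Alice"), (2, "Bob")])
cur.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(10, 1, 42.50), (11, 1, 7.25), (12, 2, 13.00)],
)

# The relational information lives in the links between tables: a join over the
# foreign key recovers each user's total spend from the two separate tables.
for name, total in cur.execute(
    "SELECT u.name, SUM(o.total) FROM users u "
    "JOIN orders o ON o.user_id = u.id GROUP BY u.name"
):
    print(name, total)

conn.close()
```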
Artificial Intelligence (AI) that can work effectively with humans requires a robust model of human behaviour. However, humans often behave irrationally or suboptimally, because their decision-making abilities are limited.
Researchers at MIT and the University of Washington have developed a model for predicting an agent's behaviour (whether human or machine) by considering computational constraints that affect problem-solving, which they refer…
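As a rough, generic illustration of how a computational constraint can shape an agent's choices (a toy sketch, not the researchers' actual model), consider an agent that plans by looking only a limited number of steps ahead. The states, payoffs, and depth parameter below are all hypothetical.

```python
# Toy example: an agent plans over a tiny deterministic decision process but may
# only look `depth` steps ahead. A shallower lookahead stands in for a tighter
# computational constraint and produces apparently suboptimal behaviour.
REWARDS = {"A": 1, "B": 0, "C": 0, "D": 10}   # payoff if planning stops at this state
TRANSITIONS = {                               # state -> {action: next state}
    "start": {"left": "A", "right": "C"},
    "A": {}, "B": {}, "D": {},
    "C": {"left": "B", "right": "D"},
}

def value(state: str, depth: int) -> float:
    """Best achievable value when planning only `depth` steps ahead."""
    actions = TRANSITIONS.get(state, {})
    if depth == 0 or not actions:
        return REWARDS.get(state, 0)
    return max(value(nxt, depth - 1) for nxt in actions.values())

def act(state: str, depth: int) -> str:
    """Pick the action that looks best under the limited lookahead."""
    actions = TRANSITIONS[state]
    return max(actions, key=lambda a: value(actions[a], depth - 1))

# With depth 1 the agent cannot see the delayed payoff of 10 behind "right";
# with depth 2 it plans far enough ahead to reach it.
print(act("start", depth=1))   # 'left'  (myopic choice)
print(act("start", depth=2))   # 'right' (optimal choice)
```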
Health-monitoring apps powered by advanced machine-learning (ML) models could be more secure and still run efficiently on devices, according to researchers from MIT and the MIT-IBM Watson AI Lab. Though these models require vast amounts of data to be shuttled between a smartphone and a central memory server, a machine-learning accelerator can speed up the process…
The phenomenon of "model collapse" represents a significant challenge in artificial intelligence (AI) research, particularly impacting large language models (LLMs). When these models are continually trained on data created by earlier versions of similar models, they lose their ability to accurately represent the underlying data distribution, deteriorating in effectiveness over successive generations.
Current training methods of…
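A minimal simulation of this dynamic, assuming a single Gaussian as a stand-in for a generative model: each generation is fit only to samples produced by the previous generation, so the estimated distribution drifts away from the original one, with the fitted variance tending to shrink over successive generations. The sample size, generation count, and seed are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

true_mu, true_sigma = 0.0, 1.0      # the original ("real") data distribution
n_samples, n_generations = 50, 20   # small samples make the drift easy to see

mu, sigma = true_mu, true_sigma
for gen in range(1, n_generations + 1):
    # "Train" generation `gen` only on synthetic data from generation `gen - 1`.
    synthetic = rng.normal(mu, sigma, size=n_samples)
    mu, sigma = synthetic.mean(), synthetic.std()
    print(f"generation {gen:2d}: mu={mu:+.3f}, sigma={sigma:.3f}")
```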
The rapid development of Transformer models in natural language processing (NLP) has brought significant challenges, particularly the memory required to train these large-scale models. A new paper addresses these issues by presenting a methodology called MINI-SEQUENCE TRANSFORMER (MST), which optimizes memory usage during long-sequence training without compromising performance.
Traditional approaches such as…
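The core memory idea can be sketched in a few lines of PyTorch: split the long sequence into mini-sequences and push them through a memory-heavy block one at a time, so the large intermediate activation is never materialized for the full sequence at once. This is a simplified, forward-only illustration with assumed shapes and a plain two-layer MLP; it is not the authors' implementation, and the actual method also has to handle the backward pass.

```python
import torch
from torch import nn

d_model, seq_len, n_chunks = 256, 8192, 8

# A stand-in for a Transformer MLP block, whose 4 * d_model intermediate
# activation is one of the memory hot spots during long-sequence training.
mlp = nn.Sequential(
    nn.Linear(d_model, 4 * d_model),
    nn.GELU(),
    nn.Linear(4 * d_model, d_model),
)

x = torch.randn(seq_len, d_model)

with torch.no_grad():
    # Standard pass: one [8192, 1024] intermediate for the whole sequence.
    full = mlp(x)

    # Mini-sequence pass: eight [1024, 1024] intermediates, processed one at a time.
    chunked = torch.cat([mlp(part) for part in x.chunk(n_chunks, dim=0)])

# Same output, lower peak activation memory.
print(torch.allclose(full, chunked, atol=1e-6))
```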