
Machine learning

Researchers from Carnegie Mellon University Study Expert Guidance and Strategic Deviations in Multi-Agent Imitation Learning.

Researchers from Carnegie Mellon University are studying the problem of a mediator coordinating a group of strategic agents without knowledge of their underlying utility functions, a setting known as multi-agent imitation learning (MAIL). The problem is hard because the mediator must provide personalized, strategic guidance to each agent without a comprehensive understanding of their circumstances or…

Read More
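
A toy sketch can make the mediator's difficulty concrete (my illustration, not the CMU authors' algorithm; the agents, utilities, and numbers are hypothetical): the mediator recommends an action to each agent, and since utilities are private, what matters is how much any agent could gain by strategically deviating from its recommendation.

```python
# Toy illustration of the MAIL setting (not the paper's method):
# a mediator recommends actions, and we measure each agent's
# incentive to deviate -- the gap the mediator must drive to zero.

# Hypothetical private utilities over 3 actions, unknown to the mediator.
private_utility = {
    "agent_a": [1.0, 0.4, 0.7],
    "agent_b": [0.2, 0.9, 0.5],
}

recommendation = {"agent_a": 0, "agent_b": 2}  # mediator's guidance

def deviation_gain(agent, recommended_action):
    """Best utility an agent gains by ignoring its recommendation."""
    utils = private_utility[agent]
    return max(utils) - utils[recommended_action]

for agent, action in recommendation.items():
    print(f"{agent}: recommended {action}, "
          f"deviation gain {deviation_gain(agent, action):.2f}")
```

Strategic agents only follow guidance when this deviation gain is near zero, which is the flavor of guarantee the MAIL objective asks of a learned mediator.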

This small microchip can protect user information while enabling efficient computing on a mobile phone.

Researchers from MIT and the MIT-IBM Watson AI Lab have developed a hardware solution that enhances the security of machine-learning applications on smartphones. Current health-monitoring apps require large amounts of data to be transferred back and forth between the phone and a central server, which can create security vulnerabilities and inefficiency. To counter this, the…

Read More

This small microchip can protect user information while facilitating effective computing on a mobile phone.

Researchers from MIT and the MIT-IBM Watson AI Lab have created a machine-learning accelerator that is resistant to the most common types of cyberattacks. The chip can keep sensitive user data such as health records and financial information private while enabling large AI models to run efficiently on devices. The accelerator maintains strong security,…

Read More

Home robots learn through a real-to-sim-to-real cycle.

Researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) are working to develop a system that can train robots to perform tasks effectively in specific environments. The ongoing research aims to help robots deal with disturbances, distractions, and changes in their operational environments. To that end, they have proposed a method to create…

Read More

Stanford researchers introduce RelBench: A Public Benchmark for Deep Learning on Relational Databases.

Relational databases are fundamental to many digital systems, playing a critical role in data management across sectors including e-commerce, healthcare, and social media. Their table-based structure efficiently organizes and retrieves the data that is crucial to operations in these fields. Yet the full potential of the valuable relational information within these…

Read More
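
To ground what "table-based structure" buys, here is a minimal sketch of relational data and a cross-table query, using Python's built-in sqlite3 module with hypothetical tables (a generic illustration, not RelBench's own API):

```python
# Two linked tables and a join: the relational structure that
# RelBench-style benchmarks want deep models to exploit directly.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id),
        amount REAL
    );
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (1, 1, 19.99), (2, 1, 5.00), (3, 2, 42.00);
""")

# A predictive task (e.g., future spend per customer) depends on exactly
# these cross-table links, which flattening into one table discards.
for name, total in conn.execute("""
    SELECT c.name, SUM(o.amount)
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.id
"""):
    print(name, total)
```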

To build a better AI assistant, start by modeling the irrational behavior of humans.

Artificial Intelligence (AI) that can work effectively with humans requires a robust model of human behaviour. However, humans often behave irrationally, their decision-making limited by computational constraints. Researchers at MIT and the University of Washington have developed a model for predicting an agent's behaviour (whether human or machine) by accounting for the computational constraints that affect problem-solving, which they refer…

Read More
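
A minimal sketch of the "computational constraints" idea, under my own simplifying assumptions (a hand-rolled toy, not the researchers' actual model): the agent plans with a limited lookahead depth, and an observer infers which depths are consistent with the behaviour it sees.

```python
# Toy bounded-rationality model: an agent evaluates reward sequences
# only `depth` steps ahead, so shallow planners miss delayed payoffs.

# Hypothetical choice: branch 'b' only pays off at step 3.
branches = {"a": [1.0, 0.0, 0.0], "b": [0.0, 0.0, 5.0]}

def choose(depth):
    """Pick the branch with the best total reward within `depth` steps."""
    return max(branches, key=lambda k: sum(branches[k][:depth]))

def infer_depth(observed_choice, max_depth=3):
    """Lookahead depths consistent with the observed choice."""
    return [d for d in range(1, max_depth + 1) if choose(d) == observed_choice]

print(choose(1))         # 'a' -- a shallow planner grabs 1.0 now
print(choose(3))         # 'b' -- a deeper planner waits for 5.0
print(infer_depth("a"))  # [1, 2]: depths that explain choosing 'a'
```

Inverting observed choices into a budget, as in the last function, is the sense in which constraints on computation become a predictive model of behaviour.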

This small chip can protect user data while enabling efficient computing on a mobile device.

Health-monitoring apps powered by advanced machine-learning (ML) models could be more secure and still run efficiently on devices, according to researchers from MIT and the MIT-IBM Watson AI Lab. The models require vast amounts of data to shuttle between a smartphone and a central memory server, and a machine-learning accelerator can speed up the process…

Read More

This AI Study Demonstrates Model Collapse as Successive Model Generations Are Trained on Synthetic Data.

The phenomenon of "model collapse" represents a significant challenge in artificial intelligence (AI) research, particularly for large language models (LLMs). When these models are continually trained on data created by earlier versions of similar models, they lose the ability to accurately represent the underlying data distribution and deteriorate over successive generations. Current training methods of…

Read More
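
The mechanism is easy to reproduce in miniature. The sketch below is an illustrative toy, not the study's experimental setup: each "model" is just a Gaussian fit by maximum likelihood to samples drawn from the previous generation's model.

```python
# Miniature model collapse: refit a Gaussian on its own generated data.
import random
import statistics

mu, sigma = 0.0, 1.0                    # generation 0: true distribution
for gen in range(1, 51):
    samples = [random.gauss(mu, sigma) for _ in range(50)]
    mu = statistics.fmean(samples)      # trained only on generated data
    sigma = statistics.pstdev(samples)  # ML estimate, biased slightly low
    if gen % 10 == 0:
        print(f"gen {gen}: mu={mu:+.3f} sigma={sigma:.3f}")
# Finite sampling plus the biased variance estimate make the fitted
# distribution contract, progressively losing the original data's tails.
```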

Reducing the Memory Footprint of Large NLP Models: An Examination of the Mini-Sequence Transformer

The rapid development of Transformer models in natural language processing (NLP) has brought significant challenges, particularly the memory required to train these large-scale models. A recent paper addresses these issues with a methodology called MINI-SEQUENCE TRANSFORMER (MST), which optimizes memory usage during long-sequence training without compromising performance. Traditional approaches such as…

Read More
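
The chunking idea behind MST can be sketched in a few lines (a simplified illustration, not the authors' implementation; the module and sizes are assumptions for the demo): process the sequence one mini-sequence at a time so the MLP's large intermediate activation never exists for the full sequence at once.

```python
# Mini-sequence-style chunking of a Transformer MLP block (sketch).
import torch
import torch.nn as nn

d_model, d_hidden, seq_len, n_chunks = 512, 2048, 4096, 8
mlp = nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                    nn.Linear(d_hidden, d_model))
x = torch.randn(1, seq_len, d_model)

# Standard pass: a (seq_len x d_hidden) intermediate exists all at once.
y_full = mlp(x)

# Chunked pass: only a (seq_len/n_chunks x d_hidden) intermediate is
# alive at any moment; per-token ops make the outputs identical.
y_chunked = torch.cat([mlp(c) for c in x.chunk(n_chunks, dim=1)], dim=1)

print(torch.allclose(y_full, y_chunked, atol=1e-5))  # True
```

The paper pairs this partitioning of the MLP and LM-head blocks with activation recomputation, so the savings carry through training rather than just the forward pass sketched here.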