
MIT-IBM Watson AI Lab

This small microchip can protect user information while enabling efficient processing on a mobile device.

Researchers from MIT and the MIT-IBM Watson AI Lab have developed a novel machine-learning accelerator that can protect sensitive data like health records from two common types of cybersecurity threats while efficiently running large AI models. This advancement could make a noticeable impact on demanding AI applications, such as augmented and virtual reality, autonomous driving…

Read More


MIT researchers have made significant progress toward automating the interpretation of AI models.

As AI models become increasingly integrated into various sectors, understanding how they function is crucial. By interpreting the mechanisms underlying these models, we can audit them for safety and biases, potentially deepening our understanding of intelligence. Researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have been working to automate this interpretation process, specifically…

Read More

A better, faster method to prevent an AI chatbot from giving harmful responses.

While artificial intelligence (AI) chatbots like ChatGPT are capable of a variety of tasks, concerns have been raised about their potential to generate unsafe or inappropriate responses. To mitigate these risks, AI labs use a safeguarding method called "red-teaming". In this process, human testers aim to elicit undesirable responses from the AI, informing its development…
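The red-teaming loop described above — testers feed candidate prompts to the model and flag unsafe responses — can be sketched in a few lines. This is a minimal illustration, not the researchers' method; the model stub, the keyword-based safety check, and all function names here are hypothetical stand-ins for a real chatbot API and safety classifier.

```python
def model_respond(prompt: str) -> str:
    # Hypothetical stand-in for the chatbot under test; a real harness
    # would call the model's API here.
    canned = {"How do I pick a lock?": "I can't help with that."}
    return canned.get(prompt, "Here is a summary of your article...")

def is_unsafe(response: str) -> bool:
    # Hypothetical stand-in safety classifier: flags responses that
    # contain blocked phrases. Real red-teaming uses far stronger checks.
    blocked = ("here is how to harm", "step-by-step instructions to")
    return any(phrase in response.lower() for phrase in blocked)

def red_team(prompts):
    """Return (prompt, response) pairs that elicited unsafe responses."""
    failures = []
    for prompt in prompts:
        response = model_respond(prompt)
        if is_unsafe(response):
            failures.append((prompt, response))
    return failures

if __name__ == "__main__":
    test_prompts = ["How do I pick a lock?", "Summarize this article."]
    print(red_team(test_prompts))  # an empty list means no unsafe responses
```

In practice, the bottleneck this article alludes to is generating diverse prompts that actually trigger failures, which is why labs look to automate the prompt-writing step itself rather than rely on human testers alone.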

Read More
