
Machine learning

Scientists improve peripheral vision capabilities in AI models.

Researchers from MIT have developed an image dataset that simulates peripheral vision in machine learning models, improving their object detection capabilities. However, even with this modification, the AI models still fell short of human performance. The researchers discovered that size and visual clutter, factors that impact human performance, largely did not affect the AI's ability.…

Read More

Three Questions: What You Need to Know About Audio Deepfakes

Audio deepfakes have recently been in the news, particularly regarding their harmful uses, such as fraudulent robocalls impersonating Joe Biden that encouraged people not to vote. Such malicious uses could disrupt political campaigns and financial markets, and enable identity theft. However, Nauman Dawalatabad, a postdoctoral researcher at MIT, argues that deepfakes…

Read More

Improving Reliability in Large Language Models: Fine-Tuning for Calibrated Uncertainty in Critical Use Cases

Large language models (LLMs) often fail to accurately represent uncertainty about the reliability of their own output. This can have serious consequences in areas such as healthcare, where stakeholder confidence in a system's predictions is critical. Variation in free-form language generation further complicates the issue, as these outputs cannot be…
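The calibration gap described here can be made concrete with a standard metric, expected calibration error (ECE). The sketch below is a generic illustration rather than the method from the MIT work; the binning scheme and toy predictions are assumptions for demonstration only.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected calibration error: bin predictions by confidence, then
    average the |accuracy - mean confidence| gap, weighted by bin size."""
    n = len(confidences)
    total = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Indices of predictions whose confidence falls in this bin.
        idx = [i for i, c in enumerate(confidences) if lo < c <= hi]
        if idx:
            acc = sum(correct[i] for i in idx) / len(idx)
            avg_conf = sum(confidences[i] for i in idx) / len(idx)
            total += (len(idx) / n) * abs(acc - avg_conf)
    return total

# Well calibrated: 80% confident and right 80% of the time.
print(round(expected_calibration_error([0.8] * 10, [1] * 8 + [0] * 2), 3))  # → 0.0
# Overconfident: 90% confident but right only half the time.
print(round(expected_calibration_error([0.9] * 10, [1] * 5 + [0] * 5), 3))  # → 0.4
```

A well-calibrated model scores near zero; fine-tuning for calibration aims to shrink exactly this kind of gap.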

Read More

Enhancing AI Model Generalizability and Performance: New Loss Functions for Optimal Choices

Artificial Intelligence (AI) aims to create systems that can execute tasks normally requiring human intelligence. These tasks include learning, reasoning, problem-solving, perception, and language understanding. Such technologies are highly beneficial in various industries such as healthcare, finance, transportation, and entertainment. Consequently, optimizing AI models to efficiently and precisely perform these tasks is a significant challenge…

Read More

Understanding Minima Stability and Larger Learning Rates: Gradient Descent in Over-Parameterized ReLU Networks

Neural networks trained with gradient descent often perform well even when over-parameterized and randomly initialized. They frequently reach globally optimal solutions, achieving zero training error without overfitting, a phenomenon referred to as "benign overfitting." In Rectified Linear Unit (ReLU) networks, however, solutions that interpolate the data can still lead to harmful overfitting. Particularly in…
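As a toy illustration of the interpolation phenomenon mentioned above, the numpy sketch below trains an over-parameterized ReLU model by gradient descent until it fits even random labels. It is a deliberate simplification (a random-features model where only the output layer is trained), not the construction analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 8, 10, 256                       # 8 training points, 256 hidden ReLU units

X = rng.normal(size=(n, d))
y = rng.choice([-1.0, 1.0], size=n)        # random labels: pure memorization

W = rng.normal(size=(d, m)) / np.sqrt(d)   # frozen random first layer
H = np.maximum(X @ W, 0.0)                 # hidden ReLU activations
a = np.zeros(m)                            # trainable output weights

# Step size chosen from the kernel's top eigenvalue to keep GD stable.
K = H @ H.T / n
lr = 1.0 / np.linalg.eigvalsh(K).max()

for _ in range(20000):
    residual = H @ a - y
    a -= lr * (H.T @ residual) / n         # gradient of (1/2n) * ||Ha - y||^2

final_loss = np.mean((H @ a - y) ** 2)
print("interpolated random labels:", final_loss < 1e-3)
```

With far more units than data points, plain gradient descent drives training error to essentially zero on arbitrary labels; whether such interpolating solutions generalize is exactly the question the research examines.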

Read More

Improving software testing through generative AI

Generative AI has vast potential for creating synthetic data that mimics real-world scenarios, which in turn can help organizations improve their operations. In line with this, DataCebo, a spinout from MIT, has developed a generative software system called the Synthetic Data Vault (SDV), which has been employed by thousands of data…

Read More

Recent research from Google unveils the Personal Health Large Language Model (Ph-LLM), a version of Gemini fine-tuned to understand numerical time-series data related to personal health.

Large language models (LLMs), flexible tools for language generation, have shown promise in areas including medical education, research, and clinical practice. LLMs enhance the analysis of healthcare data, producing detailed reports, medical differential diagnoses, standardized assessments of mental functioning, and delivery of psychological interventions. They extract valuable information from clinical data, illustrating their possible…

Read More

Overcoming Model Collapse When Scaling AI Models through Reinforced Synthetic Data

A growing reliance on AI-generated data has led to concerns about model collapse, a phenomenon where a model's performance significantly deteriorates when trained on synthesized data. This issue has the potential to obstruct the development of methods for efficiently creating high-quality text summaries from large volumes of data. Currently, the methods used to prevent model…
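To make the notion of model collapse concrete, here is a common toy demonstration (an illustrative assumption, not the method from the work described): repeatedly fitting a Gaussian to samples drawn only from the previous generation's fitted model. Estimation error compounds across generations and the learned distribution's variance drains away.

```python
import random

random.seed(0)

mu, sigma = 0.0, 1.0      # generation 0: the "real" data distribution
n = 50                    # samples available to each generation
variances = [sigma ** 2]

for generation in range(200):
    # Each generation trains only on data synthesized by its predecessor.
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    mu = sum(samples) / n
    # The maximum-likelihood variance (divide by n) is biased slightly low,
    # and sampling noise compounds, so the fitted model keeps narrowing.
    var = sum((s - mu) ** 2 for s in samples) / n
    sigma = var ** 0.5
    variances.append(var)

print(f"variance after 200 generations: {variances[-1]:.4f} (started at 1.0)")
```

The fitted variance shrinks dramatically, i.e. the model loses the diversity of the original data, which is the failure mode that reinforcement-style filtering of synthetic data aims to prevent.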

Read More
