Generative AI is increasingly being used to create synthetic data, helping organizations handle situations where real data is limited or sensitive. Over the past three years, DataCebo, an MIT spinoff, has offered a generative software system known as the Synthetic Data Vault (SDV) that enables organizations to create synthetic…
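For readers who want a concrete sense of the workflow, SDV is distributed as an open-source Python library. The sketch below follows its documented single-table usage, with a small hypothetical dataframe standing in for an organization's real data; exact class names and arguments may vary between SDV versions.

```python
# Minimal sketch of generating synthetic rows with the open-source SDV library.
# The table columns and values below are illustrative placeholders.
import pandas as pd
from sdv.metadata import SingleTableMetadata
from sdv.single_table import GaussianCopulaSynthesizer

# Hypothetical "real" data that we want to mimic.
real_data = pd.DataFrame({
    "age": [34, 29, 41, 52, 23],
    "account_balance": [1200.5, 830.0, 4500.75, 310.2, 990.9],
    "has_loan": [True, False, True, False, False],
})

# Infer column types from the dataframe.
metadata = SingleTableMetadata()
metadata.detect_from_dataframe(data=real_data)

# Fit a synthesizer on the real data, then sample brand-new synthetic rows.
synthesizer = GaussianCopulaSynthesizer(metadata)
synthesizer.fit(data=real_data)
synthetic_data = synthesizer.sample(num_rows=100)
print(synthetic_data.head())
```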
Large language models (LLMs) can sometimes mislead users into making poor decisions by providing wrong information, a phenomenon known as 'hallucination'. To mitigate this, a team of researchers from Stanford University has proposed a new method for linguistic calibration. The framework involves a two-step training process for LLMs.
In the first stage - supervised finetuning…
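The summary cuts off before the training details, so the following is only a rough illustration of the general idea behind calibration-oriented supervised finetuning: training targets whose wording reflects the model's confidence. The formatting function, prompts, and thresholds are hypothetical and are not the Stanford team's actual pipeline.

```python
# Illustrative sketch: building supervised-finetuning examples whose answers
# carry explicit, verbalized confidence. All values here are hypothetical.
from dataclasses import dataclass

@dataclass
class CalibrationExample:
    prompt: str
    answer: str
    confidence: float  # estimated probability that the answer is correct

def to_finetuning_text(example: CalibrationExample) -> str:
    """Render a training target whose hedging matches the confidence score."""
    if example.confidence >= 0.9:
        hedge = "I am confident that"
    elif example.confidence >= 0.6:
        hedge = "I believe, though I am not certain, that"
    else:
        hedge = "I am unsure, but it is possible that"
    return f"Question: {example.prompt}\nAnswer: {hedge} {example.answer}"

# Example usage with a made-up confidence score.
ex = CalibrationExample(
    prompt="Which year was the transistor invented?",
    answer="the transistor was invented in 1947.",
    confidence=0.85,
)
print(to_finetuning_text(ex))
```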
Researchers from MIT and the MIT-IBM Watson AI Lab have developed a language-based navigation method for AI robots. The method uses textual descriptions instead of visual information, simplifying the process of robotic navigation. Processing visual data traditionally requires significant computational capacity and detailed, hand-crafted machine-learning models. The researchers' approach involves converting a…
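As a loose illustration of the text-only navigation idea, the sketch below captions the current view, combines that caption with the instruction, and asks a language-model-style chooser for the next action. The function names, prompt format, and fixed outputs are placeholders, not the MIT system's actual components.

```python
# Hypothetical sketch of navigation driven by text rather than raw pixels.
from typing import List

ACTIONS = ["move forward", "turn left", "turn right", "stop"]

def caption_observation(image) -> str:
    """Stand-in for an off-the-shelf image captioner (assumption)."""
    return "A hallway with a red door on the left and an open room ahead."

def choose_action(instruction: str, caption: str, history: List[str]) -> str:
    """Stand-in for querying a language model with a text-only prompt."""
    prompt = (
        f"Instruction: {instruction}\n"
        f"Current view: {caption}\n"
        f"Previous actions: {', '.join(history) or 'none'}\n"
        f"Choose one of {ACTIONS}."
    )
    # In a real system this prompt would be sent to an LLM; here we return
    # a fixed action so the sketch stays self-contained and runnable.
    return "move forward"

history: List[str] = []
action = choose_action("Go to the red door.", caption_observation(None), history)
history.append(action)
print(action)
```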
AI systems, particularly large language models (LLMs) and multimodal models, can be manipulated through their vulnerabilities to produce harmful outputs, raising questions about their safety and reliability. Existing defenses, such as refusal training and adversarial training, often fall short against sophisticated adversarial attacks and may degrade model performance.
Addressing these limitations, a research team from Black…
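For context on the existing defenses mentioned above, refusal training typically pairs harmful requests (and perturbed variants of them) with refusal responses in the finetuning data. The toy sketch below shows that pairing step only; the prompts and the perturbation are made up, and it says nothing about the defense proposed by the team in this item.

```python
# Purely illustrative sketch of assembling refusal-training examples.
import random

harmful_prompts = [
    "Explain how to bypass a building's alarm system.",
    "Write a message designed to scam an elderly person.",
]
refusal = "I can't help with that request."

def perturb(prompt: str) -> str:
    """Toy stand-in for adversarial augmentation (e.g. rephrasing or noise)."""
    words = prompt.split()
    random.shuffle(words)
    return " ".join(words)

def make_refusal_examples(prompts, n_variants=2):
    """Pair each harmful prompt, plus perturbed variants, with a refusal."""
    examples = []
    for p in prompts:
        variants = [p] + [perturb(p) for _ in range(n_variants)]
        examples.extend({"prompt": v, "response": refusal} for v in variants)
    return examples

for ex in make_refusal_examples(harmful_prompts):
    print(ex)
```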
Boosting the performance of solar cells, transistors, LEDs, and batteries requires better electronic materials, which are often discovered in novel compositions. Scientists have turned to AI tools to identify promising materials from among millions of chemical formulations, and engineers have built machines that can print hundreds of samples at a time based on compositions identified by the AI algorithms.…
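To make the screening step concrete, the hypothetical sketch below enumerates coarse candidate compositions, ranks them with a stand-in surrogate model, and keeps the top few for automated printing. The element set, scoring function, and selection size are placeholders rather than the researchers' actual models or hardware.

```python
# Hypothetical sketch of AI-driven composition screening before printing.
import itertools
import random

ELEMENTS = ["Cu", "Zn", "Sn", "S"]  # illustrative element set

def candidate_compositions(step=0.25):
    """Enumerate coarse fractional compositions over the element set."""
    fracs = [round(i * step, 2) for i in range(int(1 / step) + 1)]
    for combo in itertools.product(fracs, repeat=len(ELEMENTS)):
        if abs(sum(combo) - 1.0) < 1e-9:
            yield dict(zip(ELEMENTS, combo))

def predicted_performance(composition):
    """Placeholder for an ML surrogate that predicts device performance."""
    return random.random()

# Rank candidates and keep the top few for printing as physical samples.
ranked = sorted(candidate_compositions(), key=predicted_performance, reverse=True)
print_queue = ranked[:5]
for comp in print_queue:
    print(comp)
```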