AI firm Mistral AI has launched Mistral Large 2, its latest flagship model. The new iteration offers significant improvements over its predecessor, with stronger code generation, mathematics, and reasoning, along with advanced multilingual support. Mistral Large 2 also adds enhanced function-calling capabilities and is designed to be cost-efficient, fast, and high-performing.
Users can…
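To make the function-calling feature concrete, here is a minimal sketch of a request to Mistral's chat completions endpoint with a tool definition. The endpoint path and the "mistral-large-latest" alias follow Mistral's public API documentation but should be verified against current docs; the `get_weather` tool is a hypothetical example, not part of the API.

```python
# Minimal sketch of function calling against Mistral's chat completions API.
# Assumptions: endpoint path and "mistral-large-latest" alias follow the public
# docs (verify before use); the get_weather tool is purely illustrative.
import os
import requests

API_URL = "https://api.mistral.ai/v1/chat/completions"

payload = {
    "model": "mistral-large-latest",  # assumed alias for Mistral Large 2
    "messages": [
        {"role": "user", "content": "What is the weather in Paris today?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool for illustration
                "description": "Look up the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "tool_choice": "auto",
}

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
message = resp.json()["choices"][0]["message"]
# If the model decided to call the tool, the arguments arrive as JSON text
# in tool_calls; otherwise a plain text answer is returned.
print(message.get("tool_calls") or message.get("content"))
```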
Large Language Models (LLMs), widely used in automation and content creation, are vulnerable to manipulation through adversarial attacks, creating significant risks of misinformation, privacy breaches, and criminal activity. According to research led by Meetyou AI Lab, Osaka University, and East China Normal University, these sophisticated models remain open to harmful exploitation despite safety…
MIT and Harvard researchers have highlighted the divergence between human expectations of AI system capabilities and their actual performance, particularly for large language models (LLMs). When AI inconsistently matches human expectations, public trust can erode, obstructing the broad adoption of AI technology. This issue, the researchers emphasized, escalates the risk…
Remote sensing is a crucial technology that uses satellite and aerial sensors to detect and classify objects on Earth. It plays a significant role in environmental monitoring, agricultural management, and natural resource conservation. It enables scientists to accumulate massive amounts of data over large geographical areas and timeframes, providing…
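As a small illustration of the per-pixel analysis remote sensing enables, the sketch below computes a vegetation index (NDVI) from red and near-infrared bands and thresholds it into a crude vegetation mask. It runs on synthetic arrays rather than a real satellite product, and the 0.3 threshold is an assumption chosen only for illustration.

```python
# Sketch: compute NDVI from red and near-infrared reflectance and derive a
# simple vegetation mask. Synthetic data stands in for real satellite bands;
# the 0.3 threshold is an illustrative assumption, not a calibrated value.
import numpy as np

rng = np.random.default_rng(0)
red = rng.uniform(0.05, 0.4, size=(256, 256))  # red-band reflectance
nir = rng.uniform(0.2, 0.6, size=(256, 256))   # near-infrared reflectance

# NDVI = (NIR - Red) / (NIR + Red), in [-1, 1]; higher values suggest denser vegetation.
ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)

vegetation_mask = ndvi > 0.3
print(f"Vegetated pixels: {vegetation_mask.mean():.1%}")
```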
Large Language Models (LLMs) such as GPT-4, Gemini, and Claude have exhibited striking capabilities, but evaluating them is complex and calls for an integrated, transparent, standardized, and reproducible framework. Despite this need, no comprehensive evaluation framework currently exists, which has hampered progress in this area.
However, researchers from the LMMs-Lab Team and S-Lab at NTU, Singapore, developed the…
Foundation large language models (LLMs) including GPT-4, Gemini, and Claude have shown significant competencies, matching or surpassing human performance. In this light, benchmarks are necessary tools for determining the strengths and weaknesses of different models, and transparent, standardized, and reproducible evaluations are crucial for both language and multimodal models. However, the development of custom…
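To make the idea of a standardized, reproducible evaluation concrete, the sketch below scores a model callable on a fixed exact-match task with a pinned seed and records the settings needed to rerun the comparison. It is a generic illustration, not the LMMs-Lab framework or any real benchmark; the toy items and the `model_fn` interface are assumptions.

```python
# Generic sketch of a reproducible exact-match evaluation: fixed task list,
# pinned seed, and a results record that captures the settings used.
# The toy items and model_fn interface are illustrative assumptions, not a
# real benchmark or any specific evaluation library's API.
import json
import random
from typing import Callable

TASK = [
    {"prompt": "2 + 2 = ?", "answer": "4"},
    {"prompt": "Capital of France?", "answer": "Paris"},
    {"prompt": "H2O is commonly called?", "answer": "water"},
]

def evaluate(model_fn: Callable[[str], str], seed: int = 0) -> dict:
    rng = random.Random(seed)  # pinned seed -> same item order every run
    items = TASK[:]
    rng.shuffle(items)
    correct = sum(
        model_fn(item["prompt"]).strip().lower() == item["answer"].lower()
        for item in items
    )
    return {
        "metric": "exact_match",
        "seed": seed,
        "num_items": len(items),
        "accuracy": correct / len(items),
    }

if __name__ == "__main__":
    # A stub "model" so the harness runs end to end; swap in a real API call.
    stub = {"2 + 2 = ?": "4", "Capital of France?": "Paris"}.get
    report = evaluate(lambda p: stub(p, ""))
    print(json.dumps(report, indent=2))
```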
Researchers from MIT and the University of Washington have developed a computational model to predict human behavior while taking into account the suboptimal decisions humans often make due to computational constraints. The researchers believe such a model could help AI systems anticipate and counterbalance human-derived errors, enhancing the efficacy of AI-human collaboration.
Suboptimal decision-making is characteristic…
Researchers at MIT and the University of Washington have successfully developed a model that can infer an agent's computational constraints from observing a few samples of their past actions. The findings could potentially enhance the ability of AI systems to collaborate more effectively with humans. The scientists found that human decision-making often deviates from optimal,…
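As a rough illustration of this idea (and not the authors' exact formulation), the sketch below infers an agent's planning depth from a handful of observed actions: each candidate "budget" is a depth-limited lookahead planner, and the depth that best explains the observed choices under a softmax choice model is selected. The chain environment, reward values, and depth range are all assumptions made for the example.

```python
# Illustrative sketch (not the authors' exact method): infer an agent's
# planning depth ("computational budget") from a few observed actions by
# maximum likelihood under a softmax choice model over depth-limited
# lookahead values. Environment: a 1-D chain with a small reward nearby
# (state 0) and a large reward far away (state 10).
import math

N_STATES = 11
REWARD = {0: 1.0, 10: 10.0}
GAMMA = 0.9
ACTIONS = (-1, +1)  # move left, move right


def lookahead_value(state: int, depth: int) -> float:
    """Finite-horizon value of `state` when planning only `depth` steps ahead."""
    r = REWARD.get(state, 0.0)
    if depth == 0:
        return r
    return r + GAMMA * max(
        lookahead_value(min(max(state + a, 0), N_STATES - 1), depth - 1)
        for a in ACTIONS
    )


def action_log_likelihood(state: int, action: int, depth: int) -> float:
    """Log-probability of `action` under a softmax over depth-limited values."""
    qs = [lookahead_value(min(max(state + a, 0), N_STATES - 1), depth - 1)
          for a in ACTIONS]
    log_z = math.log(sum(math.exp(q) for q in qs))
    return qs[ACTIONS.index(action)] - log_z


# Observed behavior: from state 4 the agent heads for the small nearby reward,
# which an unbounded planner would not do -- evidence of a limited budget.
observed = [(4, -1), (3, -1), (2, -1), (1, -1)]

scores = {
    depth: sum(action_log_likelihood(s, a, depth) for s, a in observed)
    for depth in range(1, 9)
}
best = max(scores, key=scores.get)
print(f"inferred planning depth: {best}")
```

In this toy setup the likelihood peaks at a shallow depth, because a deeper planner would have headed toward the distant, larger reward instead.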
A team of researchers from MIT and the MIT-IBM Watson AI Lab has developed a machine-learning accelerator that is resistant to the two most common types of cyberattacks. This ensures that sensitive information such as financial and health records remains private while still enabling large AI models to run efficiently on devices.
The researchers targeted…
Julie Shah, a distinguished scholar and academic thought-leader, is set to assume the role of head of the Department of Aeronautics and Astronautics (AeroAstro) at Massachusetts Institute of Technology (MIT), effective May 1. As affirmed by Anantha Chandrakasan, MIT’s chief innovation and strategy officer, Shah has a remarkable record of interdisciplinary leadership and visionary contributions…
Amazon SageMaker has introduced a new capability that can help reduce the time it takes for the generative artificial intelligence (AI) models it hosts to scale automatically. This enhancement improves the responsiveness of AI applications when demand is volatile. The emergence of foundation models (FMs) and large language models (LLMs) has brought…
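For context on what automatic scaling means here, the sketch below shows the standard way a SageMaker real-time endpoint variant is wired to Application Auto Scaling with a target-tracking policy via boto3. The endpoint and variant names are placeholders and the capacity bounds and target value are illustrative; the new capability described in the article concerns shortening detection and scale-out time, which this generic setup does not itself demonstrate.

```python
# Sketch: attach a target-tracking scaling policy to a SageMaker endpoint
# variant using Application Auto Scaling (boto3). "my-llm-endpoint" and
# "AllTraffic" are placeholder names; capacity bounds and the target value
# are illustrative, not recommendations.
import boto3

autoscaling = boto3.client("application-autoscaling")

resource_id = "endpoint/my-llm-endpoint/variant/AllTraffic"

# Register the endpoint variant as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=1,
    MaxCapacity=4,
)

# Scale on average invocations per instance, tracking a target value.
autoscaling.put_scaling_policy(
    PolicyName="llm-invocations-target-tracking",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 50.0,  # invocations per instance per minute (illustrative)
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 60,
    },
)
```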