Alibaba's AI research division continues to establish a strong presence in the field of large language models (LLMs) with its new Qwen1.5-32B model, which features 32 billion parameters and a 32,000-token (32k) context window. This latest addition to the Qwen series reflects Alibaba's effort to balance high performance with resource efficiency.
The Qwen1.5-32B has superseded…
Researchers at the Massachusetts Institute of Technology and the MIT-IBM Watson AI Lab have developed an onboarding system that trains humans on when and how to collaborate with artificial intelligence (AI). The fully automated system learns to customize the onboarding process to the task at hand, making it usable across a variety of scenarios where AI…
A group of scholars and leaders from MIT has developed policy briefs to establish a governance framework for artificial intelligence (AI). The briefs are intended to assist U.S. policymakers, sustain the country's leadership position in AI, limit potential risks from new technologies, and explore how AI can benefit society.
The primary policy paper, "A Framework for…
Over 2,000 years after Euclid's groundbreaking work in geometry, MIT associate professor Justin Solomon is using the ancient principles in fresh, modern ways. Solomon's work in the Geometric Data Processing Group applies geometry to solve a variety of problems, from comparing datasets in machine learning to enhancing generative AI models. His work assumes a variety…
Researchers at MIT and the Chinese University of Hong Kong have developed a machine learning tool to emulate photolithography manufacturing processes. Photolithography is commonly used in the production of computer chips and optical devices, manipulating light to etch features onto surfaces. Variations in the manufacturing process can cause the end products to deviate from their…
A study from MIT suggests that machine-learning models can help design more effective hearing aids, cochlear implants, and brain-machine interfaces by mimicking the human auditory system. The study was based on deep neural networks which, when trained on auditory tasks, develop internal representations similar to those generated in the human brain when processing…
Researchers from New York University, the ELLIS Institute, and the University of Maryland have developed a model, known as Contrastive Style Descriptors (CSD), that enables a more nuanced understanding of artistic styles in digital art. The aim is to determine whether generative models like Stable Diffusion and DALL-E are merely replicating existing…
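The core idea behind style descriptors of this kind can be illustrated with a toy sketch: represent each image's style as an embedding vector and compare two images by cosine similarity. The vectors and the similarity threshold below are made-up placeholders for illustration, not CSD's actual features or scoring.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two style-descriptor vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical style embeddings for a generated image and a reference artwork.
generated = np.array([0.9, 0.1, 0.4])
reference = np.array([0.8, 0.2, 0.5])

similarity = cosine_similarity(generated, reference)
# A high similarity would suggest the generator is imitating the reference's style.
print(f"style similarity: {similarity:.3f}")
```

In a real system the embeddings would come from a network trained contrastively so that images sharing a style land close together, while the comparison step stays this simple.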
Machine learning researchers have developed a cost-effective reward mechanism to help improve how language models interact with video data. The technique involves using detailed video captions to measure the quality of responses produced by video language models. These captions serve as proxies for actual video frames, allowing language models to evaluate the factual accuracy of…
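The caption-as-proxy idea can be sketched as a simple scoring loop: rather than showing a judge the raw video frames, we give it a detailed caption and score a model's response against that caption. The word-overlap scorer below is a crude, hypothetical stand-in for the LLM-based judging the article describes; all names and strings are illustrative.

```python
def caption_proxy_reward(caption: str, response: str) -> float:
    """Score a video-LM response against a detailed caption that stands in
    for the raw frames: the fraction of the response's content words (longer
    than 3 characters) that appear in the caption."""
    caption_words = set(caption.lower().split())
    content_words = [w for w in response.lower().split() if len(w) > 3]
    if not content_words:
        return 0.0
    supported = sum(1 for w in content_words if w in caption_words)
    return supported / len(content_words)

caption = "a chef chops onions on a wooden board then stirs them into a pan"
faithful = "the chef chops onions and stirs them into a pan"
hallucinated = "a dog catches a frisbee in a park"

print(caption_proxy_reward(caption, faithful))      # high reward
print(caption_proxy_reward(caption, hallucinated))  # low reward
```

A real implementation would replace the overlap heuristic with a language model comparing the response to the caption, but the structure — caption in place of frames, scalar reward out — is the same.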
Weco AI, a company in the artificial intelligence (AI) industry, recently launched AIDE, an AI agent that can handle data science tasks as efficiently as a human. In a notable result, AIDE performed at a human level in competitions on Kaggle, an established platform for testing the abilities of…
The increasingly sophisticated language models of today need vast quantities of text data for pretraining, often on the order of trillions of words. This poses a considerable problem for smaller languages that lack the necessary resources. To tackle this issue, researchers from the TurkuNLP Group, the University of Turku, Silo AI, the University of Helsinki,…
Researchers at King's College London have conducted a study that delves into the theoretical understanding of transformer architectures, such as the model used in ChatGPT. Their goal is to explain why this type of architecture is so successful in natural language processing tasks.
While transformer architectures are widely used, their functional mechanisms are yet to…
Large language models (LLMs) have received much acclaim for their ability to understand and process human language. However, these models tend to struggle with mathematical reasoning, a skill that requires a combination of logic and numeric understanding. This shortcoming has sparked interest in researching and developing methods to improve LLMs' mathematical abilities without downgrading their…