The Theory of Inventive Problem Solving (TRIZ) is a widely recognized ideation method that draws on knowledge distilled from a large, continually growing patent database to systematically invent and solve engineering problems. TRIZ is increasingly incorporating machine learning and natural language processing to enhance its reasoning process.
Now, researchers from both the Singapore…
Large Language Models (LLMs), such as OpenAI's GPT-4 and GPT-3.5, offer robust conversational abilities and can integrate with external interfaces, making them promising for task automation and support across business applications. The challenge, however, lies in striking a balance between performance and cost. While GPT-4 offers high quality, it struggles with issues…
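One common pattern for balancing quality against cost is a cascade that tries a cheaper model first and escalates to a stronger one only when needed. The sketch below is purely illustrative; the model wrappers, confidence score, and threshold are assumptions, not a method described in the article:

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    confidence: float  # score in [0, 1], e.g. from a verifier or log-prob heuristic

# Stub backends standing in for a cheap and a strong model call; in practice
# these would wrap real API clients. Names and scores are illustrative only.
def cheap_model(prompt: str) -> Answer:
    return Answer(text=f"[cheap] answer to: {prompt}", confidence=0.6)

def strong_model(prompt: str) -> Answer:
    return Answer(text=f"[strong] answer to: {prompt}", confidence=0.95)

def cascade(prompt: str, threshold: float = 0.8) -> Answer:
    """Try the cheap model first; escalate only when its confidence is low."""
    first = cheap_model(prompt)
    if first.confidence >= threshold:
        return first                 # good enough: skip the expensive call
    return strong_model(prompt)      # escalate for hard queries

print(cascade("Summarize this support ticket.").text)
```

The threshold is the knob that trades cost for quality: raising it routes more traffic to the expensive model.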
Artificial intelligence, particularly large language models (LLMs), faces the critical challenge of balancing model performance against practical constraints such as privacy, cost, and device compatibility. Large cloud-based models that offer high accuracy rely on constant internet connectivity, raising potential privacy risks and high costs. Deploying these models on edge devices introduces further challenges in…
In the dynamic environment of artificial intelligence (AI), businesses face the constant challenge of managing immense quantities of unstructured data. To this end, the pioneering open-source AI project RAGFlow is set to redefine how organizations derive insights and respond to complex inquiries with remarkable truthfulness and accuracy.
RAGFlow is an avant-garde engine that…
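As a rough illustration of the retrieval-augmented generation (RAG) pattern that engines like RAGFlow build on, here is a toy retrieve-then-prompt loop. The bag-of-letters embedding and in-memory index are illustrative stand-ins, not RAGFlow's actual API:

```python
import math

def embed(text: str) -> list[float]:
    # Toy bag-of-letters embedding; a real engine would use a learned model.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isascii() and ch.isalpha():
            vec[ord(ch) - 97] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

class VectorIndex:
    """Minimal in-memory vector store: add chunks, retrieve the top-k."""
    def __init__(self) -> None:
        self.items: list[tuple[list[float], str]] = []

    def add(self, chunk: str) -> None:
        self.items.append((embed(chunk), chunk))

    def search(self, query: str, k: int = 2) -> list[str]:
        qv = embed(query)
        ranked = sorted(self.items, key=lambda item: -cosine(item[0], qv))
        return [chunk for _, chunk in ranked[:k]]

index = VectorIndex()
index.add("Invoices are archived for seven years.")
index.add("Support tickets close after 30 days of inactivity.")

question = "How long are invoices kept?"
context = "\n".join(index.search(question))
prompt = f"Answer from the context only.\n\nContext:\n{context}\nQ: {question}"
print(prompt)  # this grounded prompt would then go to an LLM for the final answer
```

A production engine replaces the toy embedding with a learned model, adds document parsing, chunking, and reranking, and sends the assembled prompt to an LLM, which is what grounds answers in source documents and curbs hallucination.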
Transformers have revolutionized Natural Language Processing (NLP) through Large Language Models (LLMs) such as OpenAI's GPT series, BERT, and the Claude series. The transformer architecture introduced a new way of building models designed to understand and accurately generate human language.
The transformer model was introduced in 2017 in a research paper titled "Attention…
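The core operation that paper introduced is scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V, which lets each token weigh every other token when building its representation. A minimal single-head NumPy sketch (a teaching illustration, not an optimized implementation):

```python
import numpy as np

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Compute softmax(Q K^T / sqrt(d_k)) V for a single attention head."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # token-to-token similarity
    scores -= scores.max(axis=-1, keepdims=True)  # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ V                            # weighted mix of value vectors

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))  # 5 tokens, model width 8
out = attention(x, x, x)     # self-attention: queries, keys, values all from x
print(out.shape)             # (5, 8)
```

Full models wrap this in learned Q/K/V projections, multiple heads, residual connections, and feed-forward layers.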
The transformer model has become a crucial technical component in AI, transforming areas such as language processing and machine translation. Despite its success, a common criticism is that it uniformly assigns computational resources across an input sequence, ignoring that different parts of a sequence place different computational demands. This simplified…
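One way such non-uniform allocation can be realized, offered here only as a hedged sketch of the general idea rather than the specific method under discussion, is learned top-k routing: a small router scores each token, only the k highest-scoring tokens pass through the expensive block, and the rest skip it along the residual connection.

```python
import numpy as np

rng = np.random.default_rng(1)
d_model = 16
W = rng.normal(size=(d_model, d_model)) / np.sqrt(d_model)

def heavy_block(x: np.ndarray) -> np.ndarray:
    # Stand-in for an expensive transformer block (attention + MLP).
    return np.tanh(x @ W)

def routed_layer(tokens: np.ndarray, router_w: np.ndarray, k: int) -> np.ndarray:
    """Send only the top-k scoring tokens through the heavy block;
    the others pass through unchanged via the residual path."""
    scores = tokens @ router_w           # one scalar score per token
    top = np.argsort(scores)[-k:]        # indices of the k tokens to process
    out = tokens.copy()                  # skip path for unselected tokens
    out[top] = tokens[top] + heavy_block(tokens[top])  # residual update
    return out

tokens = rng.normal(size=(10, d_model))  # a sequence of 10 tokens
router_w = rng.normal(size=d_model)
y = routed_layer(tokens, router_w, k=4)  # only 4 of 10 tokens get full compute
print(y.shape)                           # (10, 16)
```

With k = 4 of 10 tokens, the block runs on roughly 40% of the tokens per layer, which is the source of the compute savings.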
Alibaba's AI research division continues to establish a strong presence in the field of large language models (LLMs) with its new Qwen1.5-32B model, which features 32 billion parameters and a 32k-token context window. This latest addition to the Qwen series epitomizes Alibaba's commitment to balancing high performance with resource efficiency.
The Qwen1.5-32B has superseded…
Researchers at the Massachusetts Institute of Technology and the MIT-IBM Watson AI Lab have developed an onboarding system that trains humans on when and how to collaborate with Artificial Intelligence (AI). The fully automated system learns to customize the onboarding process to the task at hand, making it usable across a variety of scenarios where AI…
A group of scholars and leaders from MIT has developed policy briefs to establish a governance framework for artificial intelligence (AI). The briefs are intended to assist U.S. policymakers, sustain the country's leadership position in AI, limit potential risks from new technologies, and explore how AI can benefit society.
The primary policy paper, "A Framework for…
Over 2,000 years after Euclid's groundbreaking work in geometry, MIT associate professor Justin Solomon is using the ancient principles in fresh, modern ways. Solomon's work in the Geometric Data Processing Group applies geometry to solve a variety of problems, from comparing datasets in machine learning to enhancing generative AI models. His work assumes a variety…
Researchers at MIT and the Chinese University of Hong Kong have developed a machine learning tool to emulate photolithography manufacturing processes. Photolithography is commonly used in the production of computer chips and optical devices, manipulating light to etch features onto surfaces. Variations in the manufacturing process can cause the end products to deviate from their…
A study from MIT has suggested that machine-learning computational models can help design more effective hearing aids, cochlear implants, and brain-machine interfaces by mimicking the human auditory system. The study was based on deep neural networks, which, when trained on auditory tasks, create internal representations similar to those generated in the human brain when processing…