
Language Model

AURORA-M: A global, open-source AI model with 15 billion parameters, trained on several languages, including English, Finnish, Hindi, Japanese, and Vietnamese, as well as code.

The impressive advancements in artificial intelligence, specifically in Large Language Models (LLMs), have made them a vital tool in many applications. However, the high computational cost of training these models has limited their accessibility, stifling wider development. There have been several open-source resources attempting to…

Read More

AI researchers at Google propose a training approach called Noise-Aware Training (NAT) for language models that understand layouts.

Visually rich documents (VRDs) such as invoices, utility bills, and insurance quotes present unique challenges for information extraction (IE). Their varied layouts and formats, combining textual and visual properties, require complex, resource-intensive solutions. Many existing strategies rely on supervised learning, which necessitates a vast pool of human-labeled training samples. This not…

Read More

LASP: A Streamlined Machine Learning Technique Specifically Designed for Linear Attention-Based Language Models

Researchers from the Shanghai AI Laboratory and TapTap have developed a Linear Attention Sequence Parallel (LASP) technique that optimizes sequence parallelism for linear transformers, side-stepping the limitations imposed by the memory capacity of a single GPU. Large language models, due to their significant size and long sequences, can place a considerable strain on graphics processing unit…
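For intuition, the sketch below is an illustration under stated assumptions, not the authors' implementation: it shows why causal linear attention lends itself to sequence parallelism, since the sequence can be processed chunk by chunk while carrying only a small d×d state, so devices holding different chunks would only need to exchange that state rather than full attention matrices. The names `chunked_linear_attention` and `feature_map` are hypothetical.

```python
# Illustrative sketch (not the LASP authors' code): causal linear attention
# computed chunk by chunk, carrying only a small (d x d) running state between
# chunks. This is the property sequence parallelism for linear attention relies
# on: only the state, not the whole sequence, crosses chunk boundaries.
import numpy as np

def feature_map(x):
    # ELU + 1 is a common positive feature map for linear attention.
    return np.where(x > 0, x + 1.0, np.exp(x))

def chunked_linear_attention(q, k, v, chunk_size):
    n, d = q.shape
    q, k = feature_map(q), feature_map(k)
    state = np.zeros((d, d))          # running sum of k_t v_t^T from earlier tokens
    norm = np.zeros(d)                # running sum of k_t for normalization
    out = np.zeros_like(v)
    for start in range(0, n, chunk_size):
        end = min(start + chunk_size, n)
        qc, kc, vc = q[start:end], k[start:end], v[start:end]
        for i in range(end - start):  # causal pass inside the chunk
            state = state + np.outer(kc[i], vc[i])
            norm = norm + kc[i]
            out[start + i] = qc[i] @ state / (qc[i] @ norm + 1e-6)
    return out

q, k, v = (np.random.randn(16, 8) for _ in range(3))
print(chunked_linear_attention(q, k, v, chunk_size=4).shape)  # (16, 8)
```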

Read More

IsoBench: A Benchmark Dataset for Artificial Intelligence covering Four Broad Domains: Mathematics, Science, Algorithms, and Gaming.

Large language models and multimodal foundation models like GPT4V, Claude, and Gemini, which blend visual encoders and language models, have made profound strides in Natural Language Processing (NLP) and Natural Language Generation (NLG). They show impressive performance when working with text-only inputs or a combination of image and text-based inputs. Nonetheless, queries…

Read More

SILO AI Unveils Upcoming Viking Model Family: Freely Available Language Models for Nordic Languages, English, and Programming Languages.

Artificial intelligence (AI) continues to make significant strides forward with the development of Viking, a cutting-edge language model designed to cater to Nordic languages alongside English and a range of programming languages. Developed by Silo AI, Europe's largest private AI lab, in partnership with the TurkuNLP research group at the University of Turku and HPLT,…

Read More

NAVER Cloud’s research team presents HyperCLOVA X: A Multilingual Language Model specially designed for the Korean language and culture.

The development of large language models (LLMs) has historically been English-centric. While this has often proved successful, it has struggled to capture the richness and diversity of global languages. This issue is particularly pronounced with languages such as Korean, which boasts unique linguistic structures and deep cultural contexts. Nevertheless, the field of artificial intelligence (AI)…

Read More

Scientists at Microsoft AI propose LLM-ABR: A newly developed machine learning system that uses LLMs to design adaptive bitrate (ABR) algorithms.

Large Language Models (LLMs) have become increasingly influential in many fields due to their ability to generate sophisticated text and code. Trained on extensive text databases, these models can translate user requests into code snippets, design specific functions, and even create whole projects from scratch. They have numerous applications, including generating heuristic greedy algorithms for…
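To make "ABR algorithm" concrete, here is a deliberately simple throughput- and buffer-based bitrate rule of the kind a language model might be asked to draft. It is an illustrative sketch, not the design proposed in the paper, and all names, bitrates, and thresholds are assumptions.

```python
# Illustrative sketch only: a basic adaptive bitrate (ABR) heuristic for video
# streaming. Not the algorithm from the Microsoft paper; names are hypothetical.
BITRATES_KBPS = [300, 750, 1200, 2850, 4300]  # available video renditions

def select_bitrate(throughput_kbps, buffer_s, safety=0.8, min_buffer_s=5.0):
    """Pick the highest rendition sustainable at the estimated throughput,
    backing off to the lowest one when the playback buffer is nearly empty."""
    if buffer_s < min_buffer_s:
        return BITRATES_KBPS[0]
    budget = throughput_kbps * safety          # leave headroom for estimation error
    candidates = [b for b in BITRATES_KBPS if b <= budget]
    return candidates[-1] if candidates else BITRATES_KBPS[0]

print(select_bitrate(throughput_kbps=4000, buffer_s=12))  # 2850
print(select_bitrate(throughput_kbps=4000, buffer_s=2))   # 300
```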

Read More

Scientists from Intel Labs have unveiled LLaVA-Gemma, a compact vision-language model utilizing two versions of the Gemma Large Language Model, namely Gemma-2B and Gemma-7B.

Recent advancements in large language models (LLMs) and Multimodal Foundation Models (MMFMs) have sparked a surge of interest in large multimodal models (LMMs). LLMs and MMFMs, including models such as GPT-4 and LLaVA, have demonstrated exceptional performance in vision-language tasks, including Visual Question Answering and image captioning. However, these models also require high computational resources,…
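As a rough illustration of how such vision-language models are typically wired, the sketch below follows the generic LLaVA-style recipe (an assumption for illustration, not Intel Labs' released code): features from a vision encoder are projected into the language model's embedding space and prepended to the text tokens. Every class name and dimension is a placeholder.

```python
# Toy LLaVA-style wiring sketch: vision features -> projector -> LLM embedding space.
# All modules are small stand-ins, not the actual CLIP encoder or Gemma weights.
import torch
import torch.nn as nn

class TinyVisionLanguageModel(nn.Module):
    def __init__(self, vision_dim=32, llm_dim=64, vocab=1000):
        super().__init__()
        self.projector = nn.Sequential(                # MLP connector between modalities
            nn.Linear(vision_dim, llm_dim), nn.GELU(), nn.Linear(llm_dim, llm_dim)
        )
        self.tok_embed = nn.Embedding(vocab, llm_dim)  # stand-in for the LLM's embeddings
        self.llm = nn.TransformerEncoder(              # stand-in for the decoder stack
            nn.TransformerEncoderLayer(llm_dim, nhead=4, batch_first=True), num_layers=2
        )
        self.lm_head = nn.Linear(llm_dim, vocab)

    def forward(self, image_feats, text_ids):
        # image_feats: (batch, patches, vision_dim); text_ids: (batch, seq)
        img_tokens = self.projector(image_feats)       # map image features to LLM space
        txt_tokens = self.tok_embed(text_ids)
        hidden = self.llm(torch.cat([img_tokens, txt_tokens], dim=1))
        return self.lm_head(hidden)                    # next-token logits over the vocab

model = TinyVisionLanguageModel()
logits = model(torch.randn(1, 16, 32), torch.randint(0, 1000, (1, 8)))
print(logits.shape)  # torch.Size([1, 24, 1000])
```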

Read More

Assessing AI Model Safety via Red Teaming: An In-depth Analysis of the Resilience of LLMs and MLLMs to Jailbreak Attacks, and Prospective Enhancements

Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs) are key advancements in artificial intelligence (AI) capable of generating text, interpreting images, and understanding complex multimodal inputs, mimicking human intelligence. However, concerns arise due to their potential misuse and vulnerabilities to jailbreak attacks, where malicious inputs trick the models into generating harmful or objectionable…

Read More

AutoTRIZ: A Creative AI Tool that Uses Large Language Models (LLMs) to Automate and Improve the TRIZ (Theory of Inventive Problem Solving) Approach

The Theory of Inventive Problem Solving (TRIZ) is a widely recognized method of ideation that uses the knowledge derived from a large, ongoing patent database to systematically invent and solve engineering problems. TRIZ is increasingly incorporating various aspects of machine learning and natural language processing to enhance its reasoning process. Now, researchers from both the Singapore…

Read More

Stanford University researchers have unveiled Octopus v2, a tool that enhances on-device language models for improved super agent operations.

Artificial intelligence, particularly large language models (LLMs), faces the critical challenge of balancing model performance against practical constraints such as privacy, cost, and device compatibility. Large cloud-based models offer high accuracy but rely on constant internet connectivity, raising potential issues of privacy breaches and high costs. Deploying these models on edge devices introduces further challenges in…

Read More

Google DeepMind Introduces Mixture-of-Depths: Fine-Tuning Transformer Models for Adaptable Resource Management and Improved Computation Efficiency

The transformer model has become a crucial technical component in AI, transforming areas such as language processing and machine translation. Despite its success, a common criticism is its standard method of uniformly assigning computational resources across an input sequence, failing to acknowledge the varying computational demands of different parts of a data sequence. This simplified…
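The sketch below illustrates the general idea of non-uniform, capacity-based token routing (a minimal approximation, not DeepMind's implementation): a learned router scores every token, only the top-k tokens per sequence pass through the block's expensive computation, and the rest skip it via the residual path, so per-layer compute is capped regardless of sequence length. The class and parameter names are hypothetical.

```python
# Minimal Mixture-of-Depths-style routing sketch (assumptions, not DeepMind's code).
import torch
import torch.nn as nn

class MoDBlock(nn.Module):
    def __init__(self, d_model, capacity):
        super().__init__()
        self.capacity = capacity                       # tokens processed per sequence
        self.router = nn.Linear(d_model, 1)            # per-token routing score
        self.block = nn.Sequential(                    # stand-in for attention + MLP
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )

    def forward(self, x):                              # x: (batch, seq, d_model)
        scores = self.router(x).squeeze(-1)            # (batch, seq)
        k = min(self.capacity, x.size(1))
        top_val, top_idx = scores.topk(k, dim=1)       # choose which tokens get compute
        idx = top_idx.unsqueeze(-1).expand(-1, -1, x.size(-1))
        picked = torch.gather(x, 1, idx)
        # Weight the block output by the router score so routing stays differentiable.
        updated = picked + self.block(picked) * torch.sigmoid(top_val).unsqueeze(-1)
        out = x.clone()                                # unselected tokens pass unchanged
        out.scatter_(1, idx, updated)
        return out

x = torch.randn(2, 16, 64)
print(MoDBlock(d_model=64, capacity=4)(x).shape)       # torch.Size([2, 16, 64])
```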

Read More