
Large Language Model

Introducing Tsinghua University’s GLM-4-9B-Chat-1M: A Remarkable Language Model Competing with GPT-4V, Gemini Pro (on vision tasks), Mistral, and Llama 3 8B.

Tsinghua University's Knowledge Engineering Group (KEG) has introduced GLM-4-9B, an innovative, open-source language model that surpasses models such as GPT-4 and Gemini on a range of benchmark tests. Developed by the THUDM team, GLM-4-9B marks an important advance in natural language processing. At its core, GLM-4-9B is a colossal…

Read More

The Skywork team unveils Skywork-MoE, a highly efficient Mixture-of-Experts (MoE) model boasting 146 billion parameters, 16 experts, and 22 billion activated parameters.

The advancement of natural language processing (NLP) capabilities has depended, to a large extent, on the development of large language models (LLMs). Although these models deliver high performance, they demand immense computational resources, which makes them expensive to train and difficult to scale. These challenges, therefore, create…
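The headline figures become clearer with a rough back-of-the-envelope calculation. The sketch below assumes a top-2-of-16 routing scheme (the excerpt does not state how many experts fire per token) and simply solves for the split between shared and expert parameters implied by the published 146B total and 22B activated counts; the variable names are illustrative, not taken from Skywork's release.

```python
# Back-of-the-envelope sketch (assumption: top-2 routing over 16 experts,
# so roughly 2/16 of the expert parameters run for each token).
TOTAL_PARAMS = 146e9   # published total parameter count
ACTIVE_PARAMS = 22e9   # published activated parameters per token
NUM_EXPERTS = 16
TOP_K = 2              # assumed number of experts routed per token

# total  = shared + expert_params
# active = shared + (TOP_K / NUM_EXPERTS) * expert_params
# Solving these two equations for the two unknowns:
expert_params = (TOTAL_PARAMS - ACTIVE_PARAMS) / (1 - TOP_K / NUM_EXPERTS)
shared_params = TOTAL_PARAMS - expert_params

print(f"expert parameters ≈ {expert_params / 1e9:.1f}B")  # ≈ 141.7B
print(f"shared parameters ≈ {shared_params / 1e9:.1f}B")  # ≈ 4.3B
```

Under that assumption, each token touches only the shared layers plus two expert blocks, which is what keeps per-token compute close to that of a dense ~22B model despite the 146B total.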

Read More

Snowflake Unveils Polaris Catalog: Enhancing Data Interoperability through the Integration of Open Source Apache Iceberg

Snowflake recently introduced the Polaris Catalog, a new open-source catalog for Apache Iceberg designed to boost data interoperability across multiple engines and cloud services. The release illustrates Snowflake's commitment to granting businesses more control, flexibility, and security in their data management. The data sector has grown increasingly fond of open-source file and table formats due to…
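Because Polaris speaks Apache Iceberg's open REST catalog protocol, the cross-engine interoperability described above can be pictured with a minimal Spark configuration. The snippet below is an illustrative sketch, not code from Snowflake's announcement: the catalog name, endpoint URI, credential, and table name are placeholders, and it assumes the Iceberg Spark runtime package is already on the classpath.

```python
from pyspark.sql import SparkSession

# Illustrative sketch: registering an Iceberg REST catalog (such as Polaris)
# in Spark. All names, URIs, and credentials below are placeholders.
spark = (
    SparkSession.builder.appName("iceberg-rest-demo")
    .config("spark.sql.catalog.polaris", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.polaris.type", "rest")  # Iceberg REST catalog protocol
    .config("spark.sql.catalog.polaris.uri", "https://example.com/api/catalog")
    .config("spark.sql.catalog.polaris.credential", "<client-id>:<client-secret>")
    .getOrCreate()
)

# Any engine that implements the same REST protocol sees the same tables.
spark.sql("SELECT * FROM polaris.analytics.events LIMIT 10").show()
```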

Read More

IEIT SYSTEMS introduces Yuan 2.0-M32, an upgraded bilingual Mixture-of-Experts (MoE) language model that is grounded in Yuan 2.0 and features an Attention Router.

A research team from IEIT Systems has recently developed Yuan 2.0-M32, a new model that uses a Mixture-of-Experts (MoE) architecture. It is built on the same foundation as Yuan 2.0-2B but employs 32 experts, only two of which are active at any given time, resulting in its unique…
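To make "32 experts, only two active per token" concrete, here is a minimal, generic top-2 routing layer in PyTorch. It is a sketch under stated assumptions: the class name and the plain linear gate are illustrative, and it does not reproduce Yuan 2.0-M32's Attention Router, which replaces this kind of simple gate.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoE(nn.Module):
    """Generic top-2 Mixture-of-Experts feed-forward layer (illustrative sketch)."""

    def __init__(self, d_model: int, d_hidden: int, num_experts: int = 32, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # A plain linear gate; Yuan 2.0-M32's Attention Router replaces this part.
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
             for _ in range(num_experts)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model)
        logits = self.router(x)                             # (tokens, num_experts)
        weights, indices = logits.topk(self.top_k, dim=-1)  # keep the two best-scoring experts
        weights = F.softmax(weights, dim=-1)                # renormalize over the selected pair
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e                # tokens whose slot-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Example usage: route 8 token embeddings of width 64 through the layer.
layer = Top2MoE(d_model=64, d_hidden=256)
print(layer(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```

Only the two selected experts run for each token, which is why the activated parameter count stays far below the total parameter count.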

Read More

AI RAG Solutions: Hallucination-Free or Not? Stanford University Researchers Evaluate the Reliability of AI in Legal Research and Find Persistent Challenges with Hallucinations and Accuracy

Artificial Intelligence (AI) is increasingly used in legal research and document drafting to improve efficiency and accuracy. However, concerns about the reliability of these tools persist, especially their potential to produce false or misleading information, referred to as "hallucinations". This issue is particularly pressing given the high-stakes nature of…

Read More

Steerability and Biases in LLMs: Navigating the Complexities of Persona Representation

Large Language Model (LLM) research has shifted its focus to steerability and persona fidelity in all their complexity, challenging earlier work that relied on one-dimensional personas or multiple-choice formats. It is now recognized that the intricacy of a persona can amplify biases in LLM simulations when it does not align with typical demographic views. A recent study by…

Read More