Galileo Luna is a transformative tool for evaluating language model outputs, built specifically to address the prevalence of hallucinations in large language models (LLMs). In a retrieval setting, a hallucination is a response that is not grounded in the retrieved context, a significant challenge when deploying language models in industry applications. Galileo Luna combats this issue with a purpose-built evaluation foundation model (EFM) that detects and mitigates hallucinations with high accuracy, low latency, and low cost.
Large language models have revolutionized natural language processing with their ability to produce human-like text. However, these models often produce factually incorrect information (hallucinations), undermining their reliability in critical applications such as customer support, legal advice, and biomedical research. Hallucinations are not random; they can stem from various factors, including outdated knowledge bases, errors during response generation, faulty training data, and the mishandling of new knowledge during fine-tuning.
To tackle these issues, retrieval-augmented generation (RAG) systems incorporate retrieved knowledge into a model's responses. Even with RAG in place, however, existing hallucination detection methods often neglect the trade-off between accuracy, latency, and cost, rendering them unsuitable for real-time, large-scale industry applications.
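To make the setup concrete, here is a minimal sketch of the RAG pattern that such evaluators sit on top of: retrieve supporting passages, generate an answer conditioned on them, and keep the retrieved context around so the answer can later be checked against it. The retriever and generator below are toy stand-ins (word-overlap ranking and a stub function), not any particular vendor's implementation.

```python
# Minimal RAG sketch: retrieve context, generate an answer conditioned on
# it, and retain the context so an evaluator can check groundedness.
from typing import List

DOCS = [
    "Luna is a DeBERTa-large encoder fine-tuned for hallucination detection.",
    "RAG systems prepend retrieved passages to the prompt before generation.",
]

def retrieve(query: str, docs: List[str], k: int = 1) -> List[str]:
    # Toy retriever: rank documents by word overlap with the query.
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q & set(d.lower().split())))[:k]

def generate(query: str, context: List[str]) -> str:
    # Stand-in for an LLM call; a real system would prompt a model here.
    return f"Answer based on: {context[0]}"

query = "What architecture does Luna use?"
context = retrieve(query, DOCS)
answer = generate(query, context)
print(context)
print(answer)  # the (context, answer) pair is what an evaluator scores
```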
This is where Luna, the evaluation foundation model introduced by Galileo Technologies, comes in. Luna is a 440-million-parameter DeBERTa-large encoder fine-tuned to detect hallucinations in RAG pipelines. It is trained on real-world RAG data so that it generalizes across industry domains and handles long-context RAG inputs. Galileo Luna surpasses existing models, including GPT-3.5, in performance and efficiency, combining high accuracy, low cost, and millisecond-level inference speed.
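The mechanics of an encoder-based detector are straightforward to sketch: encode the retrieved context and the model's response as a sentence pair, then classify the pair as supported or hallucinated. The snippet below illustrates that idea using the public microsoft/deberta-v3-large checkpoint as a stand-in; Luna's fine-tuned weights are not loaded here, so the freshly initialized classification head will produce arbitrary scores until trained.

```python
# Hedged sketch of encoder-based hallucination detection: score a response
# against its retrieved context with a (context, response) pair classifier.
# The base checkpoint is an assumption standing in for Luna's weights.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "microsoft/deberta-v3-large"  # stand-in, not Luna's released weights
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)
model.eval()

context = "Luna is a 440M-parameter DeBERTa-large encoder."
response = "Luna is a 70B-parameter decoder-only model."  # not grounded

# Encode context and response as a single sentence pair.
inputs = tokenizer(context, response, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
# Label order (0 = supported, 1 = hallucinated) is a convention chosen here.
print(f"P(hallucinated) = {probs[0, 1].item():.3f}")
```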
Notably, Luna delivers several breakthroughs in GenAI evaluation. First, it is 18% more accurate than GPT-3.5 at detecting hallucinations in RAG-based systems, setting a new benchmark for evaluation accuracy. It also reduces evaluation costs by 97% and runs 11 times faster than GPT-3.5, producing evaluations in milliseconds. Luna detects hallucinations without requiring a ground-truth test set, and because it runs in-house rather than through an external API, it preserves security and data privacy. Finally, Luna is customizable: ultra-high-accuracy custom evaluation models can be produced within minutes.
In terms of performance and cost-efficiency, Luna achieves a 97% cost reduction and a 91% latency reduction relative to GPT-3.5-based evaluation (an 11x speedup corresponds to roughly a 91% drop in latency). Luna's lightweight architecture allows local GPU deployment, ensuring data privacy and security, a significant advantage over API-based third-party solutions. Its ability to process thousands of tokens in milliseconds makes it suitable for real-time applications like customer support and chatbots.
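For readers who want to sanity-check millisecond-level claims on their own hardware, the sketch below times batched encoder inference locally. The model name and batch size are illustrative assumptions; real numbers depend entirely on your GPU, sequence lengths, and batching strategy.

```python
# Hedged sketch of a local latency check for an encoder-style evaluator.
# Model and batch size are assumptions; results vary with hardware.
import time
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "microsoft/deberta-v3-large"  # stand-in for a Luna-style encoder
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL).to(device).eval()

pairs = [("retrieved context ...", "model response ...")] * 32  # one batch
enc = tokenizer([c for c, _ in pairs], [r for _, r in pairs],
                padding=True, truncation=True, return_tensors="pt").to(device)

with torch.no_grad():
    model(**enc)  # warm-up pass (kernel init, weight caching)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    model(**enc)
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

print(f"{elapsed / len(pairs) * 1000:.1f} ms per example")
```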
Luna can also be customized to meet specific industry needs, offering a high degree of flexibility. For instance, in the pharmaceutical sector, where hallucinations can have serious consequences, Luna can be tuned to detect particular hallucination classes with over 95% accuracy. Beyond hallucination detection, Luna supports a range of evaluation tasks, including context adherence, chunk utilization, context relevance, and security checks.
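One natural way to serve several such evaluation tasks from a single model is a shared encoder with a lightweight head per task. The sketch below wires the task names from this article onto that design; it is an illustrative architecture under stated assumptions, not Galileo's published implementation, and the backbone choice is hypothetical.

```python
# Illustrative multi-task evaluator: one shared encoder, one binary head
# per evaluation task. Task names come from the article; the architecture
# and backbone are assumptions, not Galileo's published design.
import torch.nn as nn
from transformers import AutoModel

class MultiTaskEvaluator(nn.Module):
    def __init__(self, backbone: str = "microsoft/deberta-v3-large"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(backbone)
        hidden = self.encoder.config.hidden_size
        # One classification head per evaluation task.
        self.heads = nn.ModuleDict({
            "context_adherence": nn.Linear(hidden, 2),
            "context_relevance": nn.Linear(hidden, 2),
            "chunk_utilization": nn.Linear(hidden, 2),
        })

    def forward(self, **inputs):
        # Pool the [CLS] position and score every task from the same encoding.
        h = self.encoder(**inputs).last_hidden_state[:, 0]
        return {task: head(h) for task, head in self.heads.items()}
```

In a design like this, adding a new evaluation task is a one-line change to the head dictionary, while the expensive encoder forward pass is shared across all tasks, which is one way the cost and latency figures above become plausible.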
In conclusion, Galileo Luna marks a new era in evaluation for large language model systems. It addresses the key issue of hallucinations in LLMs, providing a reliable, cost-efficient, and highly responsive evaluation tool for AI-driven applications. In doing so, Luna lays the groundwork for more robust and dependable language models across industry applications.