
OuteAI Introduces New Lite-Oute-1 Models: Lite-Oute-1-300M and Lite-Oute-1-65M, Compact Yet Capable Language Models

OuteAI has released two new models in its Lite series: Lite-Oute-1-300M and Lite-Oute-1-65M. Both are designed to balance efficiency and performance, making them suitable for deployment across a range of devices. The Lite-Oute-1-300M model is based on the Mistral architecture and features 300 million parameters, while the Lite-Oute-1-65M, based on the LLaMA architecture, contains approximately 65 million parameters.

The 300M model scales up OuteAI's earlier releases and was trained on a more refined dataset, improving context retention and coherence. It was trained on 30 billion tokens with a context length of 4096, giving it solid language-processing capabilities. It is available in several versions: Lite-Oute-1-300M-Instruct, Lite-Oute-1-300M-Instruct-GGUF, Lite-Oute-1-300M (Base), and Lite-Oute-1-300M-GGUF.

Lite-Oute-1-300M’s performance was benchmarked across multiple tasks, including ARC Challenge, ARC Easy, CommonsenseQA, HellaSWAG, MMLU, OpenBookQA, PIQA, and Winogrande, showcasing improved problem-solving abilities. The 300M model can be run easily with Hugging Face’s transformers library in Python, as sketched below.
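
Since the models are distributed in standard Hugging Face format, a minimal sketch of loading the base 300M model might look like the following. The repository ID OuteAI/Lite-Oute-1-300M is assumed from the variant names above, and the prompt and sampling parameters are purely illustrative; check the OuteAI page on the Hugging Face Hub for the exact identifiers.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository ID, inferred from the variant names in the article.
model_id = "OuteAI/Lite-Oute-1-300M"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).to(device)

# Plain text-completion prompt for the base (non-instruct) checkpoint.
prompt = "Large language models are"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

# Generate up to 128 new tokens; the model supports a 4096-token context.
output_ids = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.4,
    repetition_penalty=1.12,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```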

Meanwhile, the Lite-Oute-1-65M model is OuteAI’s exploration into ultra-compact models. While it displays basic language understanding and text generation capabilities, the 65M model has acknowledged limitations in following instructions and maintaining topic coherence due to its small size. It is available in the versions Lite-Oute-1-65M-Instruct, Lite-Oute-1-65M-Instruct-GGUF, Lite-Oute-1-65M (Base), and Lite-Oute-1-65M-GGUF.
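
A hypothetical usage sketch for the instruct variant follows the same pattern. Here both the repository ID OuteAI/Lite-Oute-1-65M-Instruct and the presence of a chat template in the tokenizer are assumptions based on the variant names listed above, not confirmed details.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository ID for the instruct variant; verify on the Hugging Face Hub.
model_id = "OuteAI/Lite-Oute-1-65M-Instruct"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).to(device)

# Assumes the instruct checkpoint ships a chat template in its tokenizer config;
# if it does not, fall back to a plain prompt as in the 300M example above.
messages = [{"role": "user", "content": "Describe a compact language model in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(device)

# Keep expectations modest: a 65M-parameter model handles short, simple prompts best.
output_ids = model.generate(input_ids, max_new_tokens=64, do_sample=True, temperature=0.3)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```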

Both models were trained on NVIDIA RTX 4090 hardware, the 300M model on 30 billion tokens and the 65M model on 8 billion tokens. For OuteAI, this release represents an effort to strike a balance between size, capability, and efficiency, broadening the range of devices the models can run on while improving performance.
