
Salesforce is pushing the boundaries of AI with its compact but powerful xLAM-1B and 7B models.

Enterprise software giant Salesforce has introduced two compact AI models, the 1-billion and 7-billion parameter xLAM models, which challenge the prevalent “bigger is better” approach in artificial intelligence. Despite their smaller size, these models surpass many far larger models on function-calling tasks, in which an AI interprets a natural language request and translates it into a specific function call or API request. Examples of such tasks include query processing in travel booking, real-time language translation, and customer relationship management (CRM).
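To make the idea concrete, the sketch below shows what a function-calling exchange can look like: the model is given a tool description and a user request, and returns a structured call for the application to execute. The `book_flight` tool, its parameters, and the output format here are hypothetical and purely illustrative; they are not Salesforce's or xLAM's actual interface.

```python
# Illustrative only: a hypothetical tool schema and model output, not the xLAM API.
import json

# A tool the model is allowed to call, described to it as a schema.
tools = [{
    "name": "book_flight",
    "description": "Search for and book a flight for the user.",
    "parameters": {
        "type": "object",
        "properties": {
            "origin": {"type": "string"},
            "destination": {"type": "string"},
            "date": {"type": "string", "format": "date"},
        },
        "required": ["origin", "destination", "date"],
    },
}]

user_request = "Book me a flight from Boston to Denver on August 12."

# Instead of replying in prose, a function-calling model emits a structured
# call that the application can then execute against the real booking API.
model_output = {
    "name": "book_flight",
    "arguments": {"origin": "BOS", "destination": "DEN", "date": "2025-08-12"},
}

print(json.dumps(model_output, indent=2))
```

The point of the format is that the model's output is machine-actionable: the application can run the call directly rather than parsing free-form text.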

The Salesforce team reports that these compact models, trained on its carefully curated datasets, reach state-of-the-art performance on the Berkeley Function-Calling Benchmark, even outperforming several GPT-4 variants. The 1-billion parameter model (xLAM-1B), nicknamed the “Tiny Giant,” performs exceptionally well, surpassing Claude-3 Haiku and GPT-3.5-Turbo. The 7-billion parameter model (xLAM-7B) ranked 6th on the Function-Calling Leaderboard, beating GPT-4 and Gemini-1.5-Pro.

These successes become even more significant when the model sizes are compared with those of their competitors: xLAM-1B has 1 billion parameters and xLAM-7B has 7 billion, whereas GPT-3 has 175 billion and GPT-4 is estimated at 1.7 trillion. Claude-3 Opus and Gemini Ultra have undisclosed parameter counts, likely in the hundreds of billions. The results underscore that a well-designed model trained on high-quality data can outperform sheer scale.

To create training data for function-calling tasks, Salesforce built an automated data-generation pipeline named APIGen. The pipeline produces diverse, high-quality function-calling datasets by sampling from a vast library of 3,673 executable APIs spanning 21 categories, giving the model a wealth of realistic scenarios to learn from.
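The sketch below captures only the general shape of such a pipeline as summarized above: sample an API from the library, draft a natural language query together with the matching call, and keep the pair only if it passes validation. The toy API library, the mock generator, and the validation rule are assumptions for illustration, not Salesforce's APIGen implementation.

```python
# A loose sketch of an APIGen-style data pipeline. Helper names, the mock
# generator, and the validation check are illustrative assumptions only.
import json
import random

# Toy "library" standing in for the thousands of executable APIs.
API_LIBRARY = [
    {"name": "get_weather", "category": "weather",
     "parameters": {"city": "string", "unit": "string"}},
    {"name": "convert_currency", "category": "finance",
     "parameters": {"amount": "number", "from": "string", "to": "string"}},
]

def generate_example(api: dict) -> dict:
    """Stand-in for a generator (e.g. an LLM) that writes a user query plus
    the function call that answers it, given the sampled API spec."""
    if api["name"] == "get_weather":
        return {"query": "What's the weather in Paris in Celsius?",
                "call": {"name": "get_weather",
                         "arguments": {"city": "Paris", "unit": "celsius"}}}
    return {"query": "Convert 100 USD to EUR.",
            "call": {"name": "convert_currency",
                     "arguments": {"amount": 100, "from": "USD", "to": "EUR"}}}

def is_valid(example: dict, api: dict) -> bool:
    """Keep only examples whose call names the sampled API and supplies
    exactly the declared parameters (a stand-in for richer verification)."""
    call = example["call"]
    return (call["name"] == api["name"]
            and set(call["arguments"]) == set(api["parameters"]))

def build_dataset(n_examples: int) -> list[dict]:
    dataset = []
    while len(dataset) < n_examples:
        api = random.choice(API_LIBRARY)     # sample from the API library
        example = generate_example(api)      # draft a query/call pair
        if is_valid(example, api):           # filter out malformed pairs
            dataset.append(example)
    return dataset

print(json.dumps(build_dataset(2), indent=2))
```

Filtering generated pairs before they enter the training set is what keeps such a dataset high-quality despite being produced automatically at scale.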

The potential applications of the xLAM models are broad. Salesforce can use them to enhance its own CRM systems. Beyond that, the models can add advanced features to digital assistants, create more intelligent interfaces for smart home devices, streamline AI processing for autonomous vehicles, and enable real-time language translation on edge devices.

These models encourage revisiting and refining conventional AI training approaches and architectures. Salesforce CEO Marc Benioff says the impressive performance of the “Tiny Giant” showcases the potential of “on-device agentic AI”: models compact and efficient enough to run directly on smartphones and Internet of Things (IoT) devices, without connecting to larger, more powerful computing resources elsewhere.

Although Salesforce’s work challenges the trend toward ever-larger AI models, the future of AI will likely include both big and small ones. What matters more is designing smarter, more efficient models that can bring sophisticated capabilities to a broader range of devices and applications. Salesforce’s success with xLAM-1B and xLAM-7B suggests that size will not remain the only, or even the primary, measure of an AI model’s capability.
