
The Jamba-Instruct model from AI21 Labs is now available in Amazon Bedrock.

AI21 Labs has made its Jamba-Instruct large language model (LLM) available in Amazon Bedrock. Jamba-Instruct supports a 256,000-token context window, making it well suited to processing large documents and powering complex Retrieval Augmented Generation (RAG) applications.

This model is the instruction-tuned version of the Jamba base model. It combines Structured State Space model (SSM) technology with Transformer architecture, delivering performance and capabilities beyond AI21's earlier models. This hybrid SSM approach gives Jamba-Instruct the longest context window in its size class, making it well suited to processing large volumes of information without sacrificing performance.

To use Jamba-Instruct in Amazon Bedrock, first open the Amazon Bedrock console, choose to modify model access, and select the AI21 Labs models you want to use.
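Once access is granted, you can also check which AI21 Labs models are enabled programmatically. The sketch below uses the Bedrock control-plane API via Boto3; the filtering helper and the `byProvider` value are illustrative assumptions, so verify the provider name returned in your Region.

```python
# Sketch: find AI21 Labs foundation models available in Amazon Bedrock.
# The provider name "AI21 Labs" is an assumption; confirm it against the
# values your account actually returns.

def ai21_model_ids(model_summaries):
    """Return the model IDs whose provider is AI21 Labs."""
    return [
        s["modelId"]
        for s in model_summaries
        if s.get("providerName") == "AI21 Labs"
    ]

def list_ai21_models(region="us-east-1"):
    """Call the Bedrock control-plane API (requires AWS credentials)."""
    import boto3  # assumes boto3 is installed and configured

    bedrock = boto3.client("bedrock", region_name=region)
    response = bedrock.list_foundation_models(byProvider="AI21 Labs")
    return ai21_model_ids(response["modelSummaries"])
```

Calling `list_ai21_models()` from an environment with valid AWS credentials returns the enabled AI21 model IDs for that Region.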

Jamba-Instruct is particularly useful for complex RAG operations and for analyzing long or critical documents. For example, it can find contradictions between documents or interpret one document in light of another. It also supports query augmentation, in which an original query is rewritten into related queries to improve retrieval in RAG applications. Beyond that, Jamba-Instruct handles standard LLM tasks such as entity extraction and summarization.
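As a concrete illustration of query augmentation, the sketch below builds a prompt asking the model to rewrite a user query into several related queries, then parses a line-separated reply. The prompt wording and the one-query-per-line output format are assumptions for illustration, not a prescribed AI21 format.

```python
# Sketch of query augmentation for RAG: expand one query into related
# queries. The prompt template and the one-query-per-line reply format
# are illustrative assumptions, not part of the Jamba-Instruct API.

def build_augmentation_prompt(query: str, n: int = 3) -> str:
    """Prompt asking the model for n reformulations of the original query."""
    return (
        f"Rewrite the following search query into {n} related queries, "
        f"one per line, that would help retrieve relevant documents.\n"
        f"Query: {query}"
    )

def parse_queries(model_reply: str) -> list[str]:
    """Split a line-separated model reply into individual queries."""
    return [line.strip() for line in model_reply.splitlines() if line.strip()]
```

Each parsed query can then be sent to the retriever separately, and the combined results passed to the model as RAG context.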

You can invoke the model directly through the Amazon Bedrock API using the AWS SDK for Python (Boto3); see the quickstart documentation for more guidance.
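The following is a minimal sketch of such an invocation with Boto3. The model ID `ai21.jamba-instruct-v1:0` and the chat-style request and response shapes reflect the Bedrock documentation at the time of writing, so verify both against the current quickstart before relying on them.

```python
import json

# Sketch: invoke Jamba-Instruct through the Bedrock runtime API with Boto3.
# The model ID and request/response body shapes are assumptions based on
# the Bedrock documentation at the time of writing.

MODEL_ID = "ai21.jamba-instruct-v1:0"

def build_request_body(prompt: str, max_tokens: int = 256) -> str:
    """Serialize a chat-style request body for Jamba-Instruct."""
    return json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.7,
    })

def invoke_jamba(prompt: str, region: str = "us-east-1") -> str:
    """Call Amazon Bedrock (requires AWS credentials and model access)."""
    import boto3  # assumes boto3 is installed and configured

    runtime = boto3.client("bedrock-runtime", region_name=region)
    response = runtime.invoke_model(
        modelId=MODEL_ID,
        body=build_request_body(prompt),
    )
    payload = json.loads(response["body"].read())
    return payload["choices"][0]["message"]["content"]
```

With credentials and model access in place, `invoke_jamba("Summarize this document: ...")` returns the model's text reply.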

Jamba-Instruct is especially advantageous in applications that require a long context window, such as answering questions grounded in lengthy documents or summarizing them. Thanks to its novel SSM/Transformer hybrid architecture, the model delivers higher throughput, producing three times more tokens per second than comparably sized models at context lengths beyond 128,000 tokens.

Jamba-Instruct is currently available in Amazon Bedrock in the US East (N. Virginia) AWS Region, with on-demand pricing. For more details, see the resources linked in this piece, or visit the Amazon Bedrock console to get started with the model.

About the authors: Joshua Broyde, Ph.D., is a Principal Solution Architect at AI21 Labs. He works with customers and AI21 partners to implement generative AI at the enterprise level, with a particular focus on complex LLM workflows for regulated and specialized environments.

Fernando Espigares Caballero is a Senior Partner Solutions Architect at AWS, where he builds solutions with strategic Technology Partners to deliver value to customers. He has more than 25 years of experience across IT platforms and currently focuses on generative AI, innovating and building new solutions that address specific customer requirements.
