
Generative AI

NVIDIA GTC 2024: Accelerate progress with generative AI on AWS

AWS recently connected with over 18,000 in-person and 267,000 virtual attendees at NVIDIA GTC, a global AI conference held in March 2024 in San Jose, California. During the conference, AWS and NVIDIA celebrated their 13-year collaboration, highlighting their ongoing efforts to give developers access to cutting-edge AI technology, including AWS’s pioneering role in offering NVIDIA…

Read More

Efficient document categorization using the Amazon Titan Multimodal Embeddings model.

Businesses can automate the processing of vast quantities of documents in varied formats using intelligent document processing (IDP) solutions powered by AI. These solutions categorize documents and extract insights from them, reducing costs and errors while enabling scale. A key component of IDP systems is document categorization, which determines the next processing steps based on the document…
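Embedding-based categorization of this kind can be sketched without any specific API: given a document embedding and one prototype embedding per category (from the Titan Multimodal Embeddings model or any other embedding model), assign the document to the category with the highest cosine similarity. The toy 3-dimensional vectors and category names below are illustrative stand-ins for real model output, not actual Titan responses.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def categorize(doc_embedding, category_embeddings):
    """Return the category whose prototype embedding is closest to the document."""
    return max(category_embeddings,
               key=lambda cat: cosine_similarity(doc_embedding,
                                                 category_embeddings[cat]))

# Toy embeddings standing in for real model output.
categories = {
    "invoice":  [0.9, 0.1, 0.0],
    "contract": [0.1, 0.9, 0.1],
    "receipt":  [0.8, 0.0, 0.2],
}
doc = [0.85, 0.15, 0.05]
print(categorize(doc, categories))  # → invoice
```

In a production pipeline the prototype vectors would typically be averaged embeddings of labeled example documents per category, and similarity search would run against a vector store rather than an in-memory dictionary.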

Read More

OpenAI suggests that releasing the Voice Engine could be fraught with risks.

OpenAI has announced that it has conducted small-scale testing of its new Voice Engine technology, a product capable of cloning a human voice from a single 15-second audio sample. The resulting voice can be used to convert text inputs into natural-sounding speech with emotive and realistic qualities. However, despite the promising applications of the technology,…

Read More

Achieve DevOps excellence with BMC AMI zAdviser Enterprise and Amazon Bedrock.

Software engineering performance significantly affects the ability to build robust, stable applications. This calls for the systematic approaches typical of sound software engineering, spanning design, development, testing, and maintenance, along with the careful integration of tooling and metrics to ensure full control, visibility, and accuracy. This can be achieved through practices such…

Read More

Efficient continual pre-training of large language models for financial domains.

Large language models (LLMs), like Meta's Llama and EleutherAI's Pythia, are generally trained on broad, domain-agnostic datasets. However, recent research indicates that incorporating domain-specific datasets into the training process can significantly enhance LLM performance. This principle was demonstrated by the BloombergGPT model, whose training data comprised roughly 51% domain-specific financial documents and which outperformed traditional…
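The core data-preparation step behind this idea, mixing a domain corpus with a general corpus at a target ratio, can be sketched as below. This is a hypothetical illustration (the function name and document lists are invented), using the roughly 51% financial share reported for BloombergGPT's training data as the example ratio; real continual pre-training pipelines operate on tokenized shards, not document lists.

```python
import random

def mix_corpora(domain_docs, general_docs, domain_fraction, total, seed=0):
    """Sample a training mix with `domain_fraction` domain-specific documents.

    Sampling is with replacement, so either corpus may be smaller than
    its share of the mix.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible mix
    n_domain = round(total * domain_fraction)
    n_general = total - n_domain
    mix = (rng.choices(domain_docs, k=n_domain) +
           rng.choices(general_docs, k=n_general))
    rng.shuffle(mix)  # interleave so batches see both corpora
    return mix

financial = [f"financial_doc_{i}" for i in range(100)]
general = [f"general_doc_{i}" for i in range(100)]
batch = mix_corpora(financial, general, domain_fraction=0.51, total=1000)
print(sum(doc.startswith("financial") for doc in batch))  # → 510
```

The ratio is the main tuning knob here: too little domain data and the model barely adapts; too much and it can forget general-purpose capabilities.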

Read More

Upstage’s Solar models are now accessible through Amazon SageMaker JumpStart.

A recent blog co-authored by Hwalsuk Lee at Upstage announced that Upstage’s Solar foundation model is now available on Amazon SageMaker JumpStart. The pre-trained Solar language model offers improved performance across languages, domains, and tasks. The Solar Mini Chat and Solar Mini Chat – Quant models are now accessible via SageMaker JumpStart. Amazon SageMaker…

Read More

Gradient makes LLM benchmarking simpler and more affordable using AWS Inferentia.

Measuring the performance of large language models (LLMs) is a crucial part of the fine-tuning and pre-training stages leading up to deployment. Frequent, rapid validation of performance increases the likelihood of actually improving the model. In partnership with Gradient, a company that develops personalized LLMs, the challenge of…
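The "frequent and rapid validation" described above usually starts with a simple latency benchmark around the inference call. The sketch below is a minimal, generic harness, not Gradient's or AWS's tooling: `fake_infer` is an invented stub standing in for a real endpoint call (for example, a model served on Inferentia), and the harness reports mean and p95 latency over repeated runs.

```python
import statistics
import time

def benchmark(infer, prompts, runs=3):
    """Time each call to `infer` across several runs; return latency stats."""
    latencies = []
    for _ in range(runs):
        for prompt in prompts:
            start = time.perf_counter()
            infer(prompt)
            latencies.append(time.perf_counter() - start)
    latencies.sort()
    return {
        "mean_s": statistics.mean(latencies),
        "p95_s": latencies[int(0.95 * len(latencies))],
    }

# Hypothetical stub standing in for a real model endpoint call.
def fake_infer(prompt):
    time.sleep(0.001)  # simulated inference time
    return prompt.upper()

stats = benchmark(fake_infer, ["hello", "world"], runs=5)
print(f"mean={stats['mean_s']:.4f}s p95={stats['p95_s']:.4f}s")
```

Swapping the stub for a real client call is all that changes; quality benchmarks (accuracy on an eval set) would layer on top of the same loop by scoring each response instead of only timing it.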

Read More