
Technical How-to

Experimenting with Large-scale Machine Learning using Amazon SageMaker Pipelines and MLflow

This post explains how large language models (LLMs) can be fine-tuned to better adapt to specific domains or tasks, using Amazon SageMaker and MLflow. When working with LLMs, customers may have varied requirements such as choosing a suitable pre-trained foundation model (FM) or customizing an existing model for a specific task. Using Amazon SageMaker with…
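As a rough illustration of that workflow, the sketch below logs a fine-tuning run's parameters, metrics, and artifacts to MLflow from a SageMaker training script. The tracking server ARN, base model, metric names, and training loop are placeholder assumptions, not details from the post.

```python
import mlflow

def run_one_epoch(epoch: int) -> float:
    """Stand-in for one fine-tuning epoch; returns a dummy loss value."""
    return 1.0 / (epoch + 1)

# SageMaker's managed MLflow feature exposes a tracking server ARN; this one is a placeholder.
mlflow.set_tracking_uri("arn:aws:sagemaker:us-east-1:111122223333:mlflow-tracking-server/example")
mlflow.set_experiment("llm-fine-tuning")

with mlflow.start_run(run_name="fine-tune-run-1"):
    # Hyperparameters for the fine-tuning job (illustrative values).
    mlflow.log_params({"base_model": "meta-llama/Llama-2-7b", "epochs": 3, "lr": 2e-4})

    for epoch in range(3):
        mlflow.log_metric("train_loss", run_one_epoch(epoch), step=epoch)

    # Attach the resulting model artifacts to the run for later comparison and registration.
    mlflow.log_artifacts("/opt/ml/model", artifact_path="model")
```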

Read More

Obtain valuable and actionable business insights from AWS using Amazon Q Business

Amazon Web Services (AWS) has developed a solution that uses the AWS generative artificial intelligence (AI) service Amazon Q Business to provide actionable insights based on support data, improving the operation and health of AWS environments. Amazon Q Business is a generative AI-powered enterprise chat assistant that enables natural language interactions with an organization's data…
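For context, a minimal sketch of querying such an assistant with the boto3 qbusiness client is shown below; the application ID, user identity, and question are hypothetical placeholders.

```python
import boto3

# Ask an existing Amazon Q Business application a natural-language question
# about operational data and print the grounded answer with its sources.
qbusiness = boto3.client("qbusiness", region_name="us-east-1")

response = qbusiness.chat_sync(
    applicationId="a1b2c3d4-example",          # hypothetical Q Business application ID
    userId="ops-engineer@example.com",
    userMessage="Which of my accounts had the most Support cases last month?",
)

print(response["systemMessage"])                # generated answer
for source in response.get("sourceAttributions", []):
    print("-", source.get("title"))             # documents the answer was grounded on
```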

Read More

Extract meaningful and practical operational insights from AWS using Amazon Q Business

Amazon Web Services (AWS) has revealed a new automated solution that leverages artificial intelligence (AI) to synthesize insights and recommendations for its customers. By using AWS's AI services like Amazon Q Business, customers can gain actionable insights based on common patterns, issues, and resolutions from AWS Support cases, AWS Trusted Advisor, and AWS Health data. Amazon…
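As a rough sketch of where that source data comes from, the snippet below pulls Support cases, Trusted Advisor checks, and Health events with boto3. It assumes a Business or Enterprise Support plan (required by the Support and Health APIs) and uses no filters.

```python
import boto3

# The Support and Health APIs are served from the us-east-1 endpoint.
support = boto3.client("support", region_name="us-east-1")
health = boto3.client("health", region_name="us-east-1")

cases = support.describe_cases(includeResolvedCases=True)["cases"]
checks = support.describe_trusted_advisor_checks(language="en")["checks"]
events = health.describe_events()["events"]

print(f"{len(cases)} Support cases, {len(checks)} Trusted Advisor checks, {len(events)} Health events")
```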

Read More

Automating model customization in Amazon Bedrock using an AWS Step Functions workflow

Amazon Web Services (AWS) recently introduced support for customizing models in Amazon Bedrock, a fully managed service for AI applications. The key role of Amazon Bedrock is to provide high-performing foundation models (FMs) from leading AI companies such as Cohere and Meta. It helps businesses use their proprietary data to pre-train these models according to…
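The core API call such a Step Functions workflow would orchestrate might look like the sketch below. The job name, role ARN, S3 URIs, base model, and hyperparameters are illustrative placeholders, not values from the post.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Start a continued pre-training (or fine-tuning) job on a Bedrock base model.
job = bedrock.create_model_customization_job(
    jobName="custom-model-job-example",
    customModelName="my-domain-model",
    roleArn="arn:aws:iam::111122223333:role/BedrockCustomizationRole",
    baseModelIdentifier="amazon.titan-text-express-v1",
    customizationType="CONTINUED_PRE_TRAINING",
    trainingDataConfig={"s3Uri": "s3://example-bucket/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://example-bucket/output/"},
    hyperParameters={"epochCount": "1", "batchSize": "1", "learningRate": "0.00001"},
)
print(job["jobArn"])

# A later workflow state would poll get_model_customization_job(jobIdentifier=...)
# until the status is "Completed" before provisioning throughput for the custom model.
```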

Read More

Improve RAG accuracy with fine-tuned embedding models on Amazon SageMaker

Retrieval Augmented Generation (RAG) enhances the performance of large language models (LLMs) by incorporating extra knowledge from an external data source that wasn't involved in the original model training. The two main components of RAG are indexing and retrieval. Despite their merits, pre-trained embedding models, trained on generic datasets such as Wikipedia, often struggle to effectively represent…
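One common way to fine-tune an embedding model on in-domain pairs is sketched below using the sentence-transformers library; the base model and the (query, relevant passage) examples are illustrative assumptions, not details from the post.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Start from a pre-trained embedding model and adapt it to domain-specific pairs.
model = SentenceTransformer("BAAI/bge-base-en-v1.5")

train_examples = [
    InputExample(texts=["What is the claim filing deadline?",
                        "Claims must be filed within 90 days of the incident."]),
    InputExample(texts=["How do I reset my policy password?",
                        "Password resets are handled in the policyholder portal."]),
]
loader = DataLoader(train_examples, shuffle=True, batch_size=2)
loss = losses.MultipleNegativesRankingLoss(model)   # other in-batch passages act as negatives

model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=10)
model.save("finetuned-domain-embeddings")           # can then be hosted on a SageMaker endpoint
```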

Read More

Using Agents for Amazon Bedrock to interactively generate infrastructure as code

In the evolving landscape of cloud infrastructure, Agents for Amazon Bedrock is a promising tool for enhancing infrastructure as code (IaC) processes. It uses artificial intelligence to automate the triggering and orchestration of user-requested tasks, augmenting them with company-specific information. The process involves analyzing cloud architecture diagrams, which are then converted into Terraform…
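Invoking such an agent from code might look like the following sketch; the agent ID, alias, and prompt are hypothetical, and the response arrives as an event stream of text chunks.

```python
import uuid
import boto3

runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

# Ask an existing Bedrock agent (configured for IaC generation) to draft Terraform.
response = runtime.invoke_agent(
    agentId="AGENT1234",                 # hypothetical agent ID
    agentAliasId="TSTALIASID",           # hypothetical alias ID
    sessionId=str(uuid.uuid4()),
    inputText="Generate Terraform for a VPC with two private subnets and an ALB.",
)

terraform = ""
for event in response["completion"]:     # stream of chunk events
    if "chunk" in event:
        terraform += event["chunk"]["bytes"].decode("utf-8")
print(terraform)
```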

Read More