DeepLearning.AI has rolled out fifteen short artificial intelligence (AI) courses aimed at strengthening students' proficiency in AI and generative AI technologies. The training duration isn't specified, but the depth and breadth of the curriculum make it well suited to AI beginners and intermediate learners.
The courses are described below:
1. Red Teaming LLM Applications: It covers enhancing LLM…
Mobile applications play a crucial role in day-to-day life. However, the diversity and intricacy of mobile UIs often pose challenges in terms of accessibility and user-friendliness. Many models struggle to decode the unique aspects of UIs, such as elongated aspect ratios and densely packed elements, creating a demand for specialized models that can interpret…
Speech synthesis—the technological process of creating artificial speech—is no longer a sci-fi fantasy but a rapidly evolving reality. As interactions with digital assistants and conversational agents become commonplace in our daily lives, the demand for synthesized speech that accurately mimics natural human speech has escalated. The main challenge isn't simply to create speech that sounds…
In the field of Artificial Intelligence (AI), "zero-shot" capabilities refer to the ability of an AI system to recognize any object, comprehend any text, and generate realistic images without being explicitly trained on those concepts. Companies like Google and OpenAI have made advances in multi-modal AI models such as CLIP and DALL-E, which perform well…
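As a rough illustration of what zero-shot recognition looks like in practice, here is a minimal sketch using the Hugging Face transformers zero-shot image classification pipeline with a public CLIP checkpoint. The image path and candidate labels are placeholders, and the snippet is an example of the general technique rather than anything taken from the article itself.

```python
# Minimal sketch of zero-shot image classification with CLIP via the
# Hugging Face transformers pipeline. "photo.jpg" and the labels below
# are placeholders; any image and label set work the same way.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-image-classification",
    model="openai/clip-vit-base-patch32",
)

# Score an image against labels the model was never explicitly trained to
# classify; CLIP ranks them by image-text similarity.
results = classifier(
    "photo.jpg",  # path or URL to any image (placeholder)
    candidate_labels=["a dog", "a cat", "a bicycle"],
)
for r in results:
    print(f"{r['label']}: {r['score']:.3f}")
```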
To improve the planning and problem-solving capabilities of language models, researchers from Stanford University, MIT, and Harvey Mudd have introduced a method called Stream of Search (SoS). This method trains language models on search sequences represented as serialized strings. It essentially presents these models with a set of problems and solutions in the language they…
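To make the idea of "search sequences represented as serialized strings" more concrete, here is a toy sketch that logs a depth-first search, including dead ends and backtracking, as plain text that could serve as language-model training data. The problem, trace syntax, and function names are invented for illustration and are not the paper's actual format.

```python
# Toy sketch of serializing a search process into text: run a simple
# depth-first search and record every expansion, dead end, and backtrack
# as lines of a training string. Illustrative only, not the SoS format.

def dfs_trace(start, target, moves):
    """Search for `target` from `start`, logging every step as text."""
    lines = [f"GOAL {target} START {start}"]

    def dfs(state, depth):
        lines.append(f"EXPAND {state}")
        if state == target:
            lines.append("FOUND")
            return True
        if depth == 0:
            lines.append("DEAD END, BACKTRACK")
            return False
        for name, fn in moves:
            lines.append(f"TRY {name}")
            if dfs(fn(state), depth - 1):
                return True
        lines.append("DEAD END, BACKTRACK")
        return False

    dfs(start, depth=4)
    return "\n".join(lines)  # serialized search string

moves = [("+3", lambda x: x + 3), ("*2", lambda x: x * 2)]
print(dfs_trace(start=2, target=10, moves=moves))
```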
Language models (LMs) are a crucial segment of artificial intelligence and can play a key role in complex decision-making, planning, and reasoning. However, although LMs have the capacity to learn and improve, their training rarely exposes them to learning effectively from their mistakes. Several models also face difficulties in planning and anticipating the consequences of their…
The complexity of mathematical reasoning in large language models (LLMs) often exceeds the capabilities of existing evaluation methods. These models are crucial for problem-solving and decision-making, particularly in the field of artificial intelligence (AI). Yet the primary method of evaluation – comparing the final LLM result to a ground truth and then calculating overall accuracy…
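For reference, the final-answer evaluation described above boils down to something like the following sketch: normalize each model answer, compare it with the ground truth, and report overall accuracy. The normalization and function name are illustrative; note that a metric like this says nothing about the quality of the intermediate reasoning steps.

```python
# Minimal sketch of final-answer, exact-match evaluation: compare each
# model answer to the ground truth and report overall accuracy.
# The normalization is deliberately simple and illustrative.

def exact_match_accuracy(predictions, references):
    """Fraction of predictions whose normalized answer equals the reference."""
    def normalize(ans: str) -> str:
        return ans.strip().lower().rstrip(".")

    correct = sum(
        normalize(p) == normalize(r) for p, r in zip(predictions, references)
    )
    return correct / len(references)

preds = ["42", " 42. ", "7"]
refs = ["42", "42", "8"]
print(exact_match_accuracy(preds, refs))  # ~0.667; ignores reasoning steps
```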
Generative language models in the field of natural language processing (NLP) have fuelled significant progress, largely thanks to the availability of vast amounts of web-scale textual data. Such models can analyze and learn complex linguistic structures and patterns, which are subsequently used for various tasks. However, successful implementation of these models depends heavily on…
Meta has developed a machine-learning (ML) model to improve the efficiency and reliability of real-time communication (RTC) across its various apps. The ML-based approach addresses the limitations of existing bandwidth estimation (BWE) and congestion control methods, such as the Google Congestion Controller (GCC) used in WebRTC, which relies on hand-tuned parameters…
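To illustrate what "hand-tuned parameters" means in a rule-based estimator, here is a toy delay-based bandwidth update with fixed thresholds and multipliers. It is a simplified sketch for illustration only, not GCC and not Meta's model; every constant and name below is made up.

```python
# Toy sketch of a rule-based bandwidth estimator of the kind the blurb
# contrasts with an ML approach. The threshold and multipliers are the
# "hand-tuned parameters"; this is illustrative, not GCC itself.

def update_estimate(estimate_kbps, queuing_delay_ms,
                    overuse_threshold_ms=25.0,   # hand-tuned
                    increase_factor=1.05,        # hand-tuned
                    decrease_factor=0.85):       # hand-tuned
    """Raise the estimate while delay is low, back off when it spikes."""
    if queuing_delay_ms > overuse_threshold_ms:
        # Delay is building up: assume congestion and back off.
        return estimate_kbps * decrease_factor
    # Network looks healthy: probe for more bandwidth.
    return estimate_kbps * increase_factor

estimate = 1000.0  # kbps
for delay in [10, 12, 40, 35, 8]:  # observed queuing delays (ms)
    estimate = update_estimate(estimate, delay)
    print(f"delay={delay}ms -> estimate={estimate:.0f} kbps")
```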