Researchers from Stony Brook University, the US Naval Academy, and the University of Texas at Austin have developed CAT-BENCH, a benchmark for assessing language models' ability to predict the order of steps in cooking recipes. The research focuses on how language models comprehend plans, examining their understanding of the temporal sequencing of…
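The benchmark's exact prompt format is not given here, but the core idea, probing whether a model knows that one recipe step must precede another, can be sketched. Below is a hypothetical step-order probe in that spirit; the recipe, prompt wording, and evaluation loop are illustrative assumptions, not CAT-BENCH's actual interface.

```python
# Hypothetical step-order probe in the spirit of CAT-BENCH.
# Recipe, prompt wording, and loop are illustrative assumptions.

recipe_steps = [
    "Preheat the oven to 180C.",
    "Mix flour, sugar, and eggs into a batter.",
    "Pour the batter into a greased pan.",
    "Bake for 30 minutes.",
]

def make_probe(steps: list[str], i: int, j: int) -> str:
    """Ask whether step i must be completed before step j."""
    numbered = "\n".join(f"{k + 1}. {s}" for k, s in enumerate(steps))
    return (
        f"Recipe:\n{numbered}\n\n"
        f"Must step {i + 1} be done before step {j + 1}? Answer Yes or No."
    )

# One probe per ordered pair of distinct steps; a model's Yes/No answers
# would then be scored against gold step-dependency labels.
probes = [
    make_probe(recipe_steps, i, j)
    for i in range(len(recipe_steps))
    for j in range(len(recipe_steps))
    if i != j
]
print(probes[0])
```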
Artificial Intelligence (AI) innovations continue to pose distinct challenges to existing legal frameworks, particularly in assigning liability: AI systems lack the discernible intent that is traditionally central to establishing it. This problem is addressed in a new paper from Yale Law School, which suggests employing objective standards in regulating AI.
By viewing…
Two AI, a new startup in the artificial intelligence (AI) space, has launched SUTRA, a language model proficient in more than 30 languages, including South Asian languages such as Gujarati, Marathi, Tamil, and Telugu. It aims to address the unique linguistic challenges and opportunities in South Asia.
Built from two mixture-of-experts transformers…
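The summary mentions mixture-of-experts transformers without further architectural detail. As a generic illustration of the MoE idea (not SUTRA's actual implementation), here is a minimal top-k routed feed-forward layer in PyTorch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Generic top-k mixture-of-experts feed-forward layer.

    Illustrative only; SUTRA's real architecture is not described here.
    """

    def __init__(self, d_model: int, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        ])
        self.router = nn.Linear(d_model, n_experts)  # gating network
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); route each token to its top-k experts.
        logits = self.router(x)
        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)  # normalize the top-k gates
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., k] == e  # tokens routed to expert e at slot k
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out
```

Routing each token to only `top_k` experts keeps per-token compute roughly constant while total parameter count grows with the number of experts, which is the usual motivation for MoE designs.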
Large language models (LLMs), instrumental in natural language processing tasks like translation, summarization, and text generation, face challenges in consistently adhering to logical constraints during generation. This adherence is essential in sensitive applications where precision and instruction compliance are paramount. Traditional methods for imposing constraints on LLMs, such as the GeLaTo framework, have limitations…
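GeLaTo itself guides decoding with tractable probabilistic models; as a much simpler stand-in for the general idea of constraint-imposing decoding (not GeLaTo's method), the sketch below enforces a hard lexical constraint by masking logits before selecting a token:

```python
import torch

def constrained_next_token(logits: torch.Tensor, allowed_ids: list[int]) -> int:
    """Pick the next token while enforcing a hard lexical constraint.

    Naive logit masking for illustration; GeLaTo instead steers decoding
    with a tractable probabilistic model of the constraint.
    """
    mask = torch.full_like(logits, float("-inf"))
    mask[allowed_ids] = 0.0                   # keep only permitted tokens
    return int(torch.argmax(logits + mask))  # greedy choice among allowed

# Example: force the choice to fall among three candidate token ids.
vocab_logits = torch.randn(50_000)
print(constrained_next_token(vocab_logits, allowed_ids=[11, 42, 7]))
```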
Large language models (LLMs) are central to the field of natural language processing, used in tasks like translation, summarization, and creative text generation. They learn patterns and relationships in language from extensive data, which enables them to handle tasks requiring an understanding of context, syntax, and semantics. However, a persistent challenge remains in ensuring…
Researchers from various international institutions have developed a computational method called RAIN to rapidly identify broadly neutralizing antibodies (bNAbs) against HIV-1. bNAbs can target the virus's envelope proteins to reduce viral loads and prevent infection, but discovering them is arduous because it requires B-cell isolation and next-generation sequencing,…
Broadly neutralizing antibodies (bNAbs) play a crucial role in fighting HIV-1: they target the virus's envelope proteins, an approach that shows promise in reducing viral loads and preventing infection. However, identifying these antibodies is a complex process because the virus rapidly mutates and evades the immune system. Only 255 bNAbs have been discovered, therefore…
The paper discusses the challenge of ensuring that large language models (LLMs) generate accurate, credible, and verifiable responses. This is difficult because current methods are prone to errors and hallucinations, which result in incorrect or misleading information. To address this, the researchers introduce a new verification framework to improve the accuracy and…
Large Language Models (LLMs) have revolutionized a variety of artificial intelligence (AI) applications, yet serving multiple LLMs efficiently remains a challenge because of their immense computational requirements. Present methods, like spatial partitioning, which designates a separate group of GPUs for each LLM, fall short because the lack of concurrency leads to resource…
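To see why static spatial partitioning underuses hardware, consider the toy simulation below (an entirely made-up, bursty workload): it compares dedicated GPU groups against a shared pool that serves whichever model has pending requests each tick.

```python
import random

random.seed(0)
GPUS = 4
MODELS = ["llm_a", "llm_b"]

# Bursty, made-up per-tick request arrivals for each model.
workload = [{m: random.choice([0, 0, 3]) for m in MODELS} for _ in range(100)]

# Spatial partitioning: each model may only use its own half of the GPUs.
spatial = sum(min(tick[m], GPUS // 2) for tick in workload for m in MODELS)

# Shared pool: all GPUs serve whichever requests are pending this tick.
shared = sum(min(sum(tick.values()), GPUS) for tick in workload)

print(f"requests served -- spatial: {spatial}, shared pool: {shared}")
```

Whenever one model bursts while the other sits idle, the shared pool absorbs requests that a dedicated partition would have left waiting, which is the concurrency gap the summary alludes to.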
Artificial Intelligence (AI) has been making strides in the field of drug discovery, and DeepMind's AI model AlphaFold has made significant contributions. In 2021, AlphaFold predicted the structures of nearly the entire human proteome, a groundbreaking achievement that enables a better understanding of protein activity and proteins' potential roles in disease. This is…
Reinforcement learning from human feedback (RLHF) aligns large language models (LLMs) with human preferences by training them to earn high scores from a reward model built on those preferences. However, it is beset by several challenges, such as fine-tuning being limited to small datasets, the risk of AI exploiting flaws…
Reinforcement Learning from Human Feedback (RLHF) uses a reward model trained on human preferences to align large language models (LLMs), optimizing them for the modeled reward. Yet there are issues, such as the model becoming overly specialized, the potential for the LLM to exploit flaws in the reward model, and a reduction in output variety.…
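Both of these summaries turn on a reward model trained from human preference pairs. As a minimal sketch of the standard recipe for training one, here is the Bradley-Terry pairwise loss in PyTorch; this is the common formulation, not necessarily either paper's exact method.

```python
import torch
import torch.nn.functional as F

def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry pairwise loss for reward-model training.

    r_chosen / r_rejected are the scalar scores the reward model assigns to
    the preferred and dispreferred response in each human-labeled pair; the
    loss pushes chosen scores above rejected ones.
    """
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Example: scores for a batch of three preference pairs.
loss = preference_loss(torch.tensor([1.2, 0.3, 2.0]),
                       torch.tensor([0.5, 0.9, 1.1]))
print(loss)
```

The reward-hacking and reduced-diversity issues both summaries mention arise downstream, when the LLM is optimized against this learned (and imperfect) reward signal.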