Broadly neutralizing antibodies (bNAbs) play a crucial role in fighting HIV-1: by targeting the virus's envelope proteins, they show promise in reducing viral loads and preventing infection. However, identifying these antibodies is a complex process because the virus mutates rapidly and evades the immune system. Only 255 bNAbs have been discovered, therefore…
The paper discusses the challenge of ensuring that large language models (LLMs) generate accurate, credible, and verifiable responses. This is difficult because current methods are prone to errors and hallucinations, which result in incorrect or misleading information. To address this, the researchers introduce a new verification framework to improve the accuracy and…
Large language models (LLMs) have revolutionized a variety of artificial intelligence (AI) applications, yet serving multiple LLMs efficiently remains a challenge due to their heavy computational requirements. Present methods, such as spatial partitioning, which dedicates a separate group of GPUs to each LLM, fall short because the lack of concurrency leads to resource…
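The spatial-partitioning limitation mentioned above can be illustrated with a minimal sketch. All names here (model identifiers, GPU counts) are illustrative assumptions, not details from the paper:

```python
# Hypothetical sketch of spatial partitioning for multi-LLM serving:
# each model is pinned to a dedicated, disjoint group of GPUs, so a
# GPU sits idle whenever its assigned model receives no traffic.

def partition_gpus(models, num_gpus):
    """Statically split num_gpus GPUs evenly across the given models."""
    per_model = num_gpus // len(models)
    return {
        model: list(range(i * per_model, (i + 1) * per_model))
        for i, model in enumerate(models)
    }

assignment = partition_gpus(["llm-a", "llm-b"], 8)
# llm-a owns GPUs 0-3 and llm-b owns GPUs 4-7; if llm-b is idle,
# GPUs 4-7 cannot help drain llm-a's request queue -- the missing
# concurrency the summary refers to.
```

Because the assignment is static, utilization falls whenever request loads across models are uneven, which motivates serving schemes that share GPUs across models.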
The Massachusetts Institute of Technology (MIT) has announced plans to fund 16 research proposals dedicated to exploring the potential of generative artificial intelligence (AI). The funding process began last summer, when MIT President Sally Kornbluth and Provost Cynthia Barnhart invited proposals that could provide robust policy guidelines, effective roadmaps, and calls to action…
Artificial intelligence (AI) has been making strides in drug discovery, and DeepMind's AlphaFold model has made significant contributions. In 2021, AlphaFold predicted the structures of nearly the entire human proteome, a groundbreaking achievement that enables a better understanding of protein activity and proteins' potential roles in disease. This is…
Reinforcement learning from human feedback (RLHF) is a technique for aligning large language models (LLMs) with human preferences: a reward model is trained on human judgments, and the LLM is fine-tuned to maximize the reward it assigns. However, the approach faces several challenges, such as fine-tuning being limited to small datasets, the risk of the AI exploiting flaws…
Reinforcement learning from human feedback (RLHF) aligns large language models (LLMs) by optimizing them against a reward model trained on human preferences. Yet there are issues, such as the model becoming too specialized, the potential for the LLM to exploit flaws in the reward model, and a reduction in output variety…
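The reward-hacking and diversity-loss issues raised in these RLHF summaries are commonly mitigated with a KL penalty that keeps the fine-tuned policy close to a reference model. A minimal sketch of that standard shaping (not any specific paper's method; the beta value and log-probabilities below are illustrative):

```python
# KL-penalized RLHF reward: the optimized signal is the reward model's
# score minus a penalty proportional to how far the policy's log-prob
# for the sampled response drifts from the reference model's.

def shaped_reward(reward_score, logp_policy, logp_reference, beta=0.1):
    """Return r(x, y) - beta * (log pi(y|x) - log pi_ref(y|x))."""
    return reward_score - beta * (logp_policy - logp_reference)

# When the policy drifts far from the reference, the penalty eats into
# the reward even if the reward model's raw score is high:
high_drift = shaped_reward(2.0, logp_policy=-1.0, logp_reference=-5.0)  # 2.0 - 0.1*4.0 = 1.6
no_drift = shaped_reward(2.0, logp_policy=-3.0, logp_reference=-3.0)    # 2.0
```

The penalty term discourages the policy from collapsing onto narrow, reward-model-pleasing outputs, which directly addresses the exploitation and output-variety concerns above.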
The combined potential of AI systems and high-performance computing (HPC) platforms is becoming increasingly apparent in scientific research. AI models like ChatGPT, built on the transformer architecture and trained on internet-scale data, have laid the groundwork for significant scientific breakthroughs. These include black hole…
Artificial Intelligence (AI) has demonstrated transformative potential in scientific research, particularly when scalable AI systems are applied to high-performance computing (HPC) platforms. This necessitates the integration of large-scale computational resources with expansive datasets to tackle complex scientific problems.
AI models like ChatGPT serve as exemplars of this transformative potential. The success of these models can…
Artificial intelligence (AI) systems are tested rigorously before release to ensure they cannot be used for dangerous activities like bioterrorism or manipulation. Such safety measures are essential because powerful AI systems are trained to refuse requests that could cause harm, unlike less guarded open-source models. However, researchers from UC Berkeley recently found that guaranteeing…
Machine learning pioneer Hugging Face has released Transformers version 4.42, a significant update to its widely used machine-learning library. Key enhancements include several new advanced models, improved tool-use and retrieval-augmented generation support, GGUF fine-tuning, and a quantized KV cache, among other improvements.
The release features the addition of new models like Gemma 2, RT-DETR, InstructBlip,…
In response to a call from MIT President Sally Kornbluth and Provost Cynthia Barnhart, researchers submitted 75 proposals addressing the use of generative AI. Given the overwhelming response, a second call was issued, drawing 53 submissions. Twenty-seven proposals from the initial call and 16 from the second have been granted seed funding…