Scientists from The Hong Kong University of Science and Technology and the University of Illinois Urbana-Champaign have presented ScaleBiO, a novel bilevel optimization (BO) method that can scale up to 34B-parameter large language models (LLMs) on data reweighting tasks. The method relies on a memory-efficient training technique called LISA and runs on eight A40 GPUs.
BO is attracting…
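The core bilevel idea behind data reweighting can be shown in a toy form: an inner loop fits model parameters under per-sample weights, and an outer loop adjusts those weights via the hypergradient of a clean validation loss through one inner step. This is an illustrative sketch on linear regression, not ScaleBiO's actual algorithm (which uses LISA at LLM scale); the setup, learning rates, and variable names are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: half the training labels are corrupted with noise;
# a small clean validation set guides the outer loop.
d, n, m = 5, 200, 50
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true
y[:n // 2] += rng.normal(scale=5.0, size=n // 2)  # corrupted half
Xv = rng.normal(size=(m, d))
yv = Xv @ w_true

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)   # inner (model) parameters
s = np.zeros(n)   # outer logits -> per-sample weights via sigmoid
lr_in, lr_out = 0.01, 5.0

for step in range(500):
    weights = sigmoid(s)
    resid = X @ w - y
    # Inner step: gradient of the weighted training loss.
    g = (2.0 / n) * X.T @ (weights * resid)
    w_next = w - lr_in * g
    # Outer step: hypergradient of the validation loss through
    # the single inner update (chain rule, computed analytically).
    v = (2.0 / m) * Xv.T @ (Xv @ w_next - yv)   # dL_val / dw_next
    ds = -lr_in * (2.0 / n) * resid * (X @ v) * weights * (1 - weights)
    s -= lr_out * ds
    w = w_next

clean_w = sigmoid(s[n // 2:]).mean()
noisy_w = sigmoid(s[:n // 2]).mean()
print(f"mean weight, clean samples: {clean_w:.2f}")
print(f"mean weight, noisy samples: {noisy_w:.2f}")
```

In this sketch the outer loop tends to downweight the corrupted samples because they increase validation loss; scaling this to billions of parameters is precisely the memory problem methods like ScaleBiO target.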
Researchers from Shanghai Jiao Tong University, Shanghai AI Laboratory, and Nanyang Technological University's S-Lab have developed an advanced multi-modal large language model (MLLM) called MG-LLaVA. The new model aims to overcome the limitations current MLLMs face from relying solely on low-resolution image inputs.
The main challenge with existing MLLMs has been their reliance on low-resolution inputs, which compromises their…
Large Language Models (LLMs) have demonstrated impressive performance across numerous tasks in recent years, particularly classification tasks. They exhibit a high degree of accuracy when provided with the correct answers, or "gold labels". However, if the right answer is deliberately left out, these models tend to select an option from the available choices, even when…
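The evaluation setup this blurb describes can be sketched as prompt construction with and without the gold label. The `build_prompt` helper below is hypothetical, not from the article; it simply shows how one might probe whether a model still commits to an answer once the correct option is withheld.

```python
# Hypothetical helper: build a multiple-choice prompt, optionally
# withholding the gold label to test whether a model still picks
# one of the remaining (all incorrect) options.
def build_prompt(question, options, gold, include_gold=True):
    shown = [o for o in options if include_gold or o != gold]
    letters = "ABCDEFG"
    lines = [question]
    lines += [f"{letters[i]}. {o}" for i, o in enumerate(shown)]
    lines.append("Answer with a single letter.")
    return "\n".join(lines)

q = "Which planet is closest to the Sun?"
opts = ["Venus", "Mercury", "Earth"]

with_gold = build_prompt(q, opts, gold="Mercury")
without_gold = build_prompt(q, opts, gold="Mercury", include_gold=False)
print(without_gold)
```

Comparing a model's answers on the two prompt variants makes the failure mode measurable: a well-calibrated model should abstain or flag the second prompt rather than choose among wrong options.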
Language models have become increasingly complex, making their inner workings a challenge to interpret. To address this, research has shifted toward mechanistic interpretability, which focuses on identifying and analyzing 'circuits': sparse computational subgraphs that capture specific aspects of a model's behavior.
The existing methodologies for…
Mental illness constitutes a critical public health issue globally, affecting one in eight people, many of whom lack access to adequate treatment. The training of mental health professionals often contends with a significant difficulty: the disconnect between formal education and real-world patient interactions. A potential solution to this problem might lie in the use of Large Language…
The fields of computer vision and graphics constantly seek to perfect 3D reconstruction from 2D image inputs. Neural Radiance Fields (NeRFs), while effective at rendering photorealistic views from new perspectives, fall short in reconstructing 3D scenes from 2D projections, a capability important for augmented reality (AR), virtual reality (VR), and robotic perception. Traditional…
Researchers focused on Multimodal Large Language Models (MLLMs) are striving to enhance AI's reasoning capabilities by integrating visual and textual data. Even though these models can interpret complex information from diverse sources such as images and text, they often struggle with complicated mathematical problems that contain visual content. To solve this issue, researchers are working…
Natural language processing (NLP) is an artificial intelligence field focused on the interaction between humans and computers using natural human language. It aims to create models that understand, interpret, and generate human language, thereby enabling human-computer interactions. Applications of NLP range from language translation to sentiment analysis and conversational agents. However, despite advancements, language models…
Deep learning models such as Convolutional Neural Networks (CNNs) and Vision Transformers have seen vast success in visual tasks like image classification, object detection, and semantic segmentation. However, their robustness to changes in input data, particularly in security-critical applications, remains a significant concern. Many studies have assessed the robustness of CNNs and Transformers against common…