Multimodal Large Language Models (MLLMs) such as Flamingo, BLIP-2, LLaVA, and MiniGPT-4 exhibit emergent vision-language capabilities. However, they struggle to recognize and understand intricate details in high-resolution images. To address this, researchers have developed InfiMM-HD, a new architecture specifically designed to process images of varying resolutions at a lower computational…
