Large Language Models (LLMs) and Large Multi-modal Models (LMMs) are effective across a wide range of domains and tasks, but scaling these models up incurs significant computational cost and limits inference speed. Sparse Mixtures of Experts (SMoE) can help overcome these challenges by enabling model scaling while reducing computational cost. However, SMoE struggles with low expert…
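To make the cost argument concrete, below is a minimal sketch (not the implementation described here) of a top-k SMoE layer in PyTorch: a router scores every expert per token, but only the k highest-scoring experts are actually evaluated, so per-token compute grows with k rather than with the total number of experts. The layer widths, expert MLP structure, and choice of top-2 routing are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKSMoE(nn.Module):
    """Illustrative top-k sparse MoE layer (hypothetical sizes, for exposition only)."""

    def __init__(self, d_model=256, d_hidden=512, num_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, num_experts)  # one routing score per expert
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x):                      # x: (num_tokens, d_model)
        scores = self.router(x)                # (num_tokens, num_experts)
        topk_scores, topk_idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(topk_scores, dim=-1)  # renormalize over the selected experts
        out = torch.zeros_like(x)
        # Each expert processes only the tokens routed to it, so per-token
        # compute scales with k, not with num_experts.
        for e, expert in enumerate(self.experts):
            token_ids, slot = (topk_idx == e).nonzero(as_tuple=True)
            if token_ids.numel() == 0:
                continue
            out[token_ids] += weights[token_ids, slot].unsqueeze(-1) * expert(x[token_ids])
        return out

tokens = torch.randn(16, 256)                  # 16 tokens with hypothetical width 256
print(TopKSMoE()(tokens).shape)                # torch.Size([16, 256])
```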
