Training Large Language Models (LLMs) such as GPT-3 and Llama at scale suffers significant inefficiency from hardware failures and network congestion, which waste GPU resources and prolong training. Existing approaches to these challenges, which rely on basic fault-tolerance and traffic-management strategies, are often inefficient…
