Companies like FedEx find the task of efficiently routing holiday packages massively complex, and they often turn to specialized software to find a solution. This software, called a mixed-integer linear programming (MILP) solver, breaks a large optimization problem into smaller subproblems and searches them for the best solution. However, this process can be time-consuming, sometimes taking days to complete.
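
To make the setup concrete, the sketch below formulates and solves a toy MILP with an off-the-shelf solver (SciPy's HiGHS-based `milp` routine, chosen only for illustration; the research applies to general-purpose MILP solvers). The tiny routing-style instance and all of its numbers are made up for the example.

```python
# A toy mixed-integer linear program, solved with SciPy's HiGHS backend.
# Hypothetical instance: assign 3 packages to 2 trucks at minimum cost,
# each package on exactly one truck, each truck holding at most 2 packages.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Decision variables x[p, t] in {0, 1}: package p rides on truck t.
# Flattened as x = [x00, x01, x10, x11, x20, x21].
cost = np.array([4.0, 6.0, 2.0, 3.0, 5.0, 1.0])  # cost of each assignment

# Each package is assigned to exactly one truck.
A_assign = np.array([
    [1, 1, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 1, 1],
])
# Each truck carries at most 2 packages.
A_capacity = np.array([
    [1, 0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0, 1],
])

constraints = [
    LinearConstraint(A_assign, lb=1, ub=1),    # equality: exactly one truck per package
    LinearConstraint(A_capacity, lb=0, ub=2),  # capacity limit per truck
]

result = milp(
    c=cost,
    constraints=constraints,
    integrality=np.ones_like(cost),            # all variables are integer (0/1 here)
    bounds=Bounds(0, 1),
)
print("optimal cost:", result.fun)
print("assignment:", result.x.round().astype(int))
```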

Researchers from MIT and ETH Zurich have found a way to speed up this process using machine learning. They identified a key intermediate step in MILP solvers that has so many potential solutions it slows the whole process down. Using a filtering technique, they simplified this step, then used machine learning to pick the best option for the specific type of problem at hand.

This model uses a company’s own data to tailor a general-purpose MILP solver to the problem at hand. It has sped up MILP solvers by between 30 and 70 percent, without any loss in accuracy. The approach can be applied wherever a MILP solver is used, such as by ride-hailing services, electric grid operators, vaccination distributors, and others facing similar resource-allocation problems.

MILP problems have an exponential number of potential solutions, which makes the problem class NP-hard, meaning it is extremely difficult to solve. A typical solver uses a divide-and-conquer approach: it first splits the space of potential solutions into smaller pieces using a technique called branching, then uses a technique called cutting to tighten those pieces so they can be searched faster. However, identifying the ideal combination of separator (cutting) algorithms to apply is itself a problem with an exponential number of possible configurations.
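
The branching step can be sketched in a few lines: repeatedly solve the linear-programming relaxation of a subproblem, and if some variable comes back fractional, split the subproblem in two by forcing that variable down or up. The sketch below, a minimal illustration rather than an actual solver implementation, uses plain Python with SciPy's `linprog` for the relaxations; a real solver would also run separator (cutting-plane) algorithms at each node to tighten the relaxation, which is the step the researchers target. The toy instance is hypothetical.

```python
# Minimal branch-and-bound sketch for a small integer program (minimization).
# Each node solves an LP relaxation; fractional variables trigger branching.
# Real MILP solvers also apply separator (cutting-plane) algorithms at each
# node to tighten the relaxation -- the step the researchers' model configures.
import math
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, bounds):
    best_val, best_x = math.inf, None
    stack = [bounds]                                   # each node = a set of variable bounds
    while stack:
        node_bounds = stack.pop()
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=node_bounds, method="highs")
        if not res.success or res.fun >= best_val:     # infeasible node, or pruned by bound
            continue
        frac = [i for i, v in enumerate(res.x) if abs(v - round(v)) > 1e-6]
        if not frac:                                   # integral solution: update incumbent
            best_val, best_x = res.fun, res.x.round()
            continue
        i = frac[0]                                    # branch on the first fractional variable
        lo, hi = node_bounds[i]
        down = list(node_bounds); down[i] = (lo, math.floor(res.x[i]))
        up = list(node_bounds); up[i] = (math.ceil(res.x[i]), hi)
        stack.extend([down, up])
    return best_val, best_x

# Hypothetical toy instance: minimize -5*x0 - 4*x1
# subject to 6*x0 + 4*x1 <= 24 and x0 + 2*x1 <= 6, with integer x0, x1 >= 0.
val, x = branch_and_bound(
    c=[-5, -4],
    A_ub=[[6, 4], [1, 2]],
    b_ub=[24, 6],
    bounds=[(0, 10), (0, 10)],
)
print("optimal value:", val, "at", x)
```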

The researchers devised a filtering mechanism that reduces this separator search space from more than 130,000 potential combinations to about 20 options. From these 20 options, a machine-learning model picks the best combination of algorithms for the user’s specific optimization problem. The model keeps learning and improving with each problem it solves, using an approach known as contextual bandits, a form of reinforcement learning. This approach not only speeds up MILP solvers but also maintains the same level of accuracy.
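
The selection loop can be framed as a contextual bandit: for each incoming instance, the learner sees features of the problem (the context), picks one of the roughly 20 surviving separator configurations (an arm), observes how much the solve sped up (the reward), and updates its estimates. The sketch below uses a standard LinUCB-style update with placeholder feature and reward functions; it illustrates the bandit formulation only, not the researchers' actual model.

```python
# Contextual-bandit sketch (LinUCB-style) for picking a separator configuration.
# Context = features of the MILP instance; arms = candidate configurations that
# survived filtering; reward = observed speedup. The feature and reward signals
# below are simulated placeholders, not the researchers' model.
import numpy as np

class LinUCB:
    def __init__(self, n_arms, n_features, alpha=1.0):
        self.alpha = alpha
        # Per-arm ridge-regression statistics: A = X^T X + I, b = X^T r.
        self.A = [np.eye(n_features) for _ in range(n_arms)]
        self.b = [np.zeros(n_features) for _ in range(n_arms)]

    def choose(self, context):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                          # estimated reward weights for this arm
            bonus = self.alpha * np.sqrt(context @ A_inv @ context)
            scores.append(theta @ context + bonus)     # optimism in the face of uncertainty
        return int(np.argmax(scores))

    def update(self, arm, context, reward):
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context

# Hypothetical training loop over a stream of problem instances.
rng = np.random.default_rng(0)
n_configs, n_features = 20, 5                          # ~20 filtered configurations
bandit = LinUCB(n_configs, n_features, alpha=0.5)
hidden = rng.normal(size=(n_configs, n_features))      # stand-in "true" speedup behavior

for step in range(500):
    context = rng.normal(size=n_features)              # instance features (placeholder)
    arm = bandit.choose(context)
    reward = hidden[arm] @ context + rng.normal(scale=0.1)  # simulated speedup signal
    bandit.update(arm, context, reward)
```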

In the future, the researchers want to apply this methodology to even more complex MILP problems and to better interpret the learned model, to understand the effectiveness of different separator algorithms. This research is supported by MathWorks, the National Science Foundation, the MIT Amazon Science Hub, and MIT’s Research Support Committee.
