
Machine learning (ML) models are increasingly used by organizations to allocate scarce resources or opportunities, such as screening job applicants or determining priority for kidney transplant patients. To avoid bias in a model's predictions, users may adjust the data features or calibrate the model's scores to ensure fairness. However, researchers at MIT and Northeastern University argue that these methods are insufficient to address inherent uncertainties and structural injustices. In a recently published paper, they propose introducing structured randomization into an ML model's decision-making to improve fairness.

This approach is especially beneficial when ML models make decisions under uncertainty, or when the same groups consistently receive negative outcomes. By integrating a controlled amount of randomization through a weighted lottery, the method can increase fairness without degrading a model's efficiency or accuracy.

The researchers illustrated the concept with an example of allocating job interview opportunities. If multiple firms use the same ML model to rank candidates, a worthy individual might always be ranked at the bottom because the model's ranking is deterministic. Introducing randomization into the system prevents a deserving individual from always being denied an opportunity.
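To make the contrast concrete, the sketch below (not drawn from the paper; the candidate scores, the number of slots, and the score-proportional weighting are assumptions for illustration) compares a deterministic top-k selection, in which every firm using the same model picks the same candidates, with a weighted lottery, in which a candidate's chance of a slot scales with their score:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model scores for 10 candidates competing for 3 interview slots.
scores = np.array([0.91, 0.88, 0.87, 0.85, 0.84, 0.83, 0.80, 0.78, 0.75, 0.70])

# Deterministic allocation: every firm using the same model selects the same
# top 3 candidates, so candidates ranked 4-10 are never interviewed anywhere.
deterministic_picks = np.argsort(scores)[::-1][:3]

# Weighted lottery: each candidate's chance of a slot is proportional to their
# score, so lower-ranked but still qualified candidates are sometimes chosen.
probabilities = scores / scores.sum()
lottery_picks = rng.choice(len(scores), size=3, replace=False, p=probabilities)

print("Deterministic picks:", deterministic_picks)
print("Weighted lottery picks:", lottery_picks)
```

Under the deterministic rule the outcome is identical for every firm, whereas the lottery spreads opportunities across candidates whose scores are close together.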

The research builds on a previous paper showing that harms can occur when deterministic systems are used at scale. Such systems can amplify existing inequalities embedded in training data, reinforcing bias and systemic inequality.

According to the researchers, a deterministic allocation that always grants a resource to whoever has the marginally stronger claim can lead to systemic exclusion or exacerbate patterned inequality, in which receiving one allocation increases a group's likelihood of receiving future allocations. Randomization can mitigate these issues. However, not all decisions should be randomized equally.

The researchers propose structured randomization, in which the amount of randomness depends on the level of uncertainty in the model's decision-making: decisions with higher uncertainty should incorporate more randomization. Using this method, they showed that calibrated randomization can yield fairer outcomes without significantly reducing a model's overall effectiveness.
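One simple way to realize this idea, sketched below under our own assumptions rather than as the authors' method, is to perturb each score with noise whose scale is tied to the model's uncertainty estimate for that score, then rank on the perturbed scores. The names, scores, and uncertainty values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def randomized_score(score, uncertainty, rng):
    """Perturb a score with noise whose scale grows with the model's
    uncertainty about that score: confident scores are left nearly
    untouched, uncertain scores are shuffled more aggressively."""
    return score + rng.normal(loc=0.0, scale=uncertainty)

# Hypothetical candidates: (name, point estimate, uncertainty of the estimate).
candidates = [("A", 0.90, 0.01), ("B", 0.88, 0.10),
              ("C", 0.86, 0.10), ("D", 0.60, 0.02)]

perturbed = [(name, randomized_score(s, u, rng)) for name, s, u in candidates]
ranking = sorted(perturbed, key=lambda item: item[1], reverse=True)
print(ranking)
```

Because A and D have low uncertainty, their positions rarely change, while B and C, whose scores the model cannot confidently distinguish, trade places across repeated draws.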

The research team also noted that some settings, such as criminal justice contexts, might not benefit from randomization. They plan to study other potential use cases and to investigate how randomization can influence other factors, such as competition or prices, and enhance the robustness of ML models.
