
Understanding Bias in AI: Its Causes and Implications | An Article by Stefan Kojouharov | March 2024

The inevitability of bias in LLMs, or large language models, is a topic Stefan Kojouharov discusses on his Substack. He introduces the concept of bias by relating it to the human condition and attention. Humans, he says, have a limited amount of attention, which we allocate according to our values; we simply do not have the time or resources to evaluate every object in our environment. We therefore filter the world through our values, which breeds familiarity with, and preference for, what we already value. This, in turn, results in bias.

Bias operates a priori: it takes hold before we consciously direct our attention to anything. Being aware of this can offer important insights into human cognition and decision-making, a point Kojouharov illustrates with an embedded video of a passing drill by players in white. Understanding how bias forms is crucial not only for human psychology but also for artificial intelligence (AI), particularly systems built on LLMs.

AI systems are, in essence, trained to sift through and learn from huge datasets. Bias in AI arises when those datasets reflect biased human decisions: a system trained on such data absorbs and reproduces the biases baked into it. For instance, if the images used to teach an AI about human appearance mostly depict people of a certain race, the system's understanding of what people look like will be skewed accordingly.
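To make that mechanism concrete, here is a minimal Python sketch (not from Kojouharov's article): a deliberately skewed dataset and a trivial frequency-based "model" that simply memorizes what it saw. The group labels, counts, and model are hypothetical toys chosen only to show how skew in the data carries straight through to the model's view of the world.

```python
from collections import Counter

# Hypothetical, deliberately skewed training set: 90% of the example
# "face" records come from group A, only 10% from group B.
training_data = ["group_A"] * 900 + ["group_B"] * 100

# A trivial "model" that simply learns the empirical frequencies it saw.
counts = Counter(training_data)
total = sum(counts.values())
learned_view = {group: count / total for group, count in counts.items()}

print(learned_view)
# {'group_A': 0.9, 'group_B': 0.1}
# The model's picture of "what people look like" mirrors the skew in its
# data: it reproduces the imbalance rather than correcting for it.
```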

Even if we could, in theory, expose an AI to every piece of information in the world to avoid this bias, doing so would demand an unrealistic amount of resources. Like humans, AI systems work with a limited pool of attention: they cannot process every detail, so they filter based on what they have been trained to value or prioritize.
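As a rough illustration of that filtering (again, not from the article), the sketch below scores incoming details with hypothetical learned weights and keeps only a small "attention budget" of the top-scoring ones. The detail names, scores, and budget are assumptions made purely for illustration.

```python
import math

def softmax(scores):
    """Turn raw scores into weights that sum to one."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical details in the input, and hypothetical relevance scores the
# system has learned to assign to them during training.
details = ["detail_A", "detail_B", "detail_C", "detail_D", "detail_E"]
learned_scores = [2.0, 0.1, 1.5, -0.5, 0.3]

weights = softmax(learned_scores)

# With a limited attention budget, only the top-k weighted details are
# processed; everything else is filtered out before it is ever examined.
budget = 2
kept = sorted(zip(details, weights), key=lambda pair: pair[1], reverse=True)[:budget]
print(kept)  # detail_A and detail_C win; the rest never get looked at
```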

In conclusion, the problem of bias in LLMs is closely tied to the limits of our attention. Filtering based on values, whether in human cognition or in AI systems, inevitably produces bias. Understanding how and why it occurs is a key step toward navigating it well, both in our personal interactions and in the systems we design. This unavoidable bias underscores the need for diverse datasets and careful scrutiny in AI development to mitigate its potential drawbacks.
