The Invisible Gorilla: How Human Bias Shapes the Artificial Intelligence We Create
Introduction
In a famous psychological experiment, participants watched a video of people passing a basketball and were asked to count the passes made by one team. During the video, a person in a gorilla suit walked through the scene, paused, beat their chest, and left. Astonishingly, nearly half of the viewers failed to notice the gorilla at all. This phenomenon, known as the "invisible gorilla," has become a powerful symbol of selective attention: our tendency to miss obvious information when we are focused on something else.
What started as a cognitive experiment now holds significant implications for technology, especially the development of artificial intelligence (AI).
https://www.vox.com/future-perfect/2023/3/29/23659874/ai-existential-risk-alignment-chatgpt-openphil
http://www.youtube.com/watch?v=v85t9HGGcMo
What Is Human Bias and Why Does It Matter?
Cognitive biases are mental shortcuts that help us make decisions quickly. While often useful, they can also lead to systematic errors. Some of the most common include:
· Confirmation bias: favoring information that confirms existing beliefs
· Selective attention: focusing on one element and ignoring others
· Halo effect: letting an overall impression influence specific judgments
These biases shape not only how we perceive the world but also how we collect, interpret, and act on information.
https://www.independent.co.uk/tech/ai-destroy-humanity-chatgpt-bard-b2447684.html
How Does This Bias Transfer to AI?
AI systems are not inherently biased. They learn from data, and that data comes from humans. If the training data contains human biases (which it often does), the AI learns and replicates them.
For example:
· A hiring algorithm may favor certain genders or schools if historical data is biased
· Facial recognition software might perform poorly on darker skin tones if not trained with diverse samples
· A financial model could overemphasize particular markets if the data reflects biased market assumptions
Like the viewers in the gorilla experiment, the AI may "miss" key information, because we missed it ourselves when we fed the system its training data.
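To make the hiring example concrete, here is a minimal sketch, using invented synthetic data and scikit-learn's LogisticRegression, of how a model can inherit bias purely from its labels. The numbers, feature names, and the 0.8 "historical penalty" are assumptions chosen only to illustrate the mechanism, not to model any real system.

# Illustrative sketch only: a toy "hiring" model trained on synthetic,
# deliberately skewed historical decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Feature 0: candidate skill score; feature 1: group membership (0 or 1).
skill = rng.normal(0.0, 1.0, n)
group = rng.integers(0, 2, n)

# Historical hiring decisions: equally skilled candidates from group 1
# were hired less often. The bias lives in the labels, not in the code.
hired = (skill - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two candidates with identical skill but different group membership
# receive different predicted hiring probabilities.
probe = np.array([[0.5, 0.0], [0.5, 1.0]])
print(model.predict_proba(probe)[:, 1])  # group 0 scores higher than group 1

Nothing in the model code singles out either group; the disparity arrives entirely through the historical decisions the model is asked to imitate.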
http://www.youtube.com/watch?v=SpYyV1XvNDg
The Real Risk: Automated Decisions Based on Biased Perceptions
In today's world, AI supports or even makes crucial decisions in finance, healthcare, law, and more. Bias in these systems can lead to:
· Misdiagnosed patients due to unbalanced clinical data
· Legal recommendations skewed by limited case types
· Credit denials rooted in historic inequalities
AI doesn't just amplify our strengths; it also mirrors our weaknesses.
https://hai.stanford.edu/research/alignment-problem
Towards a More Responsible AI
To address these issues, we must:
1. Design with diversity: Ensure diverse, multidisciplinary teams build AI systems
2. Audit the data: Continually evaluate and clean training datasets (a minimal audit sketch follows this list)
3. Educate for awareness: Teach bias literacy across industries
4. Promote transparency: Make models explainable and interpretable
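As one illustration of step 2, the sketch below compares outcome rates across groups in the raw data before any model is trained. The column names "group" and "label" and the toy numbers are assumptions made for the example, not a prescribed schema; a large gap is a prompt to investigate how the data was collected and labeled, not proof of a problem on its own.

# Minimal data-audit sketch: share of positive labels within each group.
import pandas as pd

def outcome_rates_by_group(df, group_col="group", label_col="label"):
    # Fraction of rows with a positive label, computed per group.
    return df.groupby(group_col)[label_col].mean()

# Toy data with invented values.
df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "label": [1, 1, 0, 1, 0, 0],
})
rates = outcome_rates_by_group(df)
print(rates)                                  # A: ~0.67, B: ~0.33
print("parity gap:", rates.max() - rates.min())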
http://www.youtube.com/watch?v=Za4un-2Vx9M
Conclusion: Seeing the Gorilla in the Age of AI
The invisible gorilla teaches us a crucial lesson: just because something isn't seen doesn't mean it isn't there. In artificial intelligence, we must pay attention not only to what systems can do but also to how and why they do it.
Understanding human bias is a foundational step toward building fairer, more inclusive, and more responsible AI technologies.






