When people talk about bias in AI, they often mean different things. Here we focus on social bias: systematic unfairness toward certain groups of people.
Where bias enters
Training data. If a face recognition system is trained mostly on light-skinned faces, it will typically perform worse on dark-skinned faces, simply because it has seen far fewer examples of them.
Label bias. If the people labeling data have systematic prejudices, those prejudices are encoded in the labels, and the model learns to reproduce them.
Feedback loops. A biased recommendation system surfaces certain content more, which generates more engagement data for that content, which in turn reinforces the bias; the sketch below shows how a small early skew can lock in.
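To make the loop concrete, here is a toy simulation, not drawn from any real system; the groups, rates, and counts are all illustrative. Two content groups have identical true quality, but group B's early click log looks slightly worse by chance, and a greedy recommender that trusts its history never gives B the exposure needed to correct the record.

    import random

    random.seed(0)

    TRUE_RATE = 0.10                  # both groups have identical true quality
    shows = {"A": 100, "B": 100}      # historical impressions
    clicks = {"A": 10, "B": 8}        # group B's early log looks worse by chance

    for _ in range(10_000):
        # Surface whichever group the (skewed) history favors.
        observed = {g: clicks[g] / shows[g] for g in shows}
        g = max(observed, key=observed.get)
        shows[g] += 1
        # New data comes only from what was shown, so the skew feeds itself.
        if random.random() < TRUE_RATE:
            clicks[g] += 1

    for g in shows:
        print(f"group {g}: {shows[g] - 100} new impressions, "
              f"observed rate {clicks[g] / shows[g]:.3f}")

Nothing about group B's content is worse; the system simply never collects the data that would correct its first impression.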
What you can do
Audit your training data. Measure how well each group the system will serve is represented, and check whether labels were applied consistently across groups.
Evaluate disaggregated metrics. Report accuracy and error rates per group rather than a single aggregate; a strong overall number can hide a subgroup the model fails badly, as the sketch below shows.
Use fairness-aware training objectives. Add a term to the loss that penalizes disparities between groups, rather than optimizing average performance alone.
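A minimal sketch of the last two ideas in plain Python. The function names, toy labels, and group tags are hypothetical; real projects would reach for a library such as scikit-learn or Fairlearn, but the underlying arithmetic is no more than this:

    from collections import defaultdict

    def disaggregated_metrics(y_true, y_pred, groups):
        """Accuracy and false positive rate computed separately per group."""
        stats = defaultdict(lambda: {"n": 0, "correct": 0, "fp": 0, "neg": 0})
        for yt, yp, g in zip(y_true, y_pred, groups):
            s = stats[g]
            s["n"] += 1
            s["correct"] += int(yt == yp)
            if yt == 0:                   # actual negative
                s["neg"] += 1
                s["fp"] += int(yp == 1)   # predicted positive: a false positive
        return {
            g: {
                "accuracy": s["correct"] / s["n"],
                "fpr": s["fp"] / s["neg"] if s["neg"] else float("nan"),
            }
            for g, s in stats.items()
        }

    def demographic_parity_penalty(scores, groups):
        """Sum of gaps between each group's mean score and the overall mean.

        Added to the task loss (loss = task_loss + lam * penalty), this is
        one simple form of a fairness-aware objective.
        """
        overall = sum(scores) / len(scores)
        by_group = defaultdict(list)
        for s, g in zip(scores, groups):
            by_group[g].append(s)
        return sum(abs(sum(v) / len(v) - overall) for v in by_group.values())

    # Toy data: overall accuracy is 75%, which hides group "b" at 50%.
    y_true = [1, 0, 1, 0, 1, 0, 1, 0]
    y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    print(disaggregated_metrics(y_true, y_pred, groups))

The audit step resists a one-liner: it is mostly about comparing group counts and label distributions in your dataset against the population the system is meant to serve.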