
Impacts of Artificial Intelligence (AI) on People of African Descent
By Teboho Mosebo / GICJ
The impact of Artificial Intelligence (AI) on People of African Descent is a pressing concern. United Nations Human Rights Chief Volker Türk noted in his vision statement 'Human Rights: A path for solutions' that while generative AI offers unprecedented opportunities, its negative societal impacts are already widespread.
Challenges and Concerns
Recent advancements in generative AI and its increasing application raise significant human rights concerns. As AI increasingly influences critical aspects of modern life, it also perpetuates stereotypes and exacerbates racial disparities. This occurs largely because people of African descent are underrepresented or misrepresented in the datasets that inform AI systems.
Ongoing global dialogues aim to address the far-reaching impacts of Artificial Intelligence. From 14-17 April 2025, the United Nations Permanent Forum on People of African Descent held its fourth session, focusing on reparations and AI challenges, with the theme 'Africa and People of African Descent: United for Reparatory Justice in the Age of AI'.
During the 56th Session of the Human Rights Council, Ms. Ashwini K.P., the UN Special Rapporteur on Contemporary Forms of Racism, Racial Discrimination, Xenophobia, and Related Intolerance, presented her report (A/HRC/56/68). The report highlighted key challenges and impacts of AI, including data issues: the datasets used to train algorithms are often incomplete and underrepresent certain groups, particularly people of African descent.
The lack of diversity in the digital technology sector is further compounded by the absence of inclusive consultation processes during AI system development. Reports indicate that law enforcement agencies are using AI in ways that perpetuate racial discrimination, such as targeted surveillance and over-policing. Furthermore, variables like socio-economic background, education, and location can serve as proxies for race in AI systems, thereby perpetuating historical biases [1].
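To illustrate the proxy problem described above, the short sketch below shows how a variable that never mentions race can still encode it. It is an illustrative sketch only: the dataset, the column names (neighbourhood, race, loan_denied), and the figures are hypothetical, not drawn from any report cited here.

```python
# Illustrative sketch only: shows how a seemingly neutral variable
# (a hypothetical "neighbourhood" column) can act as a proxy for race.
# All data, column names, and figures are invented for illustration.
import pandas as pd

# Hypothetical historical records: each row is a past decision.
records = pd.DataFrame({
    "neighbourhood": ["A", "A", "A", "B", "B", "B", "B", "A"],
    "race":          ["Black", "Black", "Black", "White",
                      "White", "White", "Black", "Black"],
    "loan_denied":   [1, 1, 0, 0, 0, 0, 1, 1],
})

# How strongly does neighbourhood alone reveal race?
proxy_strength = (
    records.groupby("neighbourhood")["race"]
    .apply(lambda s: s.value_counts(normalize=True).max())
)
print(proxy_strength)  # values close to 1.0 mean the variable nearly reveals race

# How do outcomes differ once decisions track that proxy?
denial_rate_by_race = records.groupby("race")["loan_denied"].mean()
print(denial_rate_by_race)  # a model trained on "neighbourhood" can reproduce this gap
```

In this invented example, knowing the neighbourhood almost always reveals the person's race, so a model that "only" uses location can still reproduce racially skewed outcomes from the historical data.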
Expert Insights
Mutale Nkonde, CEO of AI for the People, highlighted concerns about facial recognition systems. These systems, commonly used at borders and in law enforcement, have been shown to misidentify people with darker skin tones more frequently. The implications for migration and asylum processes are significant.
Nkonde also pointed out issues with speech recognition, where AI often flags African American Vernacular English as toxic, leading to biased content moderation outcomes. Consequently, Black creators face reduced visibility and engagement, limiting their ability to reach their target audiences. This systemic bias effectively marginalizes Black social media users from the global content creation economy, valued at approximately $250 billion according to Goldman Sachs estimates [2].
Case Studies
The implications of AI biases are far-reaching. In Brazil's state of Ceará, the police department's facial recognition system came under fire in 2022 after the photo of Hollywood actor Michael B. Jordan appeared on the police's wanted list as a suspect in a mass shooting that left five people dead on Christmas Eve 2021. The police department's software failed to properly distinguish between Black faces, leading to the misidentification of the Hollywood star [3].
In the USA, law enforcement agencies often utilise facial recognition algorithms developed by companies like Amazon, Clearview AI, and Microsoft. Federal testing has revealed that most facial recognition algorithms perform poorly when identifying individuals who are not white men. Civil rights advocates caution that the technology's difficulty in distinguishing darker faces could exacerbate racial profiling and increase false arrests [4].
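Disparities of this kind are typically surfaced by disaggregating error rates by demographic group. The sketch below is a minimal, hypothetical illustration of such an audit; the records and group labels are invented and are not the federal test results cited above.

```python
# Minimal, hypothetical sketch of a disaggregated audit of a face-matching system.
# The records and demographic labels below are invented for illustration;
# they are not the federal test results referenced in the article.
from collections import defaultdict

# Each record: (demographic_group, ground_truth_is_match, system_said_match)
results = [
    ("Group A", False, True),   # a false match
    ("Group A", False, False),
    ("Group A", True,  True),
    ("Group B", False, False),
    ("Group B", False, False),
    ("Group B", True,  True),
]

false_matches = defaultdict(int)
non_matches = defaultdict(int)

for group, is_match, predicted_match in results:
    if not is_match:            # only non-matching pairs can produce false matches
        non_matches[group] += 1
        if predicted_match:
            false_matches[group] += 1

for group in sorted(non_matches):
    rate = false_matches[group] / non_matches[group]
    print(f"{group}: false-match rate = {rate:.2f}")
# Large gaps between groups are the kind of disparity auditors and regulators look for.
```

Reporting error rates per group, rather than a single overall accuracy figure, is what allows auditors to see that a system performing well "on average" may still fail disproportionately for people with darker skin tones.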
Solutions
To address these challenges, it is crucial to adopt inclusive AI development practices and to implement the recommendations of UN mechanisms. For example, the Committee on the Elimination of Racial Discrimination's General Recommendation No. 36 emphasises the importance of preventing racial disparities in AI applications and underscores the need for transparency, accountability, and human rights due diligence to mitigate the adverse impacts of algorithmic bias. In addition, several UN mechanisms have highlighted the six principles of a human rights-based approach to data, namely participation, data disaggregation, self-identification, transparency, privacy, and accountability, to ensure that data are used ethically, effectively, and equitably.
By adopting these solutions, we can work towards creating more equitable AI systems that respect the rights and dignity of People of African Descent.