How Technology Represents Us
Introduction
- Artificial intelligence technology uses algorithms to make predictions about people.
- Of concern, however, is the way these technologies reinforce existing social biases by discriminating on the basis of race, gender, and religion through the way they are designed.
- The technology has, for example, been embedded in personal digital assistants (PDAs) such as Alexa and Siri.
Thesis: There is significant unconscious bias in the development of machine learning algorithms, which further entrenches unfavourable social stereotypes.
Theoretical framework
- According to Benjamin, technological tools such as algorithms are designed with deep discrimination while appearing to be benevolent and neutral (Benjamin, 2019). Benjamin argues that this discrimination further entrenches social divisions rather than fixing them.
The biases, values, and ideas embedded in this technology
Health sector bias
- Machine learning algorithms in the health sector perpetuate unconscious bias in the way they predict the healthcare risks and financial capabilities of people from different racial and ethnic backgrounds. For example, because such algorithms use past healthcare spending as a proxy for medical need, black patients' risk is underestimated: they generally spend less money on healthcare for the same level of illness. A toy sketch of this proxy effect follows.
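A minimal sketch of the proxy effect, with invented numbers rather than figures from any cited study: when a score is trained to predict cost instead of illness, two equally sick patients receive different risk scores if one group spends less on care.

```python
# Toy illustration with invented numbers: using healthcare COST as a
# proxy for healthcare NEED underestimates risk for a group that spends
# less per unit of illness (e.g. because of unequal access to care).

def cost_based_risk_score(past_spending):
    """Stand-in 'risk score' that is just past spending, rescaled."""
    return past_spending / 1000.0

# Two hypothetical patients with the SAME underlying illness burden.
true_need = 5.0             # identical medical need (arbitrary units)
spending_white = 6000.0     # spends more for that level of illness
spending_black = 4000.0     # spends less for the same level of illness

score_white = cost_based_risk_score(spending_white)  # 6.0
score_black = cost_based_risk_score(spending_black)  # 4.0

# The score ranks the black patient as lower risk despite identical need,
# so they are less likely to be referred to extra-care programmes that
# use the score as a gatekeeper.
print(f"true need (both patients): {true_need}")
print(f"score, white patient: {score_white}")
print(f"score, black patient: {score_black}")
```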
Search engine findings
- Google's search algorithms were criticized for returning pornographic images when one searched "Black girls" but not when one searched "white girls": the Black girls appeared naked while the white girls appeared dressed.
Speech recognition bias
- The voices used in assistants such as Alexa and Siri are female characters speaking Midland American English (Romano, 2019). This leaves out people from ethnic minorities with strong accents and people from other cultures, whose speech these systems recognize less reliably; a sketch of how such recognition gaps are measured follows.
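One way to make this concrete is to compare word error rate (WER) across accent groups. The sketch below assumes we already have reference transcripts paired with recognizer output; the sample data is invented for illustration, not taken from any cited study.

```python
# Sketch: measuring how a speech recognizer performs across accent groups
# by computing word error rate (WER) per group. All data below is invented;
# a real audit would use actual system transcriptions.

def edit_distance(ref, hyp):
    """Word-level Levenshtein distance between two token lists."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)]

def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / max(len(ref), 1)

# (reference transcript, recognizer output) pairs, grouped by accent.
samples = {
    "midland_american": [("turn on the lights", "turn on the lights")],
    "strong_regional":  [("turn on the lights", "turn of the light")],
}

for accent, pairs in samples.items():
    avg = sum(wer(r, h) for r, h in pairs) / len(pairs)
    print(f"{accent}: WER = {avg:.2f}")
```

A systematically higher WER for one group is exactly the kind of "leaving out" the bullet above describes.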
Gender-based bias
- Female characters are mostly used in PDAs, reinforcing the idea that women are more obliging and eager to please than men. Everything, including insults, is met with agreeable responses such as "Ok" and "thank you".
Racial discrimination in content moderation and job searches
- Posts by African Americans or Latinos were one and a half times more likely to be flagged by Facebook's algorithms than posts by whites (Hsu, 2019).
- Artificial intelligence is increasingly relied on in workplace job searches (Benjamin, 2019). However, the computer algorithms behind recruitment apps and software rate candidates from minority groups more negatively than white prospects. A sketch of how such rate disparities are quantified follows this list.
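Disparities like the 1.5x flagging gap can be expressed as a simple rate ratio against a baseline group. The counts below are invented to reproduce that ratio; a real audit would use logged moderation or screening decisions.

```python
# Sketch: expressing moderation or screening disparities as a flag-rate
# ratio. All counts are invented; only the calculation is the point.

# Hypothetical counts: posts flagged vs posts made, per group.
flagged = {"african_american": 150, "latino": 145, "white": 100}
posted  = {"african_american": 1000, "latino": 1000, "white": 1000}

baseline = flagged["white"] / posted["white"]

for group in flagged:
    rate = flagged[group] / posted[group]
    ratio = rate / baseline
    # A ratio near 1.5 matches the disparity cited above. Auditors often
    # compare such ratios against the "four-fifths rule" heuristic from
    # US employment law, under which large gaps in outcome rates are
    # treated as evidence of disparate impact.
    print(f"{group}: flag rate {rate:.1%}, ratio vs. white = {ratio:.2f}")
```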
Racial discrimination in law enforcement
- In one study, an algorithm associated 60% of black-sounding names with terms such as "criminal", compared with 48% of white-sounding names.
- AI is used to predict crime and to advise which neighbourhoods police officers should be deployed to (Kochi, 2018). A toy simulation of the feedback loop this can create follows this list.
- This reduces the chances of fair treatment in law enforcement and in job searches, where AI is extensively used.
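The concern in the bullets above can be made concrete with a toy simulation of a feedback loop that researchers have documented in predictive policing: this mechanism is background knowledge rather than a claim from the cited sources, and every number below is invented.

```python
# Toy simulation of a "runaway feedback" loop in predictive policing:
# patrols go where past RECORDED crime is high, patrolling records more
# incidents, and the next forecast rises further. The two districts below
# have IDENTICAL true crime rates; all numbers are invented.

true_crime = {"A": 10.0, "B": 10.0}   # identical underlying crime
recorded   = {"A": 6.0,  "B": 5.0}    # district A starts over-recorded

for step in range(5):
    # Deploy patrols to the district with the highest recorded crime.
    target = max(recorded, key=recorded.get)
    # Next period's record: a reporting baseline plus patrol detections.
    recorded = {d: 0.3 * true_crime[d] + (7.0 if d == target else 0.0)
                for d in recorded}
    print(f"step {step}: patrols -> district {target}, recorded = {recorded}")
```

District A attracts every patrol in every round despite identical underlying crime, illustrating how an early recording disparity can lock in unfair treatment.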
Religious discrimination
- Artificial intelligence has been controversially associated with depicting sensitive images and phrases in ways that reflect the Christian majority its developers belong to rather than other faiths (Cecco, 2019).
- People from Muslim-majority countries are predicted to be associated with terrorism, which further infringes on their human rights.
Conclusion
There is a need to develop artificial intelligence and machine learning with the needs of the wider population in mind, so as to heal divisions and eliminate negative social stereotypes.
References
Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Polity.
Cecco, L. (2019, September 27). Toronto van attack suspect says he was 'radicalized' online by 'incels'. The Guardian. https://www.theguardian.com/world/2019/sep/27/alek-minassian-toronto-van-attack-interview-incels
Hsu, T. (2019, June 17). These Influencers Aren't Flesh and Blood, Yet Millions Follow Them. The New York Times. https://www.nytimes.com/2019/06/17/business/media/miquela-virtual-influencer.html
Kochi, E. (2018, March 15). AI is already learning how to discriminate. Quartz at Work. https://qz.com/work/1227982/ai-and-discrimination-what-tech-companies-can-do/
Romano, A. (2019, October 10). A group of YouTubers is claiming the site systematically demonetizes queer content. Vox. https://www.vox.com/culture/2019/10/10/20893258/youtube-lgbtq-censorship-demonetization-nerd-city-algorithm-report