
Artificial Intelligence Bias

Updated: Apr 16, 2021

“Hey Siri, you’re a bitch.”

“I’d blush if I could.”


The UNESCO report “I'd Blush if I Could,” published on May 22, 2019, criticized AI voice assistants for gender bias: most of them use female voices that respond to abuse in submissive, deferential ways, reflecting the gender biases built into these products' logic. Although the software behind Siri has since been updated to answer the insult more flatly (“I don't know how to respond to that”), the episode brought the problem of prejudiced AI into public view.


In recent years, artificial intelligence (AI) has developed rapidly and is now widely used in daily life. Many people believe AI is fairer than humans because machines make decisions using only logic and numbers, without the emotions, preferences, or discrimination that make human judgment unpredictable. In fact, AI's bias can be more serious than a human's. People value AI for its efficiency, low cost, and broad applicability, but when AI is used at scale, the discrimination it carries is unintentionally magnified.






“While many tech companies have introduced a male voice to their system, the option is usually buried in settings and the male scripts are different from the default female ones, reports UNESCO: ‘The male versions tend to use more definitive quantifiers (one, five), while the female versions use more general quantifiers (a few, some), as well as more personal pronouns (I, you, she).’”

(Courtesy of UNESCO)


Here is one example of how bias enters the AI that different tech companies build. The UNESCO report identifies two reasons why communication rooted in gendered stereotypes persists: first, these industries are male-dominated, with far fewer women employees; second, gender stereotypes lead developers (predominantly men) to assume that users find female voices more appealing. The biases these researchers absorb from gender stereotypes are thus reflected in the behavior of their AI.





Moreover, Amazon built an experimental hiring tool that used AI to give job candidates scores ranging from one to five stars. In 2015, the company realized that the system was not rating candidates for software developer positions in a gender-neutral way. Amazon's computer models had been trained to screen applicants by finding patterns in resumes submitted to the company over a ten-year period, and most of those resumes came from men, reflecting the dominance of men across the tech industry. As a result, the AI tended to prefer male candidates over female ones. Amazon uncovered the problem and stopped developing the system. The episode shows how human bias determines whether AI works fairly or prejudicially: although AI cannot think or feel emotions, it excels at finding recurring patterns in data, and prejudice is among the most common patterns a machine picks up. The prejudice in an AI system is simply an honest reflection of the prejudice that already exists in society.
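
To make that mechanism concrete, here is a minimal sketch of sample bias, not Amazon's actual system: a classifier trained on synthetic, historically skewed hiring data learns to penalize a resume keyword that merely correlates with gender. The data, the proxy feature, and the choice of a scikit-learn logistic regression are all illustrative assumptions.

```python
# A minimal, illustrative sketch of sample bias (synthetic data; not
# Amazon's actual system). A model trained on a skewed historical record
# learns to penalize a feature that merely correlates with gender.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Synthetic history: roughly 80% of past applicants were men.
is_female = rng.random(n) < 0.2
experience = rng.normal(5, 2, n)
womens_club = is_female.astype(float)  # keyword that acts as a gender proxy

# Past "hired" labels: driven by experience, with a penalty applied to
# women, i.e. the human bias baked into the historical record.
hired = (experience + rng.normal(0, 1, n) - 2.0 * is_female) > 5

X = np.column_stack([experience, womens_club])
model = LogisticRegression().fit(X, hired)

# Two candidates identical in experience, differing only in the proxy:
candidates = np.array([[6.0, 0.0],   # resume without the gendered keyword
                       [6.0, 1.0]])  # resume mentioning a women's club
print(model.predict_proba(candidates)[:, 1])
# The second probability comes out lower: the model reproduced the bias.
```

Note that the model is never told anyone's gender; it infers the historical preference from a correlated keyword. This is why simply deleting the gender field from the data does not make such a system neutral.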


Sample bias skews AI's calculations and produces biased results. This is not a new problem, but it needs to be addressed systematically. Although prejudice may seem ingrained in everything man-made, it can be overcome: similar machine-learning biases have been corrected before, and with effort and research, better AI can be developed with less prejudice and bias.


It depends on us.


If you are interested, here is a website where you can hear a genderless voice: click here.


Critical Essay written by Xinyan Li
