AI's Values and Ethics: The Questions We Aren't Asking
Artificial intelligence is going to change the world. And for the better.
Or at least that is the bold refrain many AI-based startups pitch in their marketing materials.
But will AI actually tilt the world towards more inclusivity and equality?
Right now the uses of AI often fall into the gimmicky category. It is easy to laugh at and dismiss as quixotic. But what happens if, or more accurately when, AI starts playing a role in more substantive ways? For example, when AI helps decide who gets extended a line of credit. Or who gets into which college, if into one at all. Or who is even offered a job.
Some critics view AI's lack of emotional capability as a setback, but Silicon Valley companies are positioning that detachment as a benefit. Mya Systems, an artificial intelligence company focused on automating the hiring process, created Mya, a chatbot that can interview and evaluate job candidates. Mya asks performance-based questions and doesn't see a candidate's name, gender, or photo during the evaluation process. The decision to hire is based solely on the interviewee's accomplishments and capabilities, nothing more. This could be a groundbreaking approach to meritocratic hiring and a potential solution to the tech industry's ongoing diversity struggle. Major companies like Twitter, Facebook, Apple, and Pinterest have come under fire for failing to diversify their workforces, and the criticism has led some, including Pinterest and Apple, to bring diversity heads onboard in recent years. However, it may be difficult for AI to fix the industry's diversity problem if it has one of its own.
There are many subjective elements to whether a candidate will do well in a certain role, within a certain team, at a certain company, and many of those factors are out of the candidate's hands altogether. But are these variables quantifiable? And if so, who decides which ones matter and how much weight each should carry in the "algorithm"?
Bluntly, there is a lack of representation behind the scenes in AI. The developers, product managers, and leadership at many companies building AI all look rather alike if you squint. Where is the diversity among this group, the architects of this future, in terms of gender and race? And there isn't much variance in age, sexual orientation, or socioeconomic status either.
Even if only a fraction of AI's grand and lofty proclamations come to fruition, AI will still have a major impact on human lives across the world. There will certainly be positives, but what unforeseen negatives will be caused by code from developers who represent a narrow subset of humanity?
A recent example of a negative outcome is one Apple faced when it first released its Health app in 2014. Health was touted as an inclusive app where users could log an exhaustive range of health-related activities, from tracking their footsteps to logging their sodium consumption. The app also allowed various other health-based apps to "talk" to each other, in an effort to present a full picture of the user's overall health. But critics quickly pointed out that women couldn't use the app to track menstruation. If there had been more female developers on the team, would this oversight have occurred? It is hard to imagine so.
Earlier this year, a pair of Stanford University researchers created an AI that could study dating website photos and determine a subject's sexuality. The researchers sought to prove that sexuality is caused by exposure to certain hormones, which produce similar facial features in homosexual people; their study, in other words, would show that sexuality is inherent, not learned. Dubbed AI 'gaydar', the technology also caused controversy, as many feared it could be used to harm or out LGBTQ people in countries where homosexuality is illegal, despite its original scientific intent. There was also concern because all of the subjects used in the study were white. Was this a function of the race of the researchers?
And in 2016, Beauty.ai hosted an online beauty competition with more than 600,000 entries from all over the world. The contest raises the question of whether AI has any utility in this function at all, yet it still produced some concerning results. The AI singled out 44 people as the most attractive. Of those 44, only one had notably dark skin, and all but 6 of the finalists were white. While only 7% of the contestants were from India and Africa, that still leaves roughly 49,000 non-white people whom the AI didn't recognize as "beautiful". Was the AI's code and definition of "beauty" influenced by what the developers themselves defined as beautiful? And if so, would a more diverse group of engineers behind Beauty.ai's creation have recognized a wider swath of beauty across multiple cultures?
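A rough back-of-envelope check makes the disparity concrete. The sketch below uses only the figures cited above and assumes, purely for illustration, that a "blind" judge would pick finalists roughly in proportion to the entry pool:

```python
# Back-of-envelope check on the Beauty.ai figures cited above.
# Assumption (for illustration only): a judge with no skin-tone bias would
# select finalists roughly in proportion to the makeup of the entry pool.
total_entries = 600_000
finalists = 44
share_india_africa = 0.07  # cited share of contestants from India and Africa

expected_finalists = finalists * share_india_africa
print(f"Expected finalists from India/Africa: {expected_finalists:.1f}")
# prints "Expected finalists from India/Africa: 3.1"
print("Finalists with notably dark skin reported: 1")
```

Even under this crude proportionality assumption, the outcome (one notably dark-skinned finalist out of 44) falls short of the roughly three we would expect.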
It is easy to write this example off as silly, but what happens when AI is used in meaningful situations, such as deciding whether you should be released on parole?
Who's designing AI?
The Guardian reports that nearly 75% of all American engineers and scientists are white, and thus they often use white subjects in the development of their projects. Roughly 11% of scientists and engineers are African-American or Hispanic, yet they too feel pressure to use white faces in their AI experiments. Joy Buolamwini, a graduate researcher at the MIT Media Lab, admitted that the very technology she was working on didn't recognize her face. She suggested that biases have been written into AI's code, and thus the technology struggles to identify faces that aren't "normal".
Though these AI projects are designed to solve universal problems, they aren't informed by a universal view. This becomes a major issue when AI is used for broader and more impactful purposes.
In one instance, AI was used to identify criminals simply by viewing Chinese ID photos. It reported an 83% accuracy rate, which was considered a success. But without input from a more diverse group of engineers, or testing on ID photos of people from various backgrounds, how is the "criminal" profile created? How can we be sure it's fair?
A 2016 investigative report from ProPublica revealed that a popular AI used to predict recidivism rates was twice as likely to incorrectly flag black defendants as high risk. This is concerning considering that police departments are increasingly incorporating AI into their risk assessments.
However, the greatest concern isn't AI's shortcomings today but the potential for those shortcomings to snowball into greater bias: bias that is programmed, and thus systemic.
Furthermore, the lack of diversity in AI means machines are being programmed with incomplete data. Kriti Sharma, Vice President of Bots and AI at Sage, a cloud-based business software company, recently warned that technology should accurately reflect the way the world works, and even account for the ways in which news, work, and information channels fall short. AI can't achieve this, however, if more people aren't involved on the front end.
There are fears that AI could be misused in other crucial criminal justice matters and in medical care. And the perpetuation of bias and stereotypes in AI could create large knowledge gaps for those who learn and work primarily through modern technology.
Typically, an industry becomes successful first, and only then do we humans retroactively try to fix it and make it more inclusive. The stakes are high for AI as it becomes an integral part of all our lives and we increasingly defer to it.
We need to build AI tools with inclusivity and pluralism baked into their DNA from the outset.
AI's role in many industries is expanding rapidly, and if these problems aren't addressed now, we will see large repercussions develop down the road and magnify over time.
One of the most immediate answers to this problem is diverse hiring. It seems obvious, but it's easier said than done, because the same biases seeping into AI's code are present in the hiring decisions and company cultures throughout tech.
Mentorship of young children from diverse backgrounds will help steer tomorrow's problem solvers toward the tech world. Access to the best secondary education can ensure those problem solvers are armed with the knowledge they need to make a difference. And then industry-leading companies must make space for these engineers on their teams. This may mean redefining what an ideal candidate looks like, or evolving their company culture to be more progressive and inclusive, with room for many types of people to feel welcome and succeed.
Questions we may need to ask when the AI's risk warrants it:
Should this AI code be peer-reviewed?
Should a company's AI code be published on GitHub for anyone to examine?
Should this AI be regulated?
We'd love to hear what you think. Leave a message in the comments section below.
Looking Inwards at Simon Says
At Simon Says, we strongly believe in inclusivity and are mindful of these potential biases. For example, most of the world speaks a native language other than English, so we have sought to understand as many languages, dialects, and accents as possible. We now transcribe in 90 languages, which cover the vast majority of the world's population, and we will continue to bring on new languages when possible. This is just one of the steps we have taken.
Another important decision has been in hiring. We have an experienced, talented, and gender-diverse team that represents a broad range of cultures and perspectives. In our mission to help customers find meaningful dialogue, we are in a better position to do so with a team of diverse and unique experiences.
Questions or comments? Leave us a comment below or send us a message on Twitter.