Racial Equity Audit
RESOLVED: Shareholders urge the Board of Directors to commission a third-party, independent racial equity audit analyzing Alphabet Inc.’s adverse impacts on Black, Indigenous and People of Color (BIPOC) communities. Input from racial justice and civil rights organizations, employees, temporary vendors, and contractors should be considered in determining specific matters to be analyzed. A report on the audit, prepared at reasonable cost and omitting confidential and proprietary information, should be published on Alphabet’s website.
WHEREAS: The harmful and often deadly impacts of systemic racism on BIPOC communities are a major focus of policymakers, media, and the public. While Alphabet has made charitable contributions and statements of solidarity with communities of color, it must do more to address the significant adverse impacts of its policies, practices, and products on communities of color.
Several aspects of Alphabet’s business suggest a racial equity audit would help mitigate reputational, regulatory, legal, and human capital risk. Alphabet’s Google and YouTube have been implicated in perpetuating racism. The New York Times reported YouTube was “successfully weaponized by racists...to undermine Black Lives Matter.” Research shows “YouTube plays a key role in exposing young people to white supremacist ideology and anti-Muslim propaganda.”1
Google’s advertising practices have prompted boycotts by advertisers concerned about discrimination, causing the company to lose advertising revenue. In 2021, five U.S. Senators urged Alphabet to “conduct a racial equity audit...to make the company and its products safer for Black people,” saying “Google Search, its ad algorithm, and YouTube have all been found to perpetuate racist stereotypes and white nationalist viewpoints.”2
Shareholders are concerned with the potential adverse impact of Google’s artificial intelligence (AI) tools on communities of color. Researchers found that an AI tool developed to detect hate speech was up to twice as likely to identify tweets as offensive when they were written with African American Vernacular English (AAVE) or by African Americans.3 Dermatologists have warned that Google’s dermatology app could disproportionately misdiagnose people with dark skin.4 Research found that Google’s face detection technology is susceptible to a range of racial biases.5 Google’s Vision AI labeled a thermometer a “gun” when held by a person of color, but labeled a similar image an “electronic device” when held by a white person.6 Furthermore, there are concerns that Google’s technology may be used by the government to surveil immigrants of color.7
Executives at peer companies have affirmed the usefulness of racial equity audits,8 as have civil rights organizations.9
Despite these and other issues, Alphabet has allegedly retaliated against employees who flagged issues of discrimination.10 In 2020, nine lawmakers wrote to Alphabet with concerns after Google fired Dr. Timnit Gebru, co-lead of Google’s AI Ethics team, who led research on discriminatory technology. In 2021, employees told reporters11 that when they reported workplace racism, they were told to “assume good intent,” seek counseling, or take leave.
3 https://www.newscientist.com/article/2213064-googles-hate-speech-detecting-ai-appears-to-be-racially-biased/#ixzz771qKjsPa
4 https://www.vice.com/en/article/m7evmy/googles-new-dermatology-app-wasnt-designed-for-people-with-darker-skin
5 https://venturebeat.com/2021/09/03/bias-persists-in-face-detection-systems-from-amazon-microsoft-and-google/