AI Fairness, Accountability, and Transparency

Resolution Text

WHEREAS: Systemic social and racial inequities exist in the American healthcare system. Disparities in life expectancy, chronic disease prevalence, and access to care in minority communities are significant. The COVID-19 pandemic has illuminated these differences as Black, Latinx, and Indigenous communities experience higher rates of hospitalization and death.



Artificial intelligence and machine learning are increasingly applied in healthcare settings to direct care and allocate resources. These technologies offer novel benefits but also risk exacerbating the discrimination faced by marginalized groups. Algorithms reflect the patterns and implicit biases of the environments in which they are created. Disparities in access to and interaction with the healthcare system experienced by certain populations are likely to be reflected in algorithmic systems trained on historical assumptions and datasets1. Medical algorithms have been found to overlook patients of color or to work less accurately for them2.



Companies developing and using artificial intelligence, including in clinical settings, face pressure to ensure their products do not contribute to injustices. The United Nations3 and the World Health Organization4 urge transparency and accountability in the use of algorithmic decision-making. The UN Guiding Principles on Business and Human Rights are the globally authoritative framework on companies’ responsibility to respect human rights – of which racial equity is inextricable – throughout their value chains. Frameworks addressing transparency and fairness are emerging and being used to inform companies and regulators5. In response, leading companies, including Microsoft and Philips, have established principles for the responsible use of artificial intelligence and are taking action to uphold their commitments.



Cerner develops software utilizing artificial intelligence, including clinical decision support and data analytics tools. It acknowledges the potential risks described above but has not sufficiently demonstrated policies, processes, and governance structures to identify and address actual and potential impacts within its business activities. By proactively addressing algorithmic fairness, accountability, and transparency in its operations and products, Cerner can mitigate reputational, regulatory, and financial risk; strengthen trust with customers and community stakeholders; and contribute to a more equitable healthcare system.



RESOLVED: Shareholders request that Cerner publish a report assessing the racial equity impacts of the algorithmic systems used in its products and services. The report, prepared at reasonable cost and omitting proprietary information, should be published on the company’s website.

SUPPORTING STATEMENT: Proponents suggest that the report include information on:



• Governance structures to implement and oversee fair, accountable, and transparent artificial intelligence systems that align with guidance and delineations of racial equity and human rights as set out by the UN, FDA, and/or other authoritative organizations;

• Policies, programs, and/or processes, including use of external audits or other validation tools, to evaluate existing and future products and services for bias or discrimination throughout their lifecycles, above and beyond legal compliance;

• Remediation processes if biased or discriminatory outcomes or disparate impacts are identified; and

• Input from stakeholders, including clinical artificial intelligence experts, diverse patient populations, and other affected communities.

1 https://pubmed.ncbi.nlm.nih.gov/30128552/

2 https://www.wsj.com/articles/researchers-find-racial-bias-in-hospital-algorithm-11571941096?mod=article_relatedinline, https://www.statnews.com/2020/10/13/how-software-infuses-racism-into-us-health-care/

3 https://www.ohchr.org/EN/Issues/DigitalAge/Pages/cfi-digital-age.aspx

4 https://www.who.int/publications/i/item/9789240029200

5 https://www.chicagobooth.edu/research/center-for-applied-artificial-intelligence/research/algorithmic-bias, https://www.nist.gov/news-events/news/2021/06/nist-proposes-approach-reducing-risk-bias-artificial-intelligence, https://www.fda.gov/medical-devices/software-medical-device-samd/good-machine-learning-practice-medical-device-development-guiding-principles

Lead Filer

Rachel Nishimoto
Parnassus Investments