Resolved clause
Shareholders request the Board of Directors of Microsoft Corporation assess and issue a report within the next year, at reasonable cost and excluding confidential information, evaluating how it oversees reputational, operational, legal, and other risks related to GenAI bias against religion (including religious views) or political views, and whether such discrimination may impact customers’, users’, and others’ exercise of their constitutionally protected civil rights.
Supporting statement
Generative AI (GenAI) is revolutionizing how individuals access and process information. Marc Andreessen has said AI “is highly likely to be the control layer for everything in the world” and “what AI is allowed to say/generate will be even more important – by a lot – than the fight over social media censorship.”1 These implications are especially urgent given Microsoft’s scale, its integration of AI models like Copilot, Bing AI, and Azure OpenAI, and its central role in the marketplace and in government adoption of GenAI.
But many GenAI systems – including those Microsoft offers – rely on content moderation policies that prohibit so-called “misinformation” and “hate speech.” These vague and subjective terms invite censorship of legitimate views on contested but critical topics, including public health, religion, sexuality, and bioethics.2 The Viewpoint Diversity Score Business Index3 has documented the prevalence of such policies at nearly every major tech company.
Microsoft, for its part, prohibits its AI from generating “hate speech” or content that “insults, targets, or excludes individuals or groups” based on “gender identity, sexual orientation,” or “any other characteristic that is associated with systemic prejudice or marginalization.”4 This and similar policies stand in stark contrast to Microsoft’s commitment to design AI systems that “treat all people fairly” and are “inclusive,” and its company-wide commitment to advance “freedoms of opinions” and “expression,” including supporting the work of those who “engage in activities and advocacy that contribute to the protection of human rights and the rule of law, good governance, tolerance, and diversity and inclusion.”5
Meanwhile, powerful voices continue to advocate for more censorship. The World Economic Forum recently ranked “misinformation and disinformation,” particularly as amplified by AI, as the #1 short-term global risk, above inflation, climate change, and societal polarization.6 The European Union’s Digital Services Act and similar laws in over 20 other countries obligate or strongly encourage AI providers to suppress disfavored speech – pressuring companies like Microsoft to act as global censors.7
The fall of the Global Alliance for Responsible Media,8 Gemini’s debacle with Black Founding Fathers,9 and Microsoft’s own blowback from its AI screening the Bible10 and political content11 show that this heavy-handed censorship generates material reputational risk. It also raises serious legal risk because of concerns about illegal discrimination.
Microsoft should take meaningful steps to evaluate and mitigate the risk that its AI systems are marginalizing certain viewpoints. Doing so aligns with its own stated principles, supports user trust, and reinforces the foundational rights of expression and belief that are essential to a free and diverse society.