Resolved clause
Shareholders request that Meta Platforms, Inc. prepare a report, at reasonable cost and omitting proprietary or privileged information, detailing the company’s policies, practices, and effectiveness in combating hate on its platform(s) and services, specifically antisemitism, anti-LGBTQ+ and anti-disability hate. The report may evaluate the adequacy of moderation, enforcement, user protection, ad policies, and transparency efforts, with findings made publicly available within one year.
Supporting statement
We believe ADL’s annual survey highlights the need for Meta to address antisemitism, anti-LGBTQ+, and anti-disability hate to foster a safer online environment.1 Following the October 7, 2023, Hamas attack on Israel, antisemitism surged, and 41% of Jewish adults reported altering their online behavior to avoid being recognized as Jewish. LGBTQ+ individuals were the most harassed group surveyed, with physical threats doubling (6% to 14%) and severe harassment against transgender people rising from 30% to 45%. People with disabilities also faced increased harassment, with 45% reporting general harassment (up from 35%) and 31% experiencing severe harassment (up from 20%).
A detailed report on Meta’s efforts to combat hate would provide shareholders with critical insights into corporate policies designed to protect users from harm. Ineffective moderation may drive users to platforms with stronger protections and deter advertisers prioritizing brand safety, reducing engagement and revenue.
To secure long-term profitability and user trust in a competitive social media landscape, Meta must prioritize content moderation. At Meta’s discretion, the report may include, but is not limited to, the following areas:
• Expertise: Integration of experts on antisemitism, anti-LGBTQ+ hate, and anti-disability hate to enhance policies and staff training.
• Content Moderation, Advertising and Policies: Alignment with best practices to address hate, including removing support for terrorism and harmful conspiracy theories. In 2023, ADL and the Tech Transparency Project found that Facebook and Instagram were recommending antisemitic content, including Nazi propaganda, and continued to host some hate groups that violated policies.2
• Enforcement Mechanisms: Evaluate tools for detecting and removing antisemitic content and hate speech. A 2024 report from ADL’s Center for Technology and Society (CTS) found Facebook’s and Instagram’s reporting mechanisms fundamentally broken, failing to address antisemitic content effectively. In the 2023 CTS Holocaust Denial Report Card, Meta’s platforms scored a C-, trailing competitors like Twitch and YouTube.
• User Support: Enhance resources for users experiencing hate speech. In 2023, Meta’s platforms scored lower than competitors like Twitch, TikTok, and YouTube in supporting harassment targets.3
• Data Transparency: Unlike those of Reddit and YouTube, Meta’s transparency reports lack critical context, limiting insights into moderation efforts. Current APIs prevent independent researchers from auditing content such as comments and stories, as well as WhatsApp. Privacy protections in the Content Library hinder analysis of public figures’ activities. Meta should offer a comprehensive research API allowing privacy-protected access to random samples of public, private, and moderated content for independent auditing.
ADL’s findings highlight the urgent need for strong hate speech moderation. A comprehensive report would reinforce Meta’s commitment to user safety, protect advertiser trust, and safeguard against regulatory risk.