Meta (Facebook, Inc.) | Report on Generative Artificial Intelligence Misinformation and Disinformation at Meta

Status
Filed
AGM date
Previous AGM date
Proposal number
6
Resolution details
Company ticker
FB
Lead filer
Resolution ask
Report on or disclose
ESG theme
  • Social
ESG sub-theme
  • Digital rights
Type of vote
Shareholder proposal
Filer type
Shareholder
Company sector
Technology
Company HQ country
United States
Resolved clause
Resolved: Shareholders request the Board issue a report, at reasonable cost, omitting proprietary or legally privileged information, to be published within one year of the Annual Meeting and updated annually thereafter, assessing the risks to the Company’s operations and finances, and to public welfare, presented by the Company’s role in facilitating misinformation and disinformation disseminated or generated via generative Artificial Intelligence; what steps the Company plans to take to remediate those harms; and how it will measure the effectiveness of such efforts.
Whereas clause
Whereas: There is widespread concern that generative Artificial Intelligence (gAI), generated through Meta’s tools and disseminated across its platforms, threatens to amplify misinformation and disinformation globally, posing serious threats to the Company, human rights, and democratic processes. This is of particular concern as 2024 will feature critical elections in the United States, India, Mexico, and Russia.1
Sam Altman, a leading AI executive, said he is “particularly worried that these models could be used for large-scale disinformation.”2 Eurasia Group ranked gAI the third-highest political risk confronting the world, warning that new technologies “will be a gift to autocrats bent on undermining democracy abroad and stifling dissent at home.”3
With Meta’s recent development of gAI products, including conversational assistants and advertising tools, the Company is increasingly at risk from misinformation and disinformation generated through its own products. Meta recognizes this risk, stating these tools “have the potential to generate fictional responses or exacerbate stereotypes it may learn from its training data.”4
Meta must also address gAI misinformation and disinformation disseminated across its platforms. The Company has long struggled with effective content moderation, even before the introduction of gAI. In 2022, Meta promoted content questioning the validity of Brazil’s election.5 Meta was found to have played a “critical role” in the spread of false narratives that fomented the violence at the United States Capitol on January 6, 2021.6 And Meta failed to mitigate Russian operatives’ widespread disinformation campaign during the 2016 United States presidential election.7
While Meta has publicly acknowledged the risks of gAI and outlined some guardrails, it continues to prioritize gAI product development without addressing the existential risks posed by the technology. In November, Meta split up its team responsible for understanding and preventing harms associated with its AI technology.8
Legal experts believe content generated by Meta’s own technology is unlikely to be shielded by Section 230 of the Communications Decency Act, which has historically provided legal protection for third-party content posted on the Company’s platforms.
Shareholders are concerned that Meta incurs significant legal, financial, and reputational risk through its rapid development and deployment of gAI products and the dissemination of gAI-generated content across its platforms, absent parallel assessments of the threats these pose to the Company and society.
Supporting statement
1 https://time.com/6333288/tech-companies-ai-misinformation/
2 https://www.cnbc.com/2023/03/20/openai-ceo-sam-altman-says-hes-a-little-bit-scared-of-ai.html
3 https://www.eurasiagroup.net/issues/top-risks-2023
4 https://about.fb.com/news/2023/09/building-generative-ai-features-responsibly/
5 https://time.com/6333288/tech-companies-ai-misinformation/
6 https://www.propublica.org/article/facebook-hosted-surge-of-misinformation-and-insurrection-threats-in-months-leading-up-to-jan-6-attack-records-show
7 https://www.vox.com/policy-and-politics/2023/1/20/23559214/russia-2016-election-trolls-study-email-hack
8 https://www.theinformation.com/articles/meta-breaks-up-its-responsible-ai-team

How other organisations have declared their voting intentions

Organisation name: Comgest
Declared voting intentions: For
Rationale: https://www.comgest.com/-/media/comgest/esg-library/esg-en/2024-proxy-voting-pre-declaration.pdf

DISCLAIMER: By including a shareholder resolution or management proposal in this database, neither the PRI nor the sponsor of the resolution or proposal is seeking authority to act as proxy for any shareholder; shareholders should vote their proxies in accordance with their own policies and requirements.

Any voting recommendations set forth in the descriptions of the resolutions and management proposals included in this database are made by the sponsors of those resolutions and proposals, and do not represent the views of the PRI.

Information on the shareholder resolutions, management proposals and votes in this database has been obtained from sources that are believed to be reliable, but the PRI does not represent that it is accurate, complete, or up to date, including information relating to resolutions and management proposals, other signatories’ vote pre-declarations (including voting rationales), or the current status of a resolution or proposal. You should consult companies’ proxy statements for complete information on all matters to be voted on at a meeting.