Resolved clause
RESOLVED: Shareholders request that the Board of Directors publish a report, at reasonable cost and omitting proprietary or legally privileged information, evaluating whether Salesforce's development, marketing, and deployment of artificial intelligence (AI) technologies comply with the company's stated human-rights commitments and AI Ethical Use principles. The report should assess any material risks to the company arising from misalignment, including reputational, legal, and operational risks, and describe Board-level processes and oversight structures for identifying and addressing AI-related human-rights risks on an ongoing basis.
Supporting statement
Salesforce has long positioned itself as a global leader in responsible technology, equality, and human-rights stewardship.1 The company's Responsible AI & Technology commitments, Ethical Use Policy, Ethical Use Advisory Council, and participation in industry initiatives such as the UNESCO Global Business Council for the Ethics of AI reflect these principles and create obligations and reputational exposure that warrant Board-level oversight. However, recent developments raise concerns about whether Salesforce's AI deployment practices align with its stated commitments and whether existing safeguards sufficiently mitigate potential legal, reputational, and customer-trust risks. Compared to industry leaders, Salesforce lags in AI governance transparency and accountability. Our company relies primarily on its Office of Ethical and Humane Use and "consequence scanning" workshops, and it does not publish audits or impact assessments.2 Evaluations show that Microsoft and Google reinforce their AI principles with responsible-AI toolkits and annual transparency reporting.3 Experts agree that principles without transparency are insufficient for assessing real-world risk.4 Without adequate AI safeguards, companies remain exposed to legal, regulatory, and reputational risks.5 Shareholders are increasingly attentive to AI-related risks.
Support for AI proposals in 2025 has surpassed support for other environmental and social proposals,6 and AI-related resolutions more than quadrupled from 2023 to 2024.7 Companies are already experiencing legal exposure, as shown by the EEOC's first federal enforcement action in 2023 against an employer using a discriminatory AI hiring system, which resulted in a monetary settlement and corrective action.8 The Organisation for Economic Co-operation and Development (OECD) similarly warns that AI risks, including discrimination, bias, and safety failures, will increase as adoption accelerates absent strong transparency measures.9 Given these concerns, shareholders believe a Board-level assessment would enhance oversight of how Salesforce's AI technologies are developed, marketed, and deployed, and help evaluate whether these activities align with the company's stated human-rights commitments. Increased transparency would help support long-term shareholder value, reinforce Salesforce's responsible-innovation leadership, and promote alignment between public commitments and operational practices. Therefore, we urge shareholders to vote FOR this proposal.

1 https://www.salesforce.com/news/stories/most-ethical-companies-2025/
2 WBA-Ethical-AI-CIC-Data-set-as-of-20-09-2024.pdf
3 Jamie Patrick Lavin, Ethical Dimensions of AI in Business: A Study of Corporate Responsibility and Governance (2024), comparative analysis of Microsoft, Google, Salesforce, IBM, Apple, and OpenAI.
4 https://www.sciencedirect.com/science/article/pii/S0963868724000672
5 https://news.mongabay.com/2024/07/colombian-victims-win-historic-lawsuit-over-banana-giant-chiquita
6 https://connect.sustainalytics.com/hubfs/INV/Reports/Proxy-Voting-Insights-Investor-Views-on-AI-Oversight-2025.pdf
7 https://corpgov.law.harvard.edu/2025/04/02/ai-in-focus-in-2025-boards-and-shareholders-set-their-sights-on-ai/
8 https://www.reuters.com/legal/tutoring-firm-settles-us-agencys-first-bias-lawsuit-involving-ai-software-2023-08-10/
9 Assessing potential future artificial intelligence risks, benefits and policy imperatives, OECD