Governments, companies and investors all have roles to play in ensuring that AI is a force for good.
We invest in a number of companies for which artificial intelligence (AI) is a key element of their business models and plans for future growth. As investors, we recognise the transformational potential of this remarkable technology and the significant benefits it could bring in terms of higher productivity, more advanced healthcare and faster progress in tackling climate change.
At the same time, as stewards of our clients’ capital, we have an important role to play in ensuring that AI develops in a way that minimises risks both to society and to our clients’ investments.
Restricting the use of potentially risky technologies is consistent with the precautionary principle: where scientific investigation has identified a plausible risk, there is a social responsibility to protect the public from harm.
The areas of potential concern are wide-ranging, but four key issues pose immediate risks and should be prioritised by regulators and by the companies developing and providing AI products.
Disruption of democratic processes through manipulation, misuse of personal data, fake news and emotionally charged content. Behavioural nudges and microtargeting techniques have been used in elections since Bill Clinton’s 1996 campaign.[1] Now, AI has the potential to vastly increase the scale and effectiveness of microtargeting by tailoring messages to millions of individual voters. In addition, it can use a trial-and-error approach called reinforcement learning to make these messages progressively more persuasive (a minimal sketch of this feedback loop appears after the fourth issue below).
Perpetuation of discrimination via AI models trained on biased or incomplete data. For example, an automated recruitment system that shortlists job candidates based on previous applicants’ CVs may perpetuate gender biases, such as favouring men for IT roles.
Risks to data security, the functioning of public and private platforms, and personal security risks linked to chatbots. For most of April this year, ChatGPT was banned in Italy over concerns that it breached Europe’s General Data Protection Regulation (GDPR). Also in April, a journalist in the US used AI tools to ‘clone’ herself and bypass her bank’s security.[2]
Carbon emissions. AI applications need vast amounts of energy, both to train them and to run them. Training OpenAI’s GPT-3 model consumed around 1,287 MWh of electricity, causing emissions of around 550 tonnes of CO2e – equivalent to 550 return flights between San Francisco and New York.[3]
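As a rough consistency check (our own arithmetic, not a figure from the cited source), these numbers imply an average emissions factor of

\[
\frac{550\ \text{tCO}_2\text{e}}{1{,}287\ \text{MWh}} \approx 0.43\ \text{tCO}_2\text{e per MWh},
\]

broadly consistent with a fossil-heavy electricity mix; the flight comparison likewise equates each return trip to roughly one tonne of CO2e per passenger.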
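To illustrate the trial-and-error dynamic described under the first issue above, the sketch below implements a minimal epsilon-greedy multi-armed bandit – one of the simplest forms of reinforcement learning – that learns which of several message variants draws the most responses. Everything here is an illustrative assumption: the message variants, the hidden response rates and the simulate_response feedback function stand in for real audience data and do not describe any actual campaign system.

```python
import random

# Hypothetical message variants (a real system would test far more variants
# against far richer feedback signals).
MESSAGES = ["Variant A", "Variant B", "Variant C"]

# Hidden 'true' persuasion rates, used only to simulate audience response.
TRUE_RATES = {"Variant A": 0.02, "Variant B": 0.05, "Variant C": 0.03}

def simulate_response(message: str) -> int:
    """Simulate whether one recipient responds to a message (1) or not (0)."""
    return 1 if random.random() < TRUE_RATES[message] else 0

def run_bandit(rounds: int = 10_000, epsilon: float = 0.1) -> dict:
    """Epsilon-greedy bandit: mostly send the best-performing message so far,
    occasionally explore an alternative, and update estimates from feedback."""
    counts = {m: 0 for m in MESSAGES}    # times each variant was sent
    rewards = {m: 0 for m in MESSAGES}   # responses each variant received
    for _ in range(rounds):
        if random.random() < epsilon:    # explore: try a random variant
            msg = random.choice(MESSAGES)
        else:                            # exploit: send the current best
            msg = max(MESSAGES,
                      key=lambda m: rewards[m] / counts[m] if counts[m] else 0.0)
        counts[msg] += 1
        rewards[msg] += simulate_response(msg)
    # Estimated response rate per variant after learning
    return {m: rewards[m] / counts[m] if counts[m] else 0.0 for m in MESSAGES}

if __name__ == "__main__":
    print(run_bandit())  # estimates converge towards the hidden rates, and
                         # sends concentrate on the most persuasive variant
```

The point of the sketch is the loop structure: each send produces feedback, the estimates update, and traffic concentrates on whichever message proves most persuasive. Run per micro-audience and at scale, this is what makes AI-driven microtargeting qualitatively more powerful than static messaging.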
In addition, other risks are attracting concern but have yet to manifest in practice. These include breaches of intellectual property rights, as well as safety risks associated with autonomous vehicles, AI-assisted medical treatments and AI-optimised production processes.
Regulation alone is not enough
Given the sheer breadth of risks posed by unregulated AI development and use, effective international AI standards are needed to create a level playing field and to underpin robust, independent audits that certify compliance.
Numerous legislative initiatives are in progress, including the EU’s Artificial Intelligence Act, the US Algorithmic Accountability Act and the UK’s Pro-innovation Approach to AI Regulation.[4] China’s Ministry of Science & Technology published its AI Ethics Principles in 2021, and in July 2023 the country’s internet watchdog, the Cyberspace Administration, issued “Interim Measures for the Management of Generative Artificial Intelligence Services”.
However, it is unlikely that the US, Europe, the UK and China will agree on a cohesive approach to regulating AI. It is also unlikely that policymakers can keep up with the pace of AI development.
Until comprehensive government regulation is introduced, self-regulation by companies will be vital. Investors have two critical roles to play. The first is to influence self-regulation by demanding best practice from AI companies; the second is to call for action by policymakers to create a supportive regulatory environment for responsible AI.
Taking a lead
We are an active member of the World Benchmarking Alliance (WBA) Digital Collective Impact Coalition (CIC), an investor group focused on AI ethics that represents over $6.9 trillion in assets under management.
In this capacity, we have been engaging with three companies, encouraging them to make public commitments to ethical AI principles. These are Amazon (where we co-lead the engagement with other Digital CIC members), Apple and PayPal.
Alongside us, other members of the group are engaging with a raft of companies that are reluctant to make public commitments to responsible or ethical AI principles. These include Twitter, Spotify, eBay, Salesforce, Oracle, Airbnb and Alibaba.
With a further 50 names being added to the WBA’s list of focus companies, we have applied to lead engagements with ASML and Keyence, both of which are held in many of our clients’ portfolios. We will also engage with companies we hold that are not covered by the Digital CIC, but where we see a need for strong ethical AI policies.
Ethical AI principles are the first step
Encouraging companies to make public commitments to ethical AI principles is an essential first step, but it is just the beginning. By bringing together like-minded investors, the Digital CIC can also bring significant collective pressure to bear in establishing new norms for responsible AI, such as stress-testing AI technologies for risks to human rights (human rights impact assessments) and promoting openness about how algorithm-based decisions are made.
Most important will be establishing governance systems that specify responsibilities, oversight processes and enforcement measures. Companies should complete annual independent audits of how they are implementing ethical AI principles and policies, and publish summary results.
The ability to learn collectively from mistakes will also be vital: companies should therefore disclose takeaways from their impact assessments, including specific cases of failure and remediation steps taken.
We are evolving our approach from asking companies to adopt robust public AI principles and policies to assessing how they implement best practice in their day-to-day business. Looking ahead, Sarasin & Partners will continue to focus on assessing and preventing the risks associated with the highest-risk uses of AI from a human rights perspective, such as targeted advertising, content dissemination and facial recognition.
[1] Luke Bunting, ‘The Evolution of American Microtargeting: An Examination of Modern Political Messaging’, 2015
[2] The Wall Street Journal, ‘I Cloned Myself With AI. She Fooled My Bank and My Family’, 28 April 2023
[3] CarbonCredits.com, ‘How Big is the CO2 Footprint of AI Models? ChatGPT’s Emissions’, 30 May 2023
[4] https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper; see also https://policyreview.info/articles/analysis/artificial-intelligence-regulation-united-kingdom-path-good-governance
Important information
If you are a private investor, you should not act or rely on this document but should contact your professional adviser.
This document has been approved by Sarasin & Partners LLP of Juxon House, 100 St Paul’s Churchyard, London, EC4M 8BU, a limited liability partnership registered in England & Wales with registered number OC329859 which is authorised and regulated by the Financial Conduct Authority with firm reference number 475111.
It has been prepared solely for information purposes and is not a solicitation, or an offer to buy or sell any security. The information on which the document is based has been obtained from sources that we believe to be reliable, and in good faith, but we have not independently verified such information and no representation or warranty, express or implied, is made as to their accuracy. All expressions of opinion are subject to change without notice.
Please note that the prices of shares and the income from them can fall as well as rise and you may not get back the amount originally invested. This can be as a result of market movements and also of variations in the exchange rates between currencies. Past performance is not a guide to future returns and may not be repeated.
Neither Sarasin & Partners LLP nor any other member of the J. Safra Sarasin Holding Ltd group accepts any liability or responsibility whatsoever for any consequential loss of any kind arising out of the use of this document or any part of its contents. The use of this document should not be regarded as a substitute for the exercise by the recipient of his or her own judgment. Sarasin & Partners LLP and/or any person connected with it may act upon or make use of the material referred to herein and/or any of the information upon which it is based, prior to publication of this document. If you are a private investor you should not rely on this document but should contact your professional adviser.
© 2023 Sarasin & Partners LLP – all rights reserved. This document can only be distributed or reproduced with permission from Sarasin & Partners LLP.