We strongly believe investors should have a say in fostering better management of the risks associated with unethical AI deployment.
Sarasin & Partners hosted an investor seminar, 'Corporate Accountability on Artificial Intelligence – Shaping Investor Actions on AI Ethics', on 5 December 2023, alongside the World Benchmarking Alliance (WBA) and its Collective Impact Coalition (CIC) for Digital Inclusion. The event attracted 68 participants, attending in person or virtually.
The seminar took place shortly after the first Global AI Safety Summit, held at Bletchley Park in early November 2023, and offered a timely opportunity to discuss the potential impact of decisions made there. The summit, which brought together government officials from 28 countries with representatives of the biggest global technology companies, academia and civil society organisations, adopted the Bletchley Declaration, debated the role of different actors in emerging processes to address safety risks, and agreed to support the development of an independent 'State of the Science' report and to establish AI Safety Institutes. In addition, several important analytical documents, including an AI risk classification, were developed and published in the run-up to the summit.
The responsibility of investors: what should be on the stewardship agenda?
Unethical AI deployment is a sustainability issue, and investors should have a say in fostering better management of the associated risks. At our seminar, we discussed what might constitute reasonable investor expectations and the questions investors should be asking in view of these recent developments.
We structured our seminar into a Fireside Chat and an Investor Panel.
The fireside chat was led and moderated by Howard Covington, an Honorary Fellow of the Alan Turing Institute and its founding Chair, who previously worked as an investment banker and asset manager in the City. He was joined by two speakers from the Centre for the Study of Existential Risk (CSER) at the University of Cambridge:
- Seán Ó hÉigeartaigh, Programme Director of AI: Futures and Responsibility
- Dr Maurice Chiodo, Research Associate
The speakers addressed the 'what' of investors' stewardship agenda on ethical AI. The AI landscape has changed significantly over the past 10 years, as the technology has evolved from narrow tools such as facial recognition to powerful large language models (LLMs) such as GPT-4, with hugely enhanced capabilities. Progress will likely be even more rapid in future. The technology has the potential to cause multi-trillion-dollar economic disruption over the next decade: Goldman Sachs estimates it will create $7 trillion of economic growth over that period, and one management consultancy has indicated a 25% productivity gain.
There will undoubtedly be significant benefits across society, but we can also see multiple risks: approximately 300 million jobs are likely to be affected in some way, in addition to the threats posed by deepfakes and breaches of data privacy and intellectual property rights, to name just a few.
At a high level, we can define two main groups of AI-related risks that need to be analysed and addressed: those arising from the development of frontier generative AI models and those arising from specific applications.
The Bletchley Summit focused on 'frontier AI' systems and their risks and governance. Present techniques have been shown to follow scaling laws: as more data and computing power are applied to bigger models, you achieve predictable improvements in overall performance, but not necessarily predictable capabilities. Outputs are also heavily shaped by new data inputs. Without careful safeguards, these systems can teach someone how to make a bomb from household materials, or how to design a dispersal mechanism for a biological agent. They can also be used to produce and quickly disseminate targeted propaganda, deepfakes or misinformation.
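To illustrate what 'predictable improvements' means here, a widely cited empirical form of these scaling laws (from Hoffmann et al., 2022, offered as background rather than something presented at the seminar) models a system's training loss as a smooth power law in model size and training data:

```latex
% Illustrative scaling law (Hoffmann et al., 2022): training loss L(N, D)
% falls predictably as parameter count N and training tokens D grow.
% E, A, B, \alpha and \beta are constants fitted from past training runs.
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Loss declines smoothly along this curve, so developers can forecast aggregate performance before training; which specific capabilities emerge at a given loss level is far harder to anticipate, and that gap is precisely the governance challenge.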
It is clear that human AI quality assurance experts should be involved in the testing process. Ahead of the summit, the UK government released a set of 42 best practice processes across nine categories that it recommends frontier AI companies follow. The recommended best practice included sharing information on models with auditors or governments ahead of release; committing to third-party auditing of elements such as training datasets; implementing robust data controls, policies for detecting and preventing model misuse and strong security practices around model use; and having external evaluations of model performance at various development stages.
Furthermore, at the government's request, six leading developed-market frontier AI developers – OpenAI, Anthropic, Google DeepMind, Meta, Amazon and Microsoft – released their AI safety policies. A group of specialists from top universities in the UK and US, led by Seán Ó hÉigeartaigh, reviewed these policies against the recommended best practice processes.
The research found that the AI start-up Anthropic performed best, though no company received a perfect score. Amazon ranked lowest, providing very little detail and no commitment to practices such as third-party auditing.
Dr Maurice Chiodo then shared his view of the ethical practices that AI investors may wish to inquire about at the application level, as presented in his recent research paper. His framework comprises ten elements, listed in the box below.
Ten elements of ethical practices investors may wish to inquire about, according to Dr Maurice Chiodo
- Deciding whether to begin: should this company be doing AI at all? What is its purpose? Is this type of AI really needed?
- Diversity and perspectives: is the team developing or deploying the model diverse enough to help reduce problematic or biased outcomes?
- Handling data and information: are only authorised and ethically obtained datasets being used, and are they kept sufficiently secure?
- Data manipulation and inference: is this data relevant for the intended context? Have you explored potential unexpected failures? E.g. there have been cases of biased automatic soap dispensers that only recognised white hands.
- The mathematical formulation of the problem: what are the optimisation objectives? What is the model trying to achieve?
- Communicating and documenting the work: will all the processes and decisions be documented? What is communicated from the top down and from the bottom up?
- The ability to falsify and feedback loops: can the outputs of the model be falsified? Does it affect, and then relearn from, the world in a way that might create feedback loops, such as in the case of predictive policing?
- Explainable and safe AI: is it actually doing what you want it to do? Can you explain what it is doing and justify its outputs?
- The politics of the surrounding world: technical artefacts have politics. How are these being considered? E.g. the Ofqual grading algorithm debacle and lessons learned from it.
- Emergency response strategies: what systems are in place for non-technical responses to technical issues? E.g. when a customer searched on Amazon for a book about suicide, the recommendation engine also suggested a rope as an additional purchase.
How investors can address these issues
Our Investor Panel addressed the ‘hows’ of investors’ stewardship activities. Julia Shatikova, Ownership Lead at Sarasin & Partners, moderated the discussion among four speakers:
- Amy Wilson, Head of Stewardship, Norges Bank Investment Management
- Asad Butt, Investment Stewardship Lead, HSBC Asset Management
- Nikki Gwilliam-Beeharee, Investor Engagement Lead, WBA
- Josh Sambrook-Smith, Technology Analyst and Portfolio Manager, Sarasin & Partners
The panellists all agreed on the importance of holding companies to account for their ethical AI practices, given the substantial investment risks that unethical AI practices pose.
Because AI is now ubiquitous in business solutions, simply identifying the companies with the biggest exposure to AI sustainability risks is a task in itself. Prioritising the companies that invest most heavily in AI, as well as the biggest users of AI in their products and services, could be a way to ensure the biggest positive impact and a potential ripple effect across the market.
There are several ways to approach this. Firstly, you can separate the ecosystem into companies that make the models, companies that use them and companies affected by them. Secondly, you can review the specific use cases: companies involved in surveillance or facial manipulation technologies warrant more attention than companies using an LLM to auto-generate jokes. Lastly, you can focus on the raw materials these technologies use: companies working with medical imaging data, credit card data or any other kind of sensitive personal information certainly warrant more attention from an ethical perspective than companies analysing data on tectonic plate movements. A minimal sketch combining these three screens appears below.
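As a purely illustrative sketch – the categories and weights below are hypothetical assumptions of ours, not a prescribed methodology – the three screens could be combined into a simple prioritisation score:

```python
# Hypothetical triage of AI-exposed companies along the three screens above:
# role in the ecosystem, sensitivity of the use case, and sensitivity of the
# underlying data. All categories and weights are illustrative assumptions.
ROLE_WEIGHT = {"model_developer": 3, "model_user": 2, "affected_party": 1}
USE_CASE_WEIGHT = {"surveillance": 3, "facial_manipulation": 3, "joke_generation": 1}
DATA_WEIGHT = {"medical_imaging": 3, "credit_card": 3, "tectonic_plates": 1}

def engagement_priority(role: str, use_case: str, data: str) -> int:
    """Higher scores suggest higher priority for stewardship engagement."""
    return ROLE_WEIGHT[role] + USE_CASE_WEIGHT[use_case] + DATA_WEIGHT[data]

# A surveillance-focused model developer handling medical imaging data
# scores far higher than a joke-generating user of seismic data:
print(engagement_priority("model_developer", "surveillance", "medical_imaging"))  # 9
print(engagement_priority("model_user", "joke_generation", "tectonic_plates"))    # 4
```

In practice such a score would be one input among many, but it makes the triage logic explicit and auditable.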
A tailored engagement strategy
Investors need to tailor their engagements to specific business models, and not only along the AI developer versus application divide. Importantly, investors should look into specific use cases and adjust expectations accordingly. One way is to develop a matrix of use cases and the specific risks of each: the risks for LLMs and the chatbots built on them, for example, differ from those for applications in protein or gene sequencing, or in sales and marketing.
We can also use a similar matrix of tools that companies should deploy to manage those risks. Investors should look beyond the published declarations and focus on how companies’ ethical AI principles are operationalised. At a basic level of safety, there should be generic controls such as guardrails on data sources, marking AI-generated content, prevention of misuse and deepfakes, red-teaming (interacting with a system to try to make it produce undesired outcomes), as well as human rights impact assessments or data audits.
Beyond this, there should be more specific best practice tools and governance structures in certain use cases. The risks and opportunities will differ from a market and sector perspective, which means the type of questions, objectives and indicators may well differ.
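Pulling these two matrices together, a minimal sketch – in which the use cases, risks and controls are hypothetical illustrations rather than a Sarasin & Partners framework – might map each use case to its risks and to the controls investors expect to see, and generate engagement questions from that mapping:

```python
# Hypothetical engagement matrix: maps AI use cases to key risks and to the
# controls an investor might expect to see. All entries are illustrative.
ENGAGEMENT_MATRIX = {
    "llm_chatbot": {
        "risks": ["misinformation", "data privacy breaches", "misuse and deepfakes"],
        "expected_controls": ["red-teaming", "marking AI-generated content",
                              "third-party audits of training datasets"],
    },
    "gene_sequencing": {
        "risks": ["exposure of sensitive personal data"],
        "expected_controls": ["data audits", "strict access controls",
                              "human rights impact assessments"],
    },
    "sales_and_marketing": {
        "risks": ["biased targeting", "consumer manipulation"],
        "expected_controls": ["bias testing", "human oversight of campaigns"],
    },
}

def engagement_questions(use_case: str) -> list[str]:
    """Turn a matrix row into concrete questions for a company engagement."""
    entry = ENGAGEMENT_MATRIX[use_case]
    questions = [f"How do you manage the risk of {r}?" for r in entry["risks"]]
    questions += [f"Can you evidence your use of {c}?" for c in entry["expected_controls"]]
    return questions

for q in engagement_questions("llm_chatbot"):
    print(q)
```

The value of writing the matrix down is consistency: the same use case triggers the same questions across companies, making gaps in operationalisation easier to spot.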
Escalating investor concerns
Investors also need to consider what escalation techniques to apply if companies do not respond or the engagement does not result in meaningful change.
Shareholder resolutions: this is one option, as seen at Microsoft and Alphabet during 2023. However, where there are dual-class share structures, or where a shareholder proposal is worded too prescriptively or too broadly, strong support is unlikely.
Voting against directors to hold them accountable for AI ethics failures can be more effective.
Voting policies: at Sarasin & Partners we have a Net Zero Voting Policy, which holds companies accountable for climate policy failures. In 2023 we voted against, or abstained on, 14% of the directors at 68% of the companies on our climate watchlist. We also voted against executive remuneration, annual reports and auditors where this related to climate concerns.
Investors might consider developing similar dedicated voting policies on ethical AI. Given the important role that proxy voting agencies play, investors may also want to engage with them on their plans in this area.
Consider the asks and engagement parties: if engagements do not work, investors should revisit their engagement matrix, as they may not be asking the right questions, or not asking them of the right people. Patience is required, alongside an understanding of companies' business models, as mentioned earlier; speaking to the right people is key to effectiveness.
Collective engagements: these can be a useful tool. In its 2023 Progress Report, the WBA CIC on ethical AI found that many companies that do not respond to individual investor requests do respond to collective investor engagements.
Influencing policy: investors can consider influencing the wider ecosystem by, for instance, publicly supporting national regulatory or policy developments that would require companies to adopt and implement ethical AI principles. Global principles are also important: investors could publicly reiterate the need for the UN Global Digital Compact, to be launched at the Summit of the Future in 2024, to set clear expectations on ethical AI development and deployment.
Finally, it is important that investors monitor progress and integrate it into their investment framework. For example, we should monitor how the seven largest generative AI companies implement the commitments they made at the meeting with President Biden at the White House in July 2023.
There are various analytical tools available for monitoring, such as the WBA's Digital Inclusion Benchmark (DIB), which assesses 200 technology and communications companies. Ranking Digital Rights (RDR) publishes a Big Tech Scorecard as part of its broader Corporate Accountability Index, reviewing 14 digital platforms and their services against a variety of detailed elements related to AI. There is also Stanford University's Center for Research on Foundation Models (CRFM) Foundation Model Transparency Index and the aforementioned AI safety policies assessment framework. More comprehensive models are likely to emerge in 2024.
There is a broad understanding that a common regulatory framework needs to be established to create a level playing field. Global and national standards should evolve to ensure that AI governance is consistent with public and national security, as well as international competition standards, while still allowing companies to innovate. There are numerous efforts under way: by the OECD, the EU and UNESCO, as well as at national level.
Still, governments do not move as fast as technologies, and convergence among their approaches is not certain. Self-regulation will therefore remain an important part of ethical AI governance, and this is where investors should continue applying their stewardship techniques to foster best practice.
We believe there is a need for further discussions that bring like-minded investors together to address new challenges and share the experience and best practices that will inevitably evolve as we strive for well-intentioned frontier AI models with robust safeguards.
We call on all investors interested in taking part in this work to contact us if they would like to get more involved.
Note: future iterations of the benchmark and its associated projects will include a separate indicator on ethical AI, expanding on the relevant elements in the current methodology.
This document is intended for retail investors and/or private clients. You should not act or rely on this document but should contact your professional adviser.
This document has been issued by Sarasin & Partners LLP of Juxon House, 100 St Paul’s Churchyard, London, EC4M 8BU, a limited liability partnership registered in England and Wales with registered number OC329859, and which is authorised and regulated by the Financial Conduct Authority with firm reference number 475111.
This document has been prepared for marketing and information purposes only and is not a solicitation, or an offer to buy or sell any security. The information on which the material is based has been obtained in good faith, from sources that we believe to be reliable, but we have not independently verified such information and we make no representation or warranty, express or implied, as to its accuracy. All expressions of opinion are subject to change without notice.
This document should not be relied on for accounting, legal or tax advice, or investment recommendations. Reliance should not be placed on the views and information in this material when taking individual investment and/or strategic decisions.
The value of investments and any income derived from them can fall as well as rise and investors may not get back the amount originally invested. If investing in foreign currencies, the return in the investor’s reference currency may increase or decrease as a result of currency fluctuations. Past performance is not a reliable indicator of future results and may not be repeated. Forecasts are not a reliable indicator of future performance.
Neither Sarasin & Partners LLP nor any other member of the J. Safra Sarasin Holding Ltd group accepts any liability or responsibility whatsoever for any consequential loss of any kind arising out of the use of this document or any part of its contents. The use of this document should not be regarded as a substitute for the exercise by the recipient of their own judgement. Sarasin & Partners LLP and/or any person connected with it may act upon or make use of the material referred to herein and/or any of the information upon which it is based, prior to publication of this document.
Where the data in this document comes partially from third-party sources the accuracy, completeness or correctness of the information contained in this publication is not guaranteed, and third-party data is provided without any warranties of any kind. Sarasin & Partners LLP shall have no liability in connection with third-party data.
© 2024 Sarasin & Partners LLP – all rights reserved. This document can only be distributed or reproduced with permission from Sarasin & Partners LLP. Please contact [email protected].