Viewpoint
- As AI opens up a plethora of new opportunities for organisations, boards and management must balance those opportunities against responsible-AI concepts such as ‘good intention’ and ‘fair impact’
- AI-specific internal governance measures are a new and rapidly evolving component of peer-leading governance frameworks, aimed at retaining consumer and investor confidence as organisations seek to harness the power of AI-related technology
- With ESG deeply integrated into our investment approach, we watch closely to see how listed companies consider both the opportunities and risks inherent in AI technology
With the common assumption that artificial intelligence (AI) and its many applications will change our everyday lives, it is easy to focus on the impending benefits. In time, ChatGPT completing a teenager’s homework will be but one embryonic use case (and teacher’s curse) at the dawn of the ‘AI revolution’.
Beyond individual productivity gains, organisations worldwide are carefully considering the potential applications of generative AI with a view to driving process efficiencies, reducing costs, managing risk, creating new products and improving customer service. And the benefits are beginning to show: a 2023 study on the use of generative AI to help customer support agents found that access to AI assistance increases worker productivity by 14 percent, enabling more customers to be served every hour.* Quite suddenly, organisations are coming to see the non-adoption of AI as a business risk, a concern that was not evident even a few months ago.
Applications leveraging AI are already developing faster than regulatory frameworks, potentially contributing to a range of unintended consequences. Failures in the design, deployment and management of early-stage AI systems have already led to negative, and sometimes devastating, outcomes, including the disastrous outworkings of the Robodebt Scheme and, in 2018, a self-driving vehicle failing to detect a pedestrian, resulting in a fatal crash.** More recently and closer to home, an AI-generated deepfake of CBA’s CEO Matt Comyn promoted an investment scam to Facebook and Instagram users.***
The rapidly evolving AI landscape gives rise to the concept of ‘Responsible AI’. Off the back of a 2022 report finding that only a third of global consumers trust how AI is being implemented by organisations,^ Accenture defined ‘Responsible AI’ as the practice of designing, developing and deploying AI with good intention to empower employees and businesses, and fairly impact customers and society. Given that there is no specific AI legislation under the Australian legal framework, the Australian government has recently undertaken public consultation on Responsible AI to inform appropriate policy responses. However, it may be some time before a fit-for-purpose regulatory framework is in place.
In this context we expect organisations will need to adopt Responsible AI principles as a discrete component of broader self-regulation frameworks, ahead of the enactment of regulatory frameworks underpinned by legislation. When we assess companies for inclusion in portfolios, indicators of the maturity of thinking on Responsible AI include formal AI-related development plans, resource allocation, the current and proposed structure of AI-specific internal governance, and the broader strategic response. Our initial focus is on sectors with heightened impacts, including organisations handling large amounts of personal customer data (such as banks and healthcare providers); however, we expect that in time no sector will be left materially unchanged by AI in one form or another.
Parting thought
As part of our integrated approach to ESG across our investment strategies, we are aware of the need to evolve our thinking on what responsible investing encapsulates. It follows, then, that the responsible adoption of AI technology by companies will increasingly form part of our ESG assessment. We watch closely to see how listed companies, both large and small, make the most of the AI opportunities that lie ahead while also heeding the risks.
* Erik Brynjolfsson, Danielle Li and Lindsey Raymond, ‘Generative AI at work’ (Working Paper No 31161, National Bureau of Economic Research, 25 April 2023).
** Lauren Smiley, ‘“I’m the Operator”: The Aftermath of a Self-Driving Tragedy’, Wired (online, 8 March 2022).
*** Australian Financial Review, ‘CBA jostles with AI-generated Matt Comyn scam’.
^ Accenture, 2022 Tech Vision research.
Disclaimer
This information was prepared and issued by Maple-Brown Abbott Ltd ABN 73 001 208 564, Australian Financial Service Licence No. 237296 (“MBA”). This information must not be reproduced or transmitted in any form without the prior written consent of MBA. This information does not constitute investment advice or an investment recommendation of any kind and should not be relied upon as such. This information is general information only and it does not have regard to any person’s investment objectives, financial situation or needs. Before making any investment decision, you should seek independent investment, legal, tax, accounting or other professional advice as appropriate, and obtain the relevant Product Disclosure Statement and Target Market Determination for any financial product you are considering. This information does not constitute an offer or solicitation by anyone in any jurisdiction. This information is not an advertisement and is not directed at any person in any jurisdiction where the publication or availability of the information is prohibited or restricted by law. Past performance is not a reliable indicator of future performance. Any comments about investments are not a recommendation to buy, sell or hold. Any views expressed on individual stocks or other investments, or any forecasts or estimates, are point in time views and may be based on certain assumptions and qualifications not set out in part or in full in this information. The views and opinions contained herein are those of the authors as at the date of publication and are subject to change due to market and other conditions. Such views and opinions may not necessarily represent those expressed or reflected in other MBA communications, strategies or funds. Information derived from sources is believed to be accurate, however such information has not been independently verified and may be subject to assumptions and qualifications compiled by the relevant source and this information does not purport to provide a complete description of all or any such assumptions and qualifications. To the extent permitted by law, neither MBA, nor any of its related parties, directors or employees, make any representation or warranty as to the accuracy, completeness, reasonableness or reliability of the information contained herein, or accept liability or responsibility for any losses, whether direct, indirect or consequential, relating to, or arising from, the use or reliance on any part of this information. Neither MBA, nor any of its related parties, directors or employees, make any representation or give any guarantee as to the return of capital, performance, any specific rate of return, or the taxation consequences of, any investment. This information is current at 30 June 2023 and is subject to change at any time without notice. You can access MBA’s Financial Services Guide here for further information about any financial services or products which MBA may provide. © 2023 Maple-Brown Abbott Limited.