The advent of Artificial Intelligence (AI) represents a significant technological advancement that will greatly impact companies, employees and the community. AI offers numerous applications and use cases that are anticipated to deliver positive societal outcomes, such as enhanced productivity. However, these benefits must be weighed against the significant risks of embedding AI inside organisations without appropriate process-driven governance oversight and due consideration of the impact on human capital. Whilst many companies are sharing the potential benefits of AI use cases with investors, there appears to be little disclosure as to how companies plan to implement AI thoughtfully and responsibly to reduce implementation risks. An often-overlooked aspect of this implementation is worker displacement, which can seriously impact society and investment portfolios overall. Cost reductions achieved by one company may lead to diminished consumer demand across other holdings within a portfolio. Further, displacing employees without appropriate consideration and strategic thought is likely to invite regulatory and union scrutiny, particularly for large and high-profile companies, increasing the risk that AI will not be deployed in a fashion that delivers the maximum productivity benefits. Employee engagement levels could also suffer. Together, these factors create potentially significant reputational risk. This is why we advocate for a governance-led implementation of AI that considers human capital as the world moves towards an AI economy.
Amidst the enthusiasm for deploying AI technologies that could reduce costs, shorten time to market and increase productivity, it appears the risks of rapidly deploying this technology are not being adequately addressed by companies or communicated to investors. As the technology rapidly evolves, investors must be attuned to these material risks and engage with investee companies on them. This paper outlines some of these risks and provides suggestions on how investors can engage with management for increased transparency and oversight.
Key considerations for implementation
The principles of a Just Transition as they relate to decarbonisation have some interesting parallels with the transition towards an AI-powered economy in which workers are displaced, namely the elements of engagement with stakeholders, a focus on employment opportunities, and appropriate governance and oversight. However, there are also some key differences that must be considered.
- Engagement with stakeholders: Key stakeholders such as employees, unions, industry bodies and government should be consulted in the deployment of AI. This engagement can lower the risk of stifling innovation, encourage better outcomes and lower resistance to change.1 Obtaining insights from employees can help uncover new opportunities for efficiency gains and reduce poor decision-making stemming from a limited understanding of the scope of AI work. Engagement with stakeholders could ultimately lead to less regulatory and union involvement whilst also increasing employees’ engagement with productivity-enhancing AI tools.
- Employment opportunities: Whilst an important focus of the climate-oriented Just Transition is for affected workers to be offered new employment opportunities at wages comparable to their prior role,2 in the AI Transition this may not be possible given the wider scope of workforce impacts. Companies that do not adapt to AI and restructure their workforces appropriately may be left in an uncompetitive position, leading to wider job losses across the company later if key customers or contracts are won by more agile competitors.
- Governance: It is essential that governance remains central throughout this transition. There must be not only appropriate board oversight but also the development of board skills in the implementation of AI. Stakeholders should be familiar with the mechanisms by which the board is kept informed about developments in AI deployment, as well as who holds ultimate responsibility for oversight. Plans for the deployment of AI must have board oversight and then be shared with the affected parties. Larger companies should avoid rapid, unstructured AI rollouts and instead focus on an incremental, staged transition that allows for learning. The deployment of AI should be integrated into the risk and compliance framework and the company’s risk appetite statement.
Risks of AI deployment without considering human capital and governance risks
Although the implementation of AI within organisations presents several anticipated advantages, including potential productivity gains for the economy, it is essential to address the associated risks if these technologies are adopted without due consideration for the impact on affected employees. These risks include:
- Social licence: a company’s reputation and brand value are important intangible assets, and any damage to them could be material. If stakeholders are not consulted, backlash from employees, shareholders and customers could be significant.
- Deployment risk: If affected workers are not consulted, there is a risk that the company will deploy AI in a way that is ineffective and does not maximise the available opportunities. For instance, employees may have better insight into how AI can best be deployed for productivity within their role and where limitations may exist, enabling them to anticipate unintended consequences of its implementation. Additionally, if employees are not consulted, their engagement may be limited, and the implemented tools may not be used to their full potential.
- Employee engagement: as investors, we focus on employee engagement as a leading indicator of a company’s ability to retain top talent (an important consideration for companies whose competitive advantage lies in their ability to attract and retain talent). If AI is rapidly deployed without consideration of human capital risks, employee engagement could decline, and poorer staff retention may result. Additionally, morale among remaining staff members may decrease if they perceive a lack of support or concern from their employers.
- Unions, government and legal risks: If human capital risks are not considered by companies, we can expect increased union activity, particularly in industries typically characterised by low union involvement, as seen in recent media reports of workers joining unions at technology companies in Australia.3 This will be a major change for many industries, and companies may be unprepared for the impact. Additionally, participation by unions and government entities may constrain innovation and potentially limit companies’ productivity gains. For example, the Australian Council of Trade Unions (ACTU) is seeking tougher regulations, saying in its July 2025 press release,4 “Unions will pursue a pro-job pro-worker agenda in the adoption of AI to ensure that it is safe and deployed in a way that gives workers a stake in the gains while being transparent and fair.” Companies need to be cognisant of this view, rather than attempting to deal with unions only after AI use cases have been rolled out.
By way of a recent example, Commonwealth Bank (CBA) has been an early adopter of AI amongst Australian listed financial services companies. During 2023 it was developing more than 50 generative AI use cases in an offline, safe environment as part of its partnership with H2O.ai.5 More recently, the company announced a multi-year agreement with OpenAI.6 CBA recently said it intended to make 45 employees redundant due to the introduction of AI in its call centres, only to retract that decision a month later, stating it had reviewed the situation and had erred in its initial actions. The Finance Sector Union had taken CBA to the Fair Work Commission, which brought the matter to a head. While 45 redundancies out of a workforce of almost 50,000 employees is not commercially material, this serves as an early example of the risks associated with transitioning AI use cases into the business in ways that lead to lower employee numbers.
According to its disclosures, CBA’s Board Risk and Compliance Committee is responsible for managing risks relating to AI. CBA says it listens to the concerns of stakeholders, which can inform its response on issues such as AI. AI is stated as a high priority for customers, employees, communities, suppliers, industry groups and society. Interestingly, it is not stated as a priority for investors and shareholders.
CBA also carries the extra “burden” of being a large employer, being politically prominent and serving more than 17 million customers. The issue attracted extensive media interest, as it arose at the same time the CBA CEO was taking part in the Productivity Forum in Canberra. It is also worth noting that the Australian Prudential Regulation Authority (APRA) recently published its corporate plan for 2025–26, which listed strengthening cyber resilience – including monitoring AI adoption – as a key strategic priority.7
It is likely that future changes CBA proposes to its cost base will now be more heavily scrutinised, and staff morale may have taken a hit. The next step for CBA may be to review its processes and develop a transition framework for the future, given it appears inevitable that employment levels and job types will be impacted by AI adoption.
What does a governance-led approach look like?
As investors, we are actively considering the risks and opportunities associated with the deployment of AI at our portfolio companies. This is an area we intend to focus on for engagement, and there are several key indicators of thoughtful implementation and good corporate governance we will look for:
- Responsible AI policy: It is essential for any organisation employing AI to establish a responsible AI policy that clearly articulates its approach to the technology and prioritises minimising potential risks. This policy should be informed by relevant country- and region-based AI policies, such as the Australian Government’s AI Ethics Principles.8
- Employee rights: In our engagements with companies, we assess how they address employee rights and involve staff in the implementation of AI within their roles. Limited engagement is a sign of poor management of this technology and could be a leading indicator for the risks outlined above.
- Employment opportunities: if companies are considering workforce reductions as a result of these technologies, they should be able to articulate how this will be achieved and whether support will be provided to impacted employees. This may include outplacement, training and development or redeployment within the organisation. Natural attrition could present an opportunity to deploy AI efficiencies with little workforce impact; however, this requires thoughtful forward planning. Based on our experience, organisations typically possess a clear understanding of which areas may be affected; therefore, it is essential that they initiate preparations for displaced workers at an early stage.
- Engagement with unions: engagement with unions is critical, especially in industries where union activity has typically been uncommon. Companies should be able to demonstrate to investors how they maintain an open dialogue and working relationship with unions to minimise the union-related risks we outline above.
- Human rights and diversity: an important consideration for companies is the impact of AI deployment on diversity. As workforce reductions occur, specific groups may be disproportionately impacted, and measures should be considered to address this. Organisations are advised to review and update their human rights and diversity policies in response to these developments.
- Measurement of success: Companies should have quantitative and qualitative ways to measure the progress of AI implementation and share stakeholder feedback with the board and investors. This enables stakeholders to clearly identify areas where opportunities for AI implementation and operational efficiencies may exist within a specific company.
Conclusion
We are in the early stages of this significant transformation and the considerations we make for portfolio companies are likely to evolve as it becomes clearer how companies are deploying AI. This paper provides a useful guide for companies regarding investor expectations and serves as a foundation for how we as investors should be considering the risks and opportunities associated with the deployment of AI use cases.
1 World Benchmarking Alliance (2025) ‘Assessing the ‘just’ in corporate transition plans: framework and guidance’. Accessed 4 September 2025.
2 IGCC (2024) ‘Investor Expectations for the Just Transition’. Accessed 24 September 2025.
3 McGuire, A. (2025) ‘Canva, Atlassian employees flock to unions amid AI job fears. Australian Financial Review’, 23 May 2025. Accessed 4 September 2025.
4 Australian Council of Trade Unions (ACTU) (2025) ‘Unions seek enforceable agreements on the use of AI’. Media Release, 29 July 2025. Accessed 4 September 2025.
5 CBA Annual Reports, Results Presentations and website.
6 CBA Annual Reports, Results Presentations and website.
7 Australian Prudential Regulation Authority (2025). ‘APRA Corporate Plan 2025–2026’. Accessed 16 September 2025.
8 Department of Industry, Science and Resources (2024) ‘Australia’s AI Ethics Principles’. Accessed 4 September 2025.
Disclaimer. This document was prepared and issued by Maple-Brown Abbott Ltd ABN 73 001 208 564, AFSL No. 237296 (“MBA”). This information is general information only and it does not have regard to any person’s investment objectives, financial situation or needs. Before making any investment decision, you should seek independent investment, legal, tax, accounting or other professional advice as appropriate, and obtain the relevant Product Disclosure Statement and Target Market Determination for any financial product you are considering. This information does not constitute an offer or solicitation by anyone in any jurisdiction. Past performance is not a reliable indicator of future performance. Any views expressed on individual stocks or other investments, or any forecasts or estimates, are point in time views and may be based on certain assumptions and qualifications not set out in part or in full in this information. The views and opinions contained herein are those of the authors as at the date of publication and are subject to change due to market and other conditions. Such views and opinions may not necessarily represent those expressed or reflected in other MBA communications, strategies or funds. Any companies, securities and or/case studies referenced or discussed are used only for illustrative purposes. The information provided is not a recommendation for any particular security or strategy, and is not an indication of the trading intent of MBA. Information derived from sources is believed to be accurate, however such information has not been independently verified and may be subject to assumptions and qualifications compiled by the relevant source and this information does not purport to provide a complete description of all or any such assumptions and qualifications. 
To the extent permitted by law, neither MBA, nor any of its related parties, directors or employees, make any representation or warranty as to the accuracy, completeness, reasonableness or reliability of the information contained herein, or accept liability or responsibility for any losses, whether direct, indirect or consequential, relating to, or arising from, the use or reliance on any part of this information. This information is current at 10 October 2025 and is subject to change at any time without notice.
© 2025 Maple-Brown Abbott Limited.
Governance first: Navigating the human capital risks of AI deployment