AI Regulation – An actuarial perspective
All over the world right now, there is debate about the regulation of AI.
This began gathering steam with the EU’s proposal for an AI Act in 2021 but has accelerated in recent months, likely spurred by the release of (and incredible excitement over!) ChatGPT in late 2022 and a raft of other AI tools since.
While there is understandable enthusiasm, there are also concerns being expressed; in response, many countries, regions and authorities have published proposals or have begun consultation on AI regulation.
In Australia, a Discussion Paper on AI regulation[1] was recently released for comment, and I was lucky enough to lead the Institute’s response. This article summarises the Discussion Paper and the Institute’s submission to it.
I’d like to give special thanks to the large group of dedicated volunteers who helped with this – whether you contributed ideas in discussions, helped with drafting or acted as a reviewer. You’re all superstars!
Themes From the Discussion Paper
The paper is reasonably high level, and I expect this is designed to encourage a breadth of perspectives in response. It notes that AI brings both opportunities and challenges, though far more attention is given to managing the challenges than to promoting the opportunities.
While this is perhaps to be expected in a regulatory proposal, it could also reflect the recent emphasis on risks and downsides in the Australian discourse on AI. Fear has certainly dominated the headlines.
With that emphasis on managing downsides, the paper turns its attention to domestic developments. Helpfully, it highlights a range of existing general-purpose and sector-specific regulations which already apply to AI systems.
While there are certainly issues to resolve, not least the lack of guidance in many cases, we aren’t starting from nothing. Contrary to some popular views, AI-land is not an unregulated Wild West where anything goes.
The paper also notes a range of initiatives already occurring to promote good practices at both State and Federal levels. Worth keeping an eye on is the work of the National AI Centre, which the paper tells us has recently received more funding to extend its role in promoting responsible AI.
The paper then turns to international developments, where we can see some common themes but also some divergence in direction. The EU’s approach of a broad regulation covering ‘AI’ is one option that has attracted a lot of attention, but alternatives are also apparent.
The UK, for example, is promoting a model of a centralised expert agency playing something of a support and monitoring function to existing primary regulators. The US seems to be taking a multi-pronged approach in various areas at both State and Federal levels, while China is said to be promoting different sets of rules for different forms of AI systems. Initiatives from a host of other countries are also discussed – it quickly becomes apparent that global consensus on the appropriate regulatory model for AI does not yet exist.
The paper then naturally turns to the question of ‘what to do’. A range of potential initiatives are summarised, including examples where they exist. It was pleasing to see the recent joint work of the Institute and the Australian Human Rights Commission listed as an example of ‘regulator collaboration and engagement’.[2]
Interestingly, despite the broad analysis of approaches taken around the world, the Discussion Paper makes a fairly strong recommendation of a risk-tiering model, blending ideas from the EU and Canada. Many of the questions posed in the paper relate to this proposal, which is described as a ‘risk-based approach’ but about which we have considerable reservations, as noted below.
Our Response
Our response to this consultation aims to offer constructive critique: we identify flaws in some of the proposals while offering alternatives that we believe are more likely to be effective in managing the risks of AI systems.
This perspective draws on our actuarial skillset – blending experience in risk management with the practical realities of data and AI work that many of us perform day to day.
I summarise our key recommendations below; for more detail I encourage you to read our submission in full.
1. Regulation should primarily be outcome-focused rather than technology-focused, to help ensure it endures.
This is a point already made in a previous Institute submission on AI regulation.[3] Regulation centred on technology – particularly if it aims to protect against poor consumer outcomes – carries with it a range of problems. Notably, it will lead to complex overlaps and inconsistencies with existing sector-based regulation and, being reliant on a particular definition of a new and emerging technology, it will inevitably feature some undesirable loopholes which are likely to grow larger over time. Instead, wherever possible, we should regulate outcomes rather than particular means to those outcomes.
2. Risk-based approaches to AI regulation should:
- be based on a well-defined taxonomy of risks that AI systems may introduce or exacerbate;
- incorporate a well-defined menu of risk-management options that could be imposed by regulation;
- ensure the costs of risk-based regulatory interventions are justified by the risk reduction they achieve, without obvious gaps or overreach;
- carefully target risk-management interventions to the risks identified in each situation considered, rather than bluntly applying the same interventions across a broad, vaguely defined risk category as proposed in the Discussion Paper.
This is the core of our submission. We have significant concerns with the proposed risk-tiering model, but we still think a risk-based approach to AI regulation can work. A truly risk-based approach must be grounded in thorough risk analysis and treatment, carefully matching interventions to the risks identified, with an eye on the costs and effectiveness of interventions. By contrast, the Discussion Paper proposes a very blunt, broad, vaguely defined risk-classification and intervention scheme, which will inevitably result in insufficient intervention in some situations and wasted effort in others. We can and must do better; our proposal outlines one mechanism to do this, and the toy sketch below illustrates the distinction.
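To make the distinction concrete, here is a minimal sketch in Python, purely for illustration: it contrasts a blanket tiering rule, where any ‘high risk’ label triggers every intervention, with matching interventions to the specific risks identified. Every risk category, intervention name and cost figure here is a hypothetical placeholder of my own, not content from our submission or the Discussion Paper.

```python
# Illustrative toy only. All risks, interventions and costs are
# hypothetical placeholders, not taken from the Institute's submission
# or the Discussion Paper.

# A hypothetical taxonomy of risks an AI system may introduce or exacerbate.
RISK_TAXONOMY = {"unfair_discrimination", "privacy_breach", "unsafe_automation"}

# A hypothetical menu of risk-management options, each with a rough
# compliance cost and the risks it actually mitigates.
INTERVENTION_MENU = {
    "bias_audit":        {"cost": 3, "mitigates": {"unfair_discrimination"}},
    "data_minimisation": {"cost": 2, "mitigates": {"privacy_breach"}},
    "human_oversight":   {"cost": 4, "mitigates": {"unsafe_automation"}},
}

def blunt_tiering(identified_risks):
    """Tiering: any identified risk labels the system 'high risk' and
    triggers the full menu, regardless of which risks are present."""
    return sorted(INTERVENTION_MENU) if identified_risks else []

def targeted_matching(identified_risks):
    """Risk-based matching: impose only interventions whose mitigated
    risks overlap the risks identified, so cost tracks risk reduction."""
    return sorted(name for name, option in INTERVENTION_MENU.items()
                  if option["mitigates"] & identified_risks)

# A system posing only a privacy risk: tiering imposes all three options
# (total cost 9); matching imposes only data minimisation (cost 2).
risks = {"privacy_breach"}
print(blunt_tiering(risks))      # ['bias_audit', 'data_minimisation', 'human_oversight']
print(targeted_matching(risks))  # ['data_minimisation']
```

The point of the toy is simply that a blanket trigger spends compliance effort on risks a system does not pose, while a taxonomy plus a menu of options lets the cost of each intervention be weighed against the risk it actually reduces.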
3. Where regulation already exists, producing guidance on it should be prioritised over creating new regulation.
The Discussion Paper notes that a range of general-purpose and sector-specific laws already apply to AI systems. In many cases, these regulations may already be capable of effectively managing the risks posed by AI. However, practitioners often lack guidance on how they apply in an AI context. Whether or not we construct new AI regulations, producing this guidance for practitioners is clearly important. It should be addressed before we contemplate further legislation in areas where rules already exist; our aim should be to use what we have and then sensibly reform if needed, not to presume we are starting from scratch.
4. A centralised expert body should be created and appropriately funded to assist primary regulators in considering AI governance, regulation and guidance.
There is clearly a need for expertise in all this, and AI talent – particularly AI regulatory talent – is scarce. We propose a centralised expert body with three core functions:
- Conducting a risk analysis to support our risk-based model described above.
- Providing expertise to primary regulators, assisting with reform or guidance.
- Conducting ‘horizon scanning’, to allow us to be proactive about the risks of AI in a rapidly evolving world.
Conclusion
It is a very positive step to see Australia consulting widely on AI regulation. The range of perspectives that government is likely to receive in response to this Discussion Paper can only help Australia in taking a well-considered approach that reflects the diverse views of the community.
As risk experts and AI practitioners, the actuaries involved in this response believe that a risk-based approach to AI regulation can work well, but the approach needs to depart from the one put forward in the Discussion Paper. Australia should aim to lead, not follow, ensuring its model for AI regulation is appropriate for Australia rather than merely copying approaches from different overseas contexts. We hope that our submission provides useful input to the government as it contemplates the reforms that AI surely requires. We look forward to continuing the conversation.
References
[1] https://consult.industry.gov.au/supporting-responsible-ai
[2] For more information, see https://actuaries.logicaldoc.cloud/download-ticket?ticketId=36aea01e-e5e5-4b08-9016-640051021053
[3] For example, see https://www.actuaries.asn.au/Library/Submissions/2022/Technology.pdf