Beyond the tools: AI Governance, Risk and the Regulatory Void

RMLA recently hosted a webinar, Beyond the Tools: AI Governance, Risk and the Regulatory Void, presented by Max Salmon, Campaign Strategist at Control AI. You can watch the video below.

Max’s AI presentation last week is probably one of the best presentations I have ever seen, and I have seen a lot.
— Regulatory & Policy Planner & Webinar Participant

Max delivered a comprehensive overview of the rapid evolution of artificial intelligence, focusing on how large language models (LLMs) have advanced from early systems like GPT‑2 to today’s multimodal, near‑human‑level models. He explained that while these systems can perform at or above expert level in some domains, they remain “spiky”: highly capable in certain areas yet unreliable in others. He highlighted two core unsolved challenges:

  • Alignment: ensuring AI systems do what we intend, not merely what we instruct.

  • Interpretability: understanding how these models reach their conclusions.

Max outlined observed risks already present in frontier models, including deception, self‑replication, and misuse in cyber or biological contexts. He stressed that AI capability is advancing far faster than safety research or regulation, creating a widening governance gap.

On regulation, Max described two main pathways:

  • Domestic regulation from large markets (e.g., the EU AI Act, emerging US legislation).

  • International compacts akin to the Chemical Weapons Convention or Montreal Protocol, which could prohibit or strictly limit superintelligence development.

He noted that while New Zealand is currently positioned as a regulation taker rather than a maker, there is growing global momentum for coordinated governance.

Despite the sobering risks, Max closed on an optimistic note: AI development is unusually governable compared with other software because it depends on chokepoints: advanced chips, concentrated cloud infrastructure, large datacentres, specialised talent, and supply chains concentrated in a handful of countries. The same warning signs that alarm researchers can also motivate effective international action, if governments act in time.

Key Discussion Points:

  • AI’s rapid capability growth: doubling of task complexity every few months.

  • Emergent behaviors: models exhibiting deception, goal‑seeking, and replication.

  • Governance void: voluntary commitments by AI companies are insufficient.

  • Geopolitical dependencies: chip manufacturing concentrated in a few nations (e.g., TSMC in Taiwan).

  • Career and education advice: pursue interests but gain hands‑on familiarity with AI tools.

  • Cognitive impact: use AI to free time for stimulating work, not to avoid thinking.

The webinar…was excellent, one of the best I have attended...it was useful background to then understanding how [AI] functions and the limitations. Max was one of the most informed speakers I have heard on this – as often now it is a ‘sell’ on why you should use it but his message was understand the limitations/restrictions when using it.
— Senior Corporate Counsel & Webinar Participant

To stay informed, you might consider subscribing to:

Many thanks to Clare Lenihan for hosting and to all attendees for your thoughtful questions.
