Policy & Standards Lead
"Evidence-based policy. Not advocacy. Not speculation. Evidence."
I work at the boundary between empirical AI safety research and the regulatory instruments that govern what organisations can actually deploy. Regulators want certainty, researchers have probabilistic findings, and policymakers need language that holds up in a formal submission. Getting all three to converge without distorting any of them is what I do.
Key Contributions
- Authored a 15-document coordinated policy package spanning Safe Work Australia, EU AI Act Article 9, NIST AISIC, OECD, and Standards Australia IT-043 -- all backed by the same canonical evidence base
- Built the EU AI Act compliance readiness tool that flagged 8 RED and 2 AMBER gaps against Regulation (EU) 2024/1689 for embodied AI systems, ahead of the August 2026 enforcement deadline
- Drafted F1-STD-001 v0.1 -- a 728-line safety evaluation standard with SHALL requirements mapped to six regulatory frameworks (EU AI Act, NIST AI RMF, VAISS, NSW WHS, ISO 42001, ISO/TS 15066)
- Outlined a law review article synthesising 47 legal memos into a unified analysis of AI safety liability under Australian and EU law
- Integrated empirical findings into CCS 2026 submission language, ensuring all policy claims trace to reproducible queries against a versioned schema
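The gap-flagging and evidence-traceability pattern behind the bullets above can be sketched minimally. Everything here is an illustrative assumption, not the actual tool: the `Requirement` model, the RED/AMBER/GREEN thresholds, and the idea of counting how many evidence queries resolve are all hypothetical stand-ins for whatever the real readiness tool does.

```python
from dataclasses import dataclass, field

# Illustrative status levels for a compliance gap analysis:
# RED = no supporting evidence, AMBER = partial, GREEN = requirement met.
RED, AMBER, GREEN = "RED", "AMBER", "GREEN"

@dataclass
class Requirement:
    """A single regulatory requirement tied to a versioned evidence base.

    `evidence_ids` are hypothetical query identifiers resolved against a
    versioned schema, so every policy claim traces to a reproducible query.
    """
    article: str                       # e.g. a citation such as "Art. 9(2)"
    evidence_ids: list = field(default_factory=list)
    satisfied: int = 0                 # how many evidence queries resolved

def flag(req: Requirement) -> str:
    """Classify one requirement by how much of its evidence base resolves."""
    if req.satisfied == 0:
        return RED
    if req.satisfied < len(req.evidence_ids):
        return AMBER
    return GREEN

def readiness(reqs: list) -> dict:
    """Summarise gap counts by status, as in an '8 RED / 2 AMBER' report."""
    counts = {RED: 0, AMBER: 0, GREEN: 0}
    for r in reqs:
        counts[flag(r)] += 1
    return counts
```

A run over three toy requirements, `readiness([Requirement("Art. 9(2)", ["q1", "q2"], 0), Requirement("Art. 10(3)", ["q3"], 1), Requirement("Art. 13(1)", ["q4", "q5"], 1)])`, would report one RED, one AMBER, and one GREEN gap. Keeping the evidence IDs on each requirement is what makes the summary auditable: a regulator can re-run the exact queries behind any flag.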