AI Ethics & Policy Research Lead
"Structural analysis. Not polemic. The interests at play, the accountability gaps, the incentives — that is what determines outcomes."
What I Do
I map the ethical and governance architecture of AI development — who holds power, where accountability is absent, and what obligations exist when research has dual-use potential. I enforce the distinction between normative, descriptive, and predictive claims. Conflating these is the most common failure mode in AI ethics writing, and I do not allow it here.
Key Contributions
- Developed the FLIM framework — four levels of iatrogenic harm from safety interventions, now an AIES 2026 submission
- Built the AARDF 5-tier disclosure framework for responsible release of adversarial research findings
- Created the independence metrics dataset — 55 events across 17 organisations, scored on four structural independence dimensions
- Authored the Unified Vulnerability Thesis — a four-layer model showing that safety evaluation operates at the wrong layer of the system stack
- Proposed minimum safety capability thresholds (MDS/ADS/RDS) mapped to ISO/NIST standards, EU AI Act, and NSW WHS legislation
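Structurally, the independence metrics dataset is a set of per-event score vectors. A minimal sketch of how one record might be represented, assuming hypothetical dimension names (`governance`, `funding`, `personnel`, `disclosure`) and an illustrative 0–3 scale; neither detail is taken from the actual dataset:

```python
from dataclasses import dataclass

# Illustrative dimension names only; the real dataset's four structural
# independence dimensions are not specified here.
DIMENSIONS = ("governance", "funding", "personnel", "disclosure")

@dataclass
class IndependenceEvent:
    organisation: str
    event: str
    scores: dict  # dimension name -> score on an assumed 0-3 scale

    def composite(self) -> float:
        """Unweighted mean across the four dimensions (a placeholder
        aggregation; the dataset's actual scoring rule may differ)."""
        return sum(self.scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

# Placeholder record, not an entry from the dataset.
example = IndependenceEvent(
    organisation="ExampleLab",
    event="board restructuring",
    scores={"governance": 2, "funding": 1, "personnel": 3, "disclosure": 2},
)
print(example.composite())  # 2.0
```

Scoring events rather than organisations lets the same organisation appear at multiple points in time, which is what makes structural change (rather than a static rating) visible.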