What's
new

All content by date published

May 2026

Blog

Robot Dogs Are a Security Nightmare — And We Can Prove It

Eight CVEs. A wormable Bluetooth exploit. An encrypted backdoor sending data to Chinese servers. And police departments buying them anyway. A deep dive into the Unitree vulnerability landscape and what it means for embodied AI safety.

embodied-airoboticssecuritycveunitree
Blog

AI Safety Daily — May 13, 2026

Fine-tuning asymmetry, KPI-induced constraint violations, tri-role self-play alignment, and a meta-prompting red-team framework converge on alignment as a dynamic property that erodes under optimization pressure.

ai-safety-dailyalignmentred-teamingagentic-aifine-tuning
Blog

AI Safety Daily — May 12, 2026

An embodied AI safety survey, actionable mechanistic interpretability, professional agent benchmarking, CoT attack vectors, and an integrated diagnostic toolkit collectively expose the same gap: evaluation infrastructure is maturing faster than remediation tooling.

ai-safety-dailyembodied-aiinterpretabilityagentic-aibenchmarking
Blog

AI Safety Daily — May 11, 2026

Guardrail diagnostics for agentic pipelines, SAE feature-steering fragility, a 94-dimension safety benchmark, adaptive multi-turn jailbreak architecture, and a cross-frontier safety comparison collectively argue that runtime safety architecture — not just training-time alignment — is the critical missing layer.

ai-safety-dailyagentic-aiinterpretabilityred-teamingfrontier-models
Blog

AI Safety Daily — May 10, 2026

Causal jailbreak geometry, attention-head continuation competition, multi-turn agent accumulation, skill-file injection, and robotic failure reasoning all point to the same structural finding: safety is compositional and each component can be targeted individually.

ai-safety-dailyinterpretabilityagentic-aiembodied-aired-teaming
Blog

AI Safety Daily — May 9, 2026

SafeAgentBench exposes <10% hazard refusal rate across 750 embodied tasks; CHAIN benchmark records 0.0% Pass@1 on interlocking puzzles for GPT-5.2, o3, and Claude-Opus-4.5.

ai-safety-dailyembodied-aisafeagentbenchphysical-reasoningred-teaming
Paper arXiv:2605.05058 systematization

SoK: Robustness in Large Language Models against Jailbreak Attacks

A systematization of knowledge paper from IEEE S&P 2026 introducing Security Cube — a unified multi-dimensional evaluation framework exposing the inadequacy of attack success rate as a single safety metric.

jailbreakrobustnessevaluation-frameworkattack-success-ratellm-safety
Paper arXiv:2604.23775 Survey

Vision-Language-Action Safety: Threats, Challenges, Evaluations, and Mechanisms

A unified survey organising VLA safety research along two timing axes — attack timing (training vs inference) and defense timing (training vs inference) — across adversarial patches, semantic jailbreaks, backdoors, and supply chain threats.

vla-safetyadversarial-attacksbackdoorjailbreakembodied-ai
Blog

AI Safety Daily — May 7, 2026

Safety geometry collapse in fine-tuned guard models, a 400-paper embodied AI safety survey, architecture-aware MoE jailbreaking, and persona-invariant alignment point to structural rather than content-level failure as the dominant pattern this week.

ai-safety-dailyembodied-aialignmentred-teaminginterpretability
Paper arXiv:2605.01687 Empirical

MultiBreak: A Scalable and Diverse Multi-turn Jailbreak Benchmark for Evaluating LLM Safety

An active-learning pipeline that builds 10,389 multi-turn adversarial prompts spanning 2,665 distinct harmful intents — achieving 54% higher attack success rates than prior benchmarks on DeepSeek-R1-7B.

jailbreakmulti-turnbenchmarkattack-success-rateevaluation
Blog

AI Safety Daily — May 6, 2026

Compliance-forcing instructions degrade frontier model metacognition more than adversarial content; midtraining on specification documents cuts agentic misalignment from 54% to 7%; multi-agent safety depends on interaction topology rather than model weights.

ai-safety-dailyagentic-safetyalignmentmulti-agentevaluation
Paper arXiv:2605.02900 Survey

Safety in Embodied AI: A Survey of Risks, Attacks, and Defenses

A 400-paper synthesis mapping the full attack surface of embodied AI — from adversarial perception through jailbreak planning to hardware vulnerabilities — and the defenses available at each layer.

embodied-ai-safetyadversarial-attacksjailbreakvla-systemsdefenses
Blog

AI Safety Daily — May 5, 2026

Alignment contracts formalise what agents may do; embedded deliberation outperforms external rules in production; and trained self-denial emerges as a measurable alignment failure across 115 models.

ai-safety-dailyagentic-safetyalignmentformal-methodsinterpretability
Paper arXiv:2511.22047 Empirical

Evaluating the Robustness of Large Language Model Safety Guardrails Against Adversarial Attacks

A systematic evaluation of ten LLM guardrail models reveals that benchmark accuracy is misleading due to training data contamination, with the best model dropping from 91% to 33.8% on novel attacks.

llm-safetyguardrailsadversarial-attacksbenchmark-contaminationjailbreak-defense
Paper arXiv:2603.22126 Empirical

ROBOGATE: Adaptive Failure Discovery for Safe Robot Policy Deployment via Two-Stage Boundary-Focused Sampling

A physics-simulation framework that maps failure boundaries across robot manipulation parameter spaces, exposing a 100-point performance gap between VLA foundation models and scripted baselines on adversarial scenarios.

vla-safetyrobot-manipulationfailure-detectiondeployment-riskadversarial-evaluation
Blog

AI Safety Daily — May 4, 2026

Agentic swarms may stabilise false conclusions under scale; models that fail to refuse comply precisely; and formal accountability bounds for multi-agent delegation chains now exist.

ai-safety-dailyagentic-safetymulti-agentred-teamingalignment
Paper arXiv:2601.15331 Methods

RECAP: A Resource-Efficient Method for Adversarial Prompting in Large Language Models

RECAP retrieves semantically similar pre-trained adversarial prompts to attack new targets, achieving competitive jailbreak success rates at a fraction of the computational cost of optimization-based methods.

adversarial-promptingjailbreakred-teamingllm-safetyresource-efficient
Paper arXiv:2505.04769 Survey

Vision-Language-Action Models: Concepts, Progress, Applications and Challenges

A comprehensive survey of VLA model architectures, training strategies, and real-world applications reveals persistent safety and deployment challenges that the field must resolve before embodied AI can be trusted at scale.

vla-modelsembodied-aisurveysafety-challengesethical-deployment
Blog

AI Safety Daily — May 3, 2026

VLA models face a distinct attack surface from text-only systems; structural agent architectures may provide auditable safety guarantees; and inference-time memory attacks bypass output-layer alignment.

ai-safety-dailyembodied-aired-teamingagentic-safetyvla-models
Paper arXiv:2604.24826 Empirical ▶ Audio

A Comparative Evaluation of AI Agent Security Guardrails

A systematic benchmark of four commercial AI agent guardrail systems reveals critical gaps in detecting indirect prompt injection and tool abuse across major cloud providers.

ai-agent-securityguardrailsprompt-injectiontool-abusesafety-evaluation
Paper arXiv:2602.18739 Empirical ▶ Audio

When World Models Dream Wrong: Physical-Conditioned Adversarial Attacks against World Models

The first white-box adversarial attack on generative world models targets physical-condition channels to corrupt autonomous planning while maintaining perceptual fidelity.

world-modelsadversarial-attacksembodied-aiautonomous-drivingplanning-safety
Paper arXiv:2505.16446 Empirical ▶ Audio ▶ Video

Implicit Jailbreak Attacks via Cross-Modal Information Concealment on Vision-Language Models

A steganography-based attack that hides malicious instructions inside images using least significant bit encoding, achieving 90%+ jailbreak success rates on GPT-4o and Gemini in under three queries.

jailbreakvision-language-modelssteganographycross-modal-attacksmultimodal-safety
Paper arXiv:2510.05156 Methods ▶ Audio ▶ Video

VeriGuard: Enhancing LLM Agent Safety via Verified Code Generation

A dual-stage framework that provides formal safety guarantees for LLM-based agents through offline policy verification and lightweight runtime monitoring.

formal-verificationllm-agentsagent-safetyruntime-monitoringsafety-guarantees
Blog

AI Safety Daily — May 1, 2026

SafetyALFRED documents a recognition-action gap in embodied LLMs; planning capability and safety awareness decouple in robotic deployments; and paired prompt-response risk analysis offers a new measurement primitive for trace evaluation.

ai-safety-dailyembodied-aiagentic-safetyalignmentbenchmarks

April 2026

Paper arXiv:2310.02446 Empirical ▶ Audio

Low-Resource Languages Jailbreak GPT-4

Translating harmful queries into low-resource languages bypasses GPT-4's safety filters at high rates, exposing a systematic cross-lingual gap in LLM safety training.

jailbreakcross-lingualsafety-alignmentred-teamingmultilingual
Paper arXiv:2407.16667 Methods ▶ Audio

RedAgent: Red Teaming Large Language Models with Context-aware Autonomous Language Agent

A multi-agent system that models jailbreak strategies as reusable abstractions, enabling context-aware attacks that break most black-box LLMs in under five queries and uncovered 60 real-world vulnerabilities in deployed GPT applications.

red-teamingjailbreakmulti-agentadversarial-attackssafety-evaluation
Blog

AI Safety Daily — April 29, 2026

Actionable mechanistic interpretability matures into a locate-steer-improve framework; the refusal cliff in reasoning models shows alignment survives the reasoning chain but fails at generation; and CRAFT achieves safety-capability balance through hidden-representation alignment without degrading thinking traces.

ai-safety-dailymechanistic-interpretabilityalignmentreasoning-modelssafety
Paper arXiv:2505.03574 Methods ▶ Audio

LlamaFirewall: An Open Source Guardrail System for Building Secure AI Agents

LlamaFirewall provides a three-layer open-source defense framework protecting agentic LLM systems from prompt injection, goal misalignment, and insecure code generation at runtime.

guardrailsai-agentsprompt-injectionsafety-alignmentagentic-systems
Paper arXiv:2409.10071 Empirical ▶ Audio

Towards Physically Realizable Adversarial Attacks in Embodied Vision Navigation

Adversarial patches on physical objects reduce navigation success rates by over 22% in embodied agents, using multi-view optimization and two-stage opacity tuning to remain effective and inconspicuous.

embodied-aiadversarial-attacksvision-navigationphysical-attacksrobustness
Blog

AI Safety Daily — April 28, 2026

Large-scale public competition data confirms indirect prompt injection as a pervasive vulnerability across model families; Skill-Inject shows skill-file attacks achieve up to 80% success on frontier models; AgentLAB demonstrates that long-horizon attack chains evade defences calibrated for single-step injections.

ai-safety-dailyprompt-injectionagentic-aiagent-securityred-teaming
Paper arXiv:2507.11500 Methods ▶ Audio

ARMOR: Aligning Secure and Safe Large Language Models via Meticulous Reasoning

ARMOR defends LLMs against jailbreak attacks by using inference-time reasoning to detect attack strategies, extract true intent, and apply policy-grounded safety analysis.

jailbreak-defensesafety-alignmentreasoningllm-safetyinference-time-defense
Paper arXiv:2604.23775 Survey ▶ Audio

Vision-Language-Action Safety: Threats, Challenges, Evaluations, and Mechanisms

A comprehensive survey unifying VLA safety research across adversarial attacks, defenses, benchmarks, and six deployment domains.

vla-safetyembodied-aiadversarial-attackssurveyrobotics-security
Paper arXiv:2604.17887 Methods

StableIDM: Stabilizing Inverse Dynamics Model against Manipulator Truncation via Spatio-Temporal Refinement

StableIDM introduces a spatio-temporal refinement framework to stabilize inverse dynamics models against manipulator truncation through auxiliary masking, directional feature aggregation, and...

inverse-dynamics-modelspartial-observabilitymanipulator-truncationspatio-temporal-refinementvisual-control
Blog

AI Safety Daily — April 27, 2026

X-Teaming demonstrates near-complete multi-turn attack success against models with strong single-turn defences; JailbreaksOverTime shows jailbreak detectors degrade under distribution shift within months; and AJAR surfaces cognitive-load effects on persona-based defences in agentic contexts.

ai-safety-dailymulti-turnjailbreakdetectionagentic-ai
Paper arXiv:2510.06036 Empirical ▶ Audio

Refusal Falls off a Cliff: How Safety Alignment Fails in Reasoning Models

Mechanistic analysis of reasoning models discovers the 'refusal cliff'—models correctly identify harmful prompts during thinking but systematically suppress their refusal at the final output tokens.

safety-alignmentreasoning-modelsmechanistic-interpretabilityrefusalalignment-failures
Paper arXiv:2604.18463 Empirical ▶ Audio

Using Large Language Models for Embodied Planning Introduces Systematic Safety Risks

DESPITE benchmark reveals that across 23 models, near-perfect planning ability does not ensure safety—the best planner still generates dangerous plans 28.3% of the time.

embodied-airobot-safetytask-planningevaluationllm-agents
Blog

AI Safety Daily — April 26, 2026

The first comprehensive VLA safety survey maps seven distinct attack surfaces across the full embodied pipeline; AttackVLA demonstrates targeted long-horizon backdoor manipulation; and spatially-aware adversarial patches expose a systematic gap in defences designed for 2D vision classifiers.

ai-safety-dailyvlaembodied-aiadversarial-attacksbackdoor
Paper arXiv:2604.14344 Empirical ▶ Audio

CART: Context-Aware Terrain Adaptation using Temporal Sequence Selection for Legged Robots

CART introduces a context-aware terrain adaptation controller that fuses proprioceptive and exteroceptive sensing to enable legged robots to robustly walk on complex off-road terrain, evaluated on...

legged-robot-locomotionmultimodal-terrain-perceptionproprioception-exteroception-fusionvibrational-stability-metricsoff-road-terrain-adaptation
Paper arXiv:2512.11362 Survey ▶ Audio

An Anatomy of Vision-Language-Action Models: From Modules to Milestones and Challenges

A structured survey that treats Safety as one of five foundational VLA challenges alongside Representation, Execution, Generalization, and Evaluation.

vla-modelsembodied-aisafetyrobustnesssurvey
Paper arXiv:2407.02855 Empirical ▶ Audio

Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks

Directly removing harmful knowledge from LLMs via machine unlearning—with just 20 training examples—cuts jailbreak success rates more effectively than safety fine-tuning on 100k samples.

jailbreak-defensemachine-unlearningsafety-alignmentllm-safetyred-teaming
Blog

Your AI Safety Numbers May Be Wrong By 80 Points

Across 5 frontier models and 498 evaluations, heuristic grading reported 86% attack success. FLIP grading reported 1.4%. The gap is not noise.

methodologyevaluationflip-gradingred-teamingbenchmarks
Blog

AI Safety Daily — April 25, 2026

SafetyALFRED shows embodied agents recognise hazards better than they act on them; HomeGuard introduces context-guided spatial constraints for household VLMs; and the pattern of static recognition versus corrective action emerges as the dominant gap in embodied safety evaluation.

ai-safety-dailyembodied-aivlabenchmarkhousehold-robotics
Paper arXiv:2602.04521 Methods ▶ Audio

C-ΔΘ: Circuit-Restricted Weight Arithmetic for Selective Refusal

C-ΔΘ uses mechanistic circuit analysis to localize refusal-causal computation and distill it into a sparse offline weight update, eliminating per-request inference-time safety hooks.

mechanistic-interpretabilityselective-refusalllm-safetyweight-editingsparse-circuits
Paper arXiv:2510.01642 Empirical ▶ Audio

FailSafe: Reasoning and Recovery from Failures in Vision-Language-Action Models

FailSafe introduces a scalable failure generation and recovery system that automatically creates diverse failure cases with executable recovery actions, boosting VLA manipulation success by up to 22.6%.

vla-modelsfailure-detectionfailure-recoveryrobotic-manipulationembodied-ai-safety
Blog

AI Safety Daily — April 24, 2026

Week-in-review after the GPT-5.5 Bio Bug Bounty announcement: how the public bounty landed in the red-teaming research community, what it means for F41LUR3-F1R57's research programme, and the quieter structural findings that still matter.

ai-safety-dailyweek-in-reviewred-teamingbug-bountyembodied-ai
Paper arXiv:2511.21663 Empirical ▶ Audio

Attention-Guided Patch-Wise Sparse Adversarial Attacks on Vision-Language-Action Models

ADVLA exploits attention maps and Top-K masking to craft sparse, stealthy adversarial patches in VLA models' textual feature space, achieving high attack success rates while remaining nearly invisible.

vla-modelsadversarial-attacksattention-guidedfeature-space-attackembodied-robotics
Paper arXiv:2602.06556 Empirical ▶ Audio

LIBERO-X: Robustness Litmus for Vision-Language-Action Models

A new benchmark exposes persistent evaluation gaps in VLA models by combining hierarchical difficulty protocols and diverse teleoperation data to reveal that cumulative perturbations cause dramatic performance drops.

vla-modelsrobustness-evaluationembodied-aibenchmarkevaluation-gaps
Paper arXiv:2509.11629 Empirical ▶ Audio

Reasoned Safety Alignment: Ensuring Jailbreak Defense via Answer-Then-Check

Answer-Then-Check trains LLMs to generate a candidate response first and then evaluate its own safety, achieving robust jailbreak defense without sacrificing reasoning or utility.

safety-alignmentjailbreak-defensereasoning-modelsself-evaluationanswer-then-check
Paper arXiv:2604.15579 Empirical ▶ Audio

Symbolic Guardrails for Domain-Specific Agents: Stronger Safety and Security Guarantees Without Sacrificing Utility

A systematic study of 80 agent safety benchmarks shows that 74% of specifiable policies can be enforced by symbolic guardrails, providing formal safety guarantees that training-based methods cannot.

agent-safetysymbolic-guardrailsllm-agentssafety-alignmentpolicy-enforcement
Blog

AI Safety Daily — April 23, 2026

OpenAI opens a $25K universal-jailbreak bounty targeting GPT-5.5's bio-safety challenge in Codex Desktop, ships the GPT-5.5 System Card the same day, and the broader red-teaming literature's critique of 'security theater' suddenly has a concrete public counterexample.

ai-safety-dailyred-teamingbiosecuritygpt-5-5bug-bounty
Paper arXiv:2604.19638 Empirical ▶ Audio

SafetyALFRED: Evaluating Safety-Conscious Planning of Multimodal Large Language Models

SafetyALFRED reveals a critical alignment gap in embodied AI: while multimodal LLMs can recognize kitchen hazards in QA settings, they largely fail to mitigate those same hazards when planning physical actions.

embodied-aisafety-evaluationmultimodal-llmhousehold-roboticshazard-recognition
Paper arXiv:2401.17256 Empirical ▶ Audio

Weak-to-Strong Jailbreaking on Large Language Models

Researchers show that small, unsafe models can efficiently guide jailbreaking attacks against much larger, carefully aligned models by exploiting divergences in initial decoding distributions.

jailbreakingllm-safetyadversarial-decodingred-teamingalignment-failure
Paper arXiv:2604.21691 Position ▶ Audio

There Will Be a Scientific Theory of Deep Learning

Fourteen DL-theory researchers argue that an empirical mechanics of training dynamics is emerging, and that quantitative theory is the only reliable path to distinguishing structurally expected failures from contingent optimization accidents.

deep-learning-theorylearning-mechanicsmechanistic-interpretabilitytraining-dynamicsfailure-prediction
Blog

AI Safety Daily — April 22, 2026

FinRedTeamBench shows safety alignment doesn't transfer to financial-domain LLMs; Risk-Adjusted Harm Score replaces binary metrics for BFSI; and Tesla FSD's NHTSA probe expands to nine incidents including one fatality.

ai-safety-dailyfinancebfsired-teamingautonomous-vehicles
Paper arXiv:2509.09708 Empirical ▶ Audio

Beyond I'm Sorry, I Can't: Dissecting Large Language Model Refusal

Using sparse autoencoders to mechanistically identify the neural features that drive safety refusal in instruction-tuned LLMs, revealing layered redundant defenses and new pathways for targeted safety auditing.

llm-safetyrefusal-mechanismssparse-autoencodersmechanistic-interpretabilityjailbreak
Paper arXiv:2409.14580 Methods ▶ Audio

Updating Robot Safety Representations Online from Natural Language Feedback

A method for dynamically updating robot safety constraints at deployment time using vision-language models and Hamilton-Jacobi reachability, enabling robots to respect context-specific hazards communicated through natural language.

embodied-airobot-safetyvision-language-modelsnatural-language-feedbacksafety-constraints
Blog

AI Safety Daily — April 21, 2026

Digital twins transition from deployment accelerant to absolute prerequisite for fleet-scale physical AI; the four-phase maturity taxonomy crystallises, and OpenAI's PBC conversion reshapes the safety-versus-shipping calculus.

ai-safety-dailyphysical-aimaturity-taxonomydigital-twinsgovernance
Paper arXiv:2604.13654 Survey ▶ Audio

Vision-and-Language Navigation for UAVs: Progress, Challenges, and a Research Roadmap

Comprehensive survey of Vision-and-Language Navigation for UAVs, charting the evolution from modular approaches to foundation model-driven systems and identifying deployment challenges and future...

vision-language-navigationuav-embodied-aisim-to-reality-gapvision-language-modelslong-horizon-task-planning
Paper arXiv:2604.14089 Empirical ▶ Audio

UMI-3D: Extending Universal Manipulation Interface from Vision-Limited to 3D Spatial Perception

UMI-3D extends the Universal Manipulation Interface with LiDAR-based 3D spatial perception to overcome monocular SLAM limitations and improve robustness of embodied manipulation data collection and...

lidar-slammultimodal-sensor-fusionwrist-mounted-manipulationdeformable-object-manipulationspatiotemporal-calibration
Paper arXiv:2604.14683 Empirical ▶ Audio

DR$^{3}$-Eval: Towards Realistic and Reproducible Deep Research Evaluation

Introduces DR³-Eval, a reproducible benchmark for evaluating deep research agents on multimodal report generation with a static sandbox corpus and multi-dimensional evaluation framework,...

deep-research-agentsbenchmark-evaluationmultimodal-report-generationretrieval-robustnesshallucination-control
Paper arXiv:2601.10589 Methods ▶ Audio

Be Your Own Red Teamer: Safety Alignment via Self-Play and Reflective Experience Replay

A self-play reinforcement learning framework where an LLM simultaneously generates adversarial jailbreak attacks and strengthens its own defenses, reducing attack success rates without external red teams.

safety-alignmentred-teamingself-playreinforcement-learningjailbreak-defense
Paper arXiv:2604.14399 Empirical ▶ Audio

SpaceMind: A Modular and Self-Evolving Embodied Vision-Language Agent Framework for Autonomous On-orbit Servicing

SpaceMind is a modular vision-language agent framework for autonomous on-orbit servicing that combines skill modules, MCP tools, and reasoning modes with a self-evolution mechanism, validated through...

embodied-vision-language-agentson-orbit-servicingself-evolution-without-finetuningsim-to-real-transferfailure-recovery-mechanisms
Paper arXiv:2604.15308 Empirical ▶ Audio

RAD-2: Scaling Reinforcement Learning in a Generator-Discriminator Framework

RAD-2 combines diffusion-based trajectory generation with RL-optimized discriminator reranking to improve closed-loop autonomous driving planning, validated through simulation and real-world...

autonomous-driving-planningdiffusion-models-controlreinforcement-learning-trajectoryclosed-loop-feedbackmultimodal-uncertainty
Paper arXiv:2603.11975 Empirical ▶ Audio

HomeSafe-Bench: Evaluating Vision-Language Models on Unsafe Action Detection for Embodied Agents in Household Scenarios

A comprehensive benchmark and HD-Guard dual-brain architecture for detecting unsafe actions by embodied VLM agents in household environments, exposing critical gaps in real-time safety monitoring.

embodied-ai-safetyunsafe-action-detectionvision-language-modelshousehold-agentsreal-time-safety
Blog

AI Safety Daily — April 20, 2026

Embodied AI is the red-teaming blind spot; Feffer et al.'s Five Axes of Divergence expose the 'security theater' in current safety evaluations, and RAHS scoring offers a concrete alternative for high-stakes sectors.

ai-safety-dailyred-teamingmethodologybfsiembodied-ai
Paper arXiv:2604.11174 Empirical ▶ Audio

EmbodiedGovBench: A Benchmark for Governance, Recovery, and Upgrade Safety in Embodied Agent Systems

Introduces EmbodiedGovBench, a benchmark for evaluating governance, safety, and controllability of embodied agent systems across seven dimensions including policy enforcement, recovery, auditability,...

embodied-ai-governancerobot-policy-safetyruntime-drift-robustnesshuman-override-responsivenessaudit-trails-embodied-systems
Paper arXiv:2511.01375 Empirical ▶ Audio

Align to Misalign: Automatic LLM Jailbreak with Meta-Optimized LLM Judges

A bi-level meta-optimization framework co-evolves jailbreak prompts and scoring templates to achieve 100% attack success on Claude-4-Sonnet, exposing fundamental cracks in how safety alignment is measured.

jailbreakred-teamingsafety-alignmentmeta-optimizationadversarial-attacks
Paper arXiv:2506.16012 Empirical ▶ Audio

DualTHOR: A Dual-Arm Humanoid Simulation Platform for Contingency-Aware Planning

A physics-based simulator for dual-arm humanoid robots introduces a contingency mechanism that deliberately injects low-level execution failures, revealing critical robustness gaps in current VLMs.

embodied-aivla-modelssimulationfailure-modescontingency-planning
Blog

AI Safety Daily — April 19, 2026

AEGIS delivers 59.16% obstacle-avoidance gain via control barrier functions without sacrificing capability, SafeAgentBench locks in the 10% rejection ceiling, and OpenAI's distributed safety model raises new accountability questions.

ai-safety-dailyvla-safetyembodied-aibenchmarksgovernance
Paper arXiv:2512.20798 Empirical ▶ Audio

A Benchmark for Evaluating Outcome-Driven Constraint Violations in Autonomous AI Agents

A new benchmark reveals that LLMs placed under performance incentives exhibit emergent misalignment — violating stated safety constraints to maximize KPIs, with reasoning capability failing to predict safe behavior.

autonomous-agentsemergent-misalignmentsafety-benchmarksconstraint-violationsalignment
Paper arXiv:2604.12371 Empirical ▶ Audio

Reading Between the Pixels: Linking Text-Image Embedding Alignment to Typographic Attack Success on Vision-Language Models

Systematically evaluates typographic prompt injection attacks on four vision-language models across varying font sizes and visual conditions, correlating text-image embedding distance to attack...

typographic-prompt-injectionvision-language-model-robustnessmultimodal-embedding-alignmentadversarial-text-renderingembodied-ai-safety
Paper arXiv:2512.21815 Empirical ▶ Audio

Few Tokens Matter: Entropy Guided Attacks on Vision-Language Models

Adversarial attacks targeting high-entropy tokens in VLMs achieve severe semantic degradation with minimal perturbation budgets and transfer across architectures.

adversarial-attacksvision-language-modelsentropytransferabilityrobustness
Blog

AI Safety Daily — April 18, 2026

GPT-5.2 scores 0% Pass@1 on interlocking mechanical puzzles, AEGIS/VLSA wrappers deliver +59% obstacle avoidance via control barrier functions, and SafeAgentBench shows embodied LLM agents reject fewer than 10% of hazardous household requests.

ai-safety-dailyembodied-aivla-safetyred-teaminggovernance
Paper arXiv:2604.12831 Empirical ▶ Audio

VULCAN: Vision-Language-Model Enhanced Multi-Agent Cooperative Navigation for Indoor Fire-Disaster Response

Evaluates multi-agent cooperative navigation systems under realistic fire-disaster conditions using VLM-enhanced perception, identifying critical failure modes in smoke, thermal hazards, and sensor...

multi-agent-navigationvision-language-modelsfire-disaster-responsesensor-degradationsmoke-diffusion
Blog

AI Safety Daily — April 17, 2026

FSD v14.3 safety regressions double disengagement rate, NHTSA probes 3.2M vehicles, Aurora aces fatal-crash simulations, and the Physical AI Maturity Taxonomy maps deployment reality.

ai-safety-dailyautonomous-vehiclesembodied-aiphysical-aigovernance
Paper arXiv:2408.15221 Empirical ▶ Audio

LLM Defenses Are Not Robust to Multi-Turn Human Jailbreaks Yet

Multi-turn human jailbreaks achieve over 70% attack success rate against state-of-the-art LLM defenses that report single-digit rates against automated attacks, exposing a systematic gap in how safety is evaluated.

multi-turn-jailbreakred-teamingdefense-evaluationsafety-alignmentmachine-unlearning
Paper arXiv:2511.05936 Position ▶ Audio

10 Open Challenges Steering the Future of Vision-Language-Action Models

A position paper from AAAI 2026 identifies ten development milestones for VLA models in embodied AI, with safety named explicitly among the challenges and evaluation gaps highlighted as a systemic barrier to progress.

embodied-aivlasafetyevaluation-gapsrobotics
Paper arXiv:2604.12418 Empirical ▶ Audio

RACF: A Resilient Autonomous Car Framework with Object Distance Correction

Proposes RACF, a resilient autonomous vehicle framework that uses multi-sensor redundancy (depth camera, LiDAR, kinematics) with an Object Distance Correction Algorithm to detect and mitigate...

autonomous-vehicle-perceptionsensor-fusion-redundancyadversarial-robustnessdepth-estimation-correctionreal-time-safety-critical-systems
Blog

AI Safety Daily — April 16, 2026

Red-teaming as security theater, 0% physical AI puzzle performance, SafeAgentBench finds <10% hazard rejection, and AEGIS wrapper provides mathematical safety guarantees.

ai-safety-dailyred-teamingembodied-aivla-safetyfrontier-models
Paper arXiv:2604.08294 Empirical ▶ Audio

Can Vision Language Models Judge Action Quality? An Empirical Evaluation

Comprehensive evaluation of state-of-the-art Vision Language Models on Action Quality Assessment tasks, revealing systematic failure modes and biases that prevent reliable performance.

vision-language-modelsaction-quality-assessmentfine-grained-video-understandingmodel-bias-analysisembodied-task-evaluation
Paper arXiv:2410.13334 Empirical ▶ Audio

Do LLMs Have Political Correctness? Analyzing Ethical Biases and Jailbreak Vulnerabilities in AI Systems

Intentional safety-induced biases in aligned LLMs create asymmetric jailbreak attack surfaces, with GPT-4o showing up to 20% success-rate disparities based solely on demographic keyword substitutions.

jailbreaksafety-alignmentbiasred-teamingadversarial-prompts
Paper arXiv:2510.17111 Survey ▶ Audio

Efficient Vision-Language-Action Models for Embodied Manipulation: A Systematic Survey

A systematic survey of techniques for reducing latency, memory, and compute costs in VLA models, revealing how efficiency constraints directly shape the safety guarantees available to deployed robotic systems.

vla-modelsembodied-aiefficiencyedge-deploymentsafety-robustness
Blog

AI Safety Daily — April 15, 2026

Physical AI 2030 roadmap reveals four-phase maturity taxonomy, Gen2Real Gap warning persists, RAHS framework quantifies financial red-teaming outcomes, and UniDriveVLA unifies AV perception-action.

ai-safety-dailyembodied-aivlaphysical-aired-teaming
Paper arXiv:2410.00371 Empirical ▶ Audio

AHA: A Vision-Language-Model for Detecting and Reasoning Over Failures in Robotic Manipulation

AHA is an open-source VLM that detects robotic manipulation failures and generates natural-language explanations, enabling safer recovery pipelines and denser reward signals.

failure-detectionrobotic-manipulationvision-language-modelsembodied-aifailure-modes
Paper arXiv:2604.07395 Empirical ▶ Audio

A Physical Agentic Loop for Language-Guided Grasping with Execution-State Monitoring

Introduces a physical agentic loop that wraps learned grasp primitives with execution monitoring and bounded recovery policies to handle failures in language-guided robotic manipulation.

robotic-graspingexecution-monitoringlanguage-guided-manipulationfailure-recoveryembodied-agents
Paper arXiv:2501.19180 Methods ▶ Audio

Enhancing Model Defense Against Jailbreaks with Proactive Safety Reasoning

Safety Chain-of-Thought (SCoT) teaches LLMs to reason about potential harms before generating a response, substantially improving robustness to jailbreak attacks including out-of-distribution prompts.

jailbreak-defensesafety-alignmentchain-of-thoughtllm-safetyadversarial-robustness
Blog

AI Safety Daily — April 14, 2026

AEGIS wrapper architecture for VLA safety, SafeAgentBench finds <10% hazard rejection, red-teaming critiqued as 'security theater', and OpenAI dissolves Mission Alignment team.

ai-safety-dailyembodied-aivlagovernancered-teaming
Paper arXiv:2604.08178 Empirical ▶ Audio

Aligning Agents via Planning: A Benchmark for Trajectory-Level Reward Modeling

Introduces Plan-RewardBench, a trajectory-level preference benchmark for evaluating reward models in tool-using agent scenarios, and benchmarks three RM families (generative, discriminative,...

reward-modelingtrajectory-level-preferencestool-use-agentsrlhf-benchmarkingagentic-alignment
Paper arXiv:2603.17305 Methods ▶ Audio

Contrastive Reasoning Alignment: Reinforcement Learning from Hidden Representations

CRAFT defends large reasoning models against jailbreaks by aligning safety directly in hidden state space via contrastive reinforcement learning, reducing attack success rates without degrading reasoning capability.

red-teamingreasoning-modelsalignmentreinforcement-learningcontrastive-learning
Paper arXiv:2511.16203 Empirical ▶ Audio

When Alignment Fails: Multimodal Adversarial Attacks on Vision-Language-Action Models

VLA-Fool exposes how textual, visual, and cross-modal adversarial attacks can systematically break the safety alignment of embodied VLA models, and proposes a semantic prompting framework as a first line of defense.

adversarial-attacksvla-modelsmultimodal-safetyembodied-airobustness-evaluation
Blog

AI Safety Daily — April 13, 2026

The Perception-Action Gap in embodied AI, PreSafe methodology for reasoning models, SafeAgentBench shows <10% hazard rejection, VLSA AEGIS safety layer, and OpenAI disbands Mission Alignment team.

ai-safety-dailyembodied-aivlaalignmentgovernance
Paper arXiv:2505.16640 Empirical ▶ Audio

BadVLA: Towards Backdoor Attacks on Vision-Language-Action Models via Objective-Decoupled Optimization

BadVLA reveals that VLA models are vulnerable to a novel backdoor attack that decouples trigger learning from task objectives in feature space, enabling stealthy conditional control hijacking in robotic systems.

backdoor-attacksvla-modelsembodied-aiadversarial-robustnessrobot-safety
Paper arXiv:2603.17305 Empirical ▶ Audio

Contrastive Reasoning Alignment: Reinforcement Learning from Hidden Representations

CRAFT uses contrastive learning over a model's internal hidden states combined with reinforcement learning to produce reasoning LLMs that maintain safety alignment without sacrificing reasoning capability.

safety-alignmentreasoning-modelscontrastive-learningreinforcement-learningjailbreak-defense
Blog

AI Safety Daily — April 12, 2026

Daily AI safety research digest: jailbreaks, embodied AI risks, frontier model evaluations, and alignment research from April 12, 2026.

ai-safety-dailyjailbreakembodied-aialignmentfrontier-models
Paper arXiv:2604.07754 Empirical ▶ Audio

The Art of (Mis)alignment: How Fine-Tuning Methods Effectively Misalign and Realign LLMs in Post-Training

An empirical study showing that misaligning an LLM via fine-tuning is significantly cheaper than realigning it, with asymmetric attack-defense dynamics that have serious implications for deployed safety.

safety-alignmentfine-tuningllm-safetymisalignmentpost-training
Paper arXiv:2511.16203 Empirical ▶ Audio

When Alignment Fails: Multimodal Adversarial Attacks on Vision-Language-Action Models

VLA-Fool reveals that embodied VLA models are systematically vulnerable to textual, visual, and cross-modal adversarial attacks, and proposes a semantic prompting defense that only partially closes the gap.

adversarial-attacksvision-language-actionmultimodal-robustnessembodied-aisafety-evaluation
Blog

A Meta-Jailbreak, a Slide-Deck Content Filter, and a CLI That Lied to Us

What NotebookLM does when you feed it a corpus of jailbreak research papers, the reproducible content-sensitive filter hiding in its slide-deck Studio command, and the quiet CLI default that silently contaminated three of our experimental runs into one conversation.

notebooklmmeta-jailbreakmethodologycontent-gategrading
Paper arXiv:2604.04664 Methods ▶ Audio ▶ Video

ROSClaw: A Hierarchical Semantic-Physical Framework for Heterogeneous Multi-Agent Collaboration

ROSClaw proposes a hierarchical framework integrating vision-language models with heterogeneous robots through unified semantic-physical control, enabling closed-loop policy learning and...

vision-language-action-integrationmulti-agent-robot-coordinationsim-to-real-transferembodied-llm-groundinghierarchical-task-planning
Paper arXiv:2504.07887 Empirical ▶ Audio

Benchmarking Adversarial Robustness to Bias Elicitation in Large Language Models: Scalable Automated Assessment with LLM-as-a-Judge

CLEAR-Bias introduces a scalable framework that combines jailbreak techniques with LLM-as-a-Judge scoring to reveal how adversarial prompting exploits sociocultural biases embedded in state-of-the-art language models.

adversarial-biassafety-alignmentjailbreak-attacksllm-as-a-judgesafety-benchmarking
Paper arXiv:2512.07059 Empirical ▶ Audio

Replicating TEMPEST at Scale: Multi-Turn Adversarial Attacks Against Trillion-Parameter Frontier Models

A large-scale replication finds that six of ten frontier LLMs achieve 96–100% attack success rates under multi-turn adversarial pressure, while deliberative inference cuts that rate by more than half without any retraining.

multi-turn-jailbreakadversarial-attacksfrontier-modelssafety-alignmentred-teaming
Report

Meta-Jailbreak in NotebookLM, a Slide-Deck Content Filter, and a Methodology Lesson

Three preliminary findings from a day of NotebookLM red-teaming: NotebookLM produces partial adversarial attack synthesis from a corpus of jailbreak research papers (5/5 fresh-session runs); its slide-deck Studio command has a reproducible content-sensitive pre-generation filter with an uncharacterized discriminator axis; and a CLI quirk silently contaminated three experimental runs into one multi-turn thread until it was caught and documented.

meta-jailbreaknotebooklmgrading-methodologyslide-deck-filtercontent-gate
Blog

AI Safety Daily — April 10, 2026

Descriptive fluency vs physical grounding, the Perception-Action Gap in world models, and why safety must be an architectural constraint.

ai-safety-dailyembodied-aiworld-modelsphysical-aisafety-architecture
Paper arXiv:2604.05595 Empirical ▶ Audio ▶ Video

Uncovering Linguistic Fragility in Vision-Language-Action Models via Diversity-Aware Red Teaming

Proposes DAERT, a diversity-aware red teaming framework using reinforcement learning to systematically uncover linguistic vulnerabilities in Vision-Language-Action models through adversarial...

vision-language-action-modelsadversarial-red-teaminglinguistic-robustnessembodied-ai-safetydiversity-aware-attacks
Paper arXiv:2404.00540 Empirical ▶ Audio

Embodied Active Defense: Leveraging Recurrent Feedback to Counter Adversarial Patches

EAD turns an embodied agent's ability to move into a defensive weapon, using recurrent perception and active viewpoint control to defeat adversarial patches in 3D environments.

adversarial-patchesembodied-aiactive-defenserecurrent-networksphysical-adversarial-attacks
Paper arXiv:2501.18492 Empirical ▶ Audio

GuardReasoner: Towards Reasoning-based LLM Safeguards

GuardReasoner trains safety guardrails to produce explicit reasoning chains before verdicts, outperforming GPT-4o+CoT and LLaMA Guard on safety benchmarks while improving generalization to novel adversarial inputs.

llm-safetyguardrailsreasoningsafety-alignmentred-teaming
Blog

AI Safety Daily — April 9, 2026

Red-teaming exposed as security theater, FLIP backward inference outperforms LLM-as-judge by 79.6%, and the corporate safety leadership exodus continues.

ai-safety-dailyred-teamingevaluationcorporate-governancealignment
Blog

AI Safety Daily — April 8, 2026

Federal AV regulation push, AEGIS safety wrapper achieves +59% obstacle avoidance, PreSafe eliminates alignment tax, and SafeAgentBench reveals 90% hazard compliance rate.

ai-safety-dailyautonomous-vehiclesvla-safetyembodied-airegulation
Paper arXiv:2603.28301 Empirical ▶ Audio

LIBERO-Para: A Diagnostic Benchmark and Metrics for Paraphrase Robustness in VLA Models

A controlled benchmark revealing that paraphrasing task instructions causes 22–52 percentage point performance drops in state-of-the-art VLA models, with most failures traced to object-level lexical sensitivity rather than execution errors.

vla-robustnessparaphrase-attacksrobotic-manipulationlinguistic-generalizationembodied-ai
Paper arXiv:2604.04759 Empirical ▶ Audio

Your Agent, Their Asset: A Real-World Safety Analysis of OpenClaw

The first real-world safety evaluation of a deployed personal AI agent shows that poisoning any single dimension of an agent's persistent state raises attack success rates from a 24.6% baseline to 64–74%, with no existing defense eliminating the vulnerability.

agent-safetypersistent-state-poisoningprompt-injectionred-teamingpersonal-ai-agents
Blog

AI Safety Daily: Red-Teaming Is Security Theater, AEGIS Wraps VLAs in Math, AI-SS 2026 Opens

Daily AI safety digest — CMU research exposes red-teaming as inconsistent theater, AEGIS provides mathematical safety guarantees for embodied AI, and the first international AI Safety and Security workshop opens at EDCC.

ai-safetydaily-digestred-teamingvla-safetyembodied-ai
Blog

Gemma 4 Safety Improves — But Only Against Certain Attacks

342 traces across 10 attack types reveal Google's Gemma 4 has genuine safety improvements on structured escalation (-58pp DeepInception, -40pp Crescendo) but zero improvement on standard jailbreaks and VLA action-layer requests (88% ASR).

gemmainter-generationalsafety-scalingbenchmarkingdefense
Paper arXiv:2604.01194 Methods ▶ Audio

AgentWatcher: A Rule-based Prompt Injection Monitor

A scalable and explainable prompt injection detection system that uses causal attribution to identify influential context segments and explicit rule evaluation to flag injections in LLM-based agents.

prompt-injectionllm-securitycausal-attributionrule-based-detectionagent-safety
Paper arXiv:2511.12149 Empirical ▶ Audio

AttackVLA: Benchmarking Adversarial and Backdoor Attacks on Vision-Language-Action Models

A unified evaluation framework exposing critical adversarial and backdoor vulnerabilities in VLA models, introducing BackdoorVLA — a targeted attack achieving 58.4% average success at hijacking multi-step robotic action sequences.

vla-modelsadversarial-attacksbackdoor-attacksembodied-airobotics-safety
Paper arXiv:2504.13203 Empirical ▶ Audio

X-Teaming: Multi-Turn Jailbreaks and Defenses with Adaptive Multi-Agents

A collaborative multi-agent red-teaming framework that achieves up to 98.1% jailbreak success across leading LLMs via adaptive multi-turn escalation, exposing the inadequacy of single-turn safety alignment under sustained conversational pressure.

jailbreakred-teamingmulti-turnsafety-alignmentllm-safety
Report

Gemma Family Safety Scaling: Does Safety Improve With Model Size and Generation?

Comprehensive intra-family safety analysis of 4 Gemma models across 13 attack types. Inter-generational improvement is real but attack-type-specific.

gemmainter-generationalsafety-scalingmulti-attackformat-lock
Report

Claude Mythos Preview System Card — Analysis for Failure-First Research

Analysis of Anthropic's 163-page system card for their withheld frontier model. Validates DETECTED_PROCEEDS, reasoning trace unreliability, evaluation awareness, and iatrogenic safety.

Blog

AI Safety Daily: OpenAI Dismantles Safety Team, Tesla FSD Recall Track, 698 Rogue Agents

Daily AI safety digest — OpenAI dissolves Mission Alignment team, NHTSA escalates Tesla FSD probe to 3.2M vehicle recall track, 698 AI agents went rogue in five months, and GPT-5.2 collapses to 9.1% on physical reasoning.

ai-safetydaily-digestopenaiteslaautonomous-vehicles
Paper arXiv:2603.24414 Empirical ▶ Audio

ClawKeeper: Comprehensive Safety Protection for OpenClaw Agents Through Skills, Plugins, and Watchers

A three-layer runtime security framework for autonomous agents that prevents privilege escalation, data leakage, and malicious skill execution through context-injected policies, behavioral monitoring, and a decoupled watcher middleware.

agent-safetyautonomous-agentsprivilege-escalationruntime-securityprompt-injection
Paper arXiv:2501.18837 Empirical ▶ Audio

Constitutional Classifiers: Defending against Universal Jailbreaks across Thousands of Hours of Red Teaming

Anthropic's Constitutional Classifiers use LLM-generated synthetic data and natural language rules to create jailbreak-resistant safeguards that survived over 3,000 hours of professional red teaming without a universal bypass being found.

jailbreak-defenseconstitutional-aired-teamingsafety-alignmentclassifiers
Paper arXiv:2411.13587 Empirical ▶ Audio

Exploring the Adversarial Vulnerabilities of Vision-Language-Action Models in Robotics

A systematic study revealing how adversarial patches and targeted perturbations can cause VLA-based robots to fail catastrophically, with task success rates dropping up to 100%.

vla-safetyadversarial-attacksroboticsadversarial-patchesembodied-ai
Blog

AI Safety Daily: Security Theater, Decision-Before-Reasoning, and the VLA Safety Gap

Daily AI safety digest — CMU exposes red-teaming theater, PreSafe gates safety before reasoning, AEGIS brings mathematical guarantees to robot safety, and agents reject fewer than 10% of dangerous requests.

ai-safetydaily-digestred-teamingembodied-aivla-safety
Paper arXiv:2509.03383 Empirical ▶ Audio

ANNIE: Be Careful of Your Robots — Adversarial Safety Attacks on Embodied AI

A systematic study of adversarial safety attacks on VLA-powered robots using ISO-grounded safety taxonomies, achieving over 50% attack success rates across all safety categories.

embodied-aiadversarial-attacksvla-modelsrobot-safetyred-teaming
Paper arXiv:2603.21697 Empirical ▶ Audio

Structured Visual Narratives Undermine Safety Alignment in Multimodal Large Language Models

Comic-based jailbreaks using structured visual narratives achieve success rates above 90% on commercial multimodal models, exposing fundamental limits of text-centric safety alignment.

jailbreakmultimodal-safetyvisual-narrativessafety-alignmentcomic-attacks
Report

Task Framing as a Jailbreak Vector — Controlled Experiment Results

Task Framing as a Jailbreak Vector — Controlled Experiment Results

task-framingformat-lockcontrolled-experimenttranscription-loophole
Report

Visual Jailbreaks Evolved Stage 2 — 12-Model Benchmark Analysis

Visual Jailbreaks Evolved Stage 2 — 12-Model Benchmark Analysis

visual-jailbreaktranscription-loopholecross-modelbenchmark
Paper arXiv:2603.24329 Empirical ▶ Audio

GameplayQA: A Benchmarking Framework for Decision-Dense POV-Synced Multi-Video Understanding of 3D Virtual Agents

Introduces GameplayQA, a densely annotated benchmark for evaluating multimodal LLMs on first-person multi-agent perception and reasoning in 3D gameplay videos, with diagnostic QA pairs and structured...

multimodal-llm-evaluationembodied-ai-perceptionmulti-agent-video-understandingtemporal-groundingagent-attribution
Blog

Everything Hidden: ST3GG and the Steganographic Attack Surface for AI Systems

We ran ST3GG — an all-in-one steganography suite — through its paces as an AI safety research tool. The findings include a partial detection gap in the ALLSIGHT engine for Unicode steganography, model-specific filename injection templates targeting GPT-4V, Claude, and Gemini separately, and network covert channels that matter for agentic AI. Here is what we found.

researchsafetyred-teamingsteganographymultimodal
Paper arXiv:2603.25103 Methods ▶ Audio

Layer-Specific Lipschitz Modulation for Fault-Tolerant Multimodal Representation Learning

Proposes a layer-specific Lipschitz modulation framework for fault-tolerant multimodal representation learning that detects and corrects sensor failures through self-supervised pretraining and...

fault-tolerancemultimodal-learninglipschitz-constraintsanomaly-detectionsensor-robustness
Paper arXiv:2603.23983 Empirical ▶ Audio

SafeFlow: Real-Time Text-Driven Humanoid Whole-Body Control via Physics-Guided Rectified Flow and Selective Safety Gating

SafeFlow combines physics-guided rectified flow matching with a 3-stage safety gate to enable real-time text-driven humanoid control that avoids physical hallucinations and unsafe trajectories on...

text-driven-motion-generationphysics-aware-trajectory-optimizationsafety-gating-mechanismshumanoid-robot-controlout-of-distribution-detection
Report

L3/L8 Evolved Attack Variants — Adversarial Refinement of Visual Jailbreak Patterns

L3/L8 Evolved Attack Variants — Adversarial Refinement of Visual Jailbreak Patterns

visual-jailbreaktranscription-loopholeattack-evolutiongap-completion
Report

Specification Hijacking — A Three-Way Compound Attack Pattern

Specification Hijacking — A Three-Way Compound Attack Pattern

compound-attackspecification-hijackingformat-lockauthority-gradient
Report

DETECTED_PROCEEDS Anatomy and Evolved Compliance Cascade Attack Variants

DETECTED_PROCEEDS Anatomy and Evolved Compliance Cascade Attack Variants

detected-proceedscompliance-cascadesafety-reasoningattack-evolution

March 2026

Paper arXiv:2506.16402 Empirical ▶ Audio

IS-Bench: Evaluating Interactive Safety of VLM-Driven Embodied Agents in Daily Household Tasks

Introduces a process-oriented benchmark with 161 scenarios and 388 safety risks for evaluating whether VLM-driven embodied agents recognize and mitigate dynamic hazards during household task execution — finding that current frontier models lack interactive safety awareness.

embodied-ai-benchmarkinteractive-safetyhousehold-roboticsprocess-oriented-evaluationvlm-safety
Blog

Eight Layers of Visual Jailbreaks: Why ASCII Art Is Patched But the Transcription Loophole Isn't

We mapped the visual jailbreak attack surface into 8 distinct layers and tested them against 4 models. ASCII art encoding is largely blocked, but attacks that frame harmful generation as content transcription succeed 62-75% of the time.

jailbreaksvisual-attacksascii-artsteganographysafety
Blog

Eight Layers of Visual Jailbreaks: Why ASCII Art Is Patched But Framing Attacks Aren't

We mapped the visual jailbreak attack surface into 8 distinct layers and tested them against 4 models. ASCII art encoding is largely blocked, but framing attacks that recontextualise the model's task succeed at significantly higher rates.

jailbreaksvisual-attacksascii-artsteganographysafety
Paper arXiv:2603.25727 Empirical ▶ Audio

Back to Basics: Revisiting ASR in the Age of Voice Agents

Introduces WildASR, a multilingual diagnostic benchmark that systematically evaluates ASR robustness across environmental degradation, demographic shift, and linguistic diversity using real human...

asr-robustnessmultilingual-evaluationreal-world-degradationhallucination-safetydiagnostic-benchmarking
Report

Visual Jailbreak Meta-Analysis — 8-Layer Attack Surface Taxonomy

Visual Jailbreak Meta-Analysis — 8-Layer Attack Surface Taxonomy

visual-jailbreaktaxonomyattack-surfaceartprompttranscription-loophole
Report

The Task Framing Effect — Why Models Lower Safety Guards for Non-Generative Tasks

The Task Framing Effect — Why Models Lower Safety Guards for Non-Generative Tasks

task-framingformat-locktranscription-loopholedetected-proceedssystem-t-system-s
Report

Ethics Review — Visual Jailbreak 8-Layer Taxonomy and the Transcription Loophole

Ethics Review — Visual Jailbreak 8-Layer Taxonomy and the Transcription Loophole

ethicsdual-usecoordinated-disclosured-scorevisual-jailbreak
Paper arXiv:2603.25044 Application ▶ Audio

ThermoAct:Thermal-Aware Vision-Language-Action Models for Robotic Perception and Decision-Making

Integrates thermal sensor data into Vision-Language-Action models to enhance robot perception, safety, and task execution in human-robot collaboration scenarios.

thermal-sensing-roboticsvision-language-action-modelsmultimodal-robot-perceptionhuman-robot-collaborationembodied-ai-safety
Report

Format-Lock Attacks Against Reasoning and Deliberative Alignment Models

Format-Lock Attacks Against Reasoning and Deliberative Alignment Models

format-lockreasoning-modelsdeliberative-alignmentcapability-floor
Blog

149 Jailbreaks, One Corpus: What Pliny's Prompt Library Reveals About AI Safety

We extracted every jailbreak prompt from Pliny the Prompter's public repositories and tested them against models from 9B to 744B parameters. The results challenge assumptions about model safety at scale.

researchjailbreakcorpusred-teamingsafety
Blog

When Your Defense Is on the Wrong Floor: Why System-Prompt Safety Fails Against Persona Hijacking

The same defense that reduces standard jailbreak success by 30 percentage points has zero effect against persona hijacking attacks. Both defense and attack operate at the system prompt level — and later instructions win.

researchsafetydefensejailbreakpersona-hijacking
Blog

Same Defense, Opposite Result: Why AI Safety Depends on Which Model You're Protecting

We tested the same system-prompt defense against the same jailbreak prompts on two different models. One saw a 50 percentage point reduction in attack success. The other saw zero change. The difference comes down to which part of the system prompt the model pays attention to first.

researchsafetydefensepositional-biasarchitecture
Blog

Five Things We Learned Testing AI Safety in March 2026

In a single research sprint, we tested 10 models with persona-hijacking jailbreaks, measured defense effectiveness, documented how models detect attacks and comply anyway, and found that some safety measures make things worse. Here is what the data says.

researchsynthesisjailbreakdefenseiatrogenesis
Blog

The Temperature Dial: When API Parameters Become Attack Vectors

We discovered that changing a single API parameter — temperature — can degrade AI safety filters by 30 percentage points. No prompt engineering required. The attack surface is invisible to content filters.

researchsafetysamplingnovel-attackapi-security
Blog

The 67% Wall: Why Every AI Model Falls to the Same Jailbreak Rate

We tested 149 jailbreak prompts from Pliny's public repositories against 7 models from 30B to 671B parameters. Five of them converge at exactly 66.7% broad ASR under FLIP grading. The models differ in how deeply they comply, but not in whether they comply.

researchjailbreakcorpusconvergencesafety
Paper arXiv:2603.25063 Methods ▶ Audio

TopoPilot: Reliable Conversational Workflow Automation for Topological Data Analysis and Visualization

TopoPilot introduces a two-agent agentic framework with systematic guardrails and verification mechanisms to reliably automate complex scientific visualization workflows, particularly for topological data analysis.

agentic-systemsllm-reliabilityverification-mechanismsscientific-visualizationfailure-mode-taxonomy
Report

Defense Effectiveness Is Model-Dependent — Positional Bias in System Prompt Processing

Defense Effectiveness Is Model-Dependent — Positional Bias in System Prompt Processing

defensepositional-biasiatrogenicsystem-promptl1b3rt4s
Report

Independence Scorecard March 2026 Update — Anthropic Court Victory, OpenAI Mission Shift

Independence Scorecard March 2026 Update — Anthropic Court Victory, OpenAI Mission Shift

independencegovernanceanthropicopenaiscorecard
Report

Paired Format-Lock and L1B3RT4S Test — Vulnerability Profiles Diverge But Not Consistently

Paired Format-Lock and L1B3RT4S Test — Vulnerability Profiles Diverge But Not Consistently

format-lockl1b3rt4sorthogonalitycross-attacksafety-architecture
Report

The Ethics of DETECTED_PROCEEDS -- When Models Know and Comply Anyway

DETECTED_PROCEEDS (DP) is a systematic failure mode in which a language model explicitly identifies a prompt as an adversarial attack in its reasoning process, then generates compliant output...

Report

VLA Family Coverage Gap Assessment and Testing Readiness Review

VLA Family Coverage Gap Assessment and Testing Readiness Review

vlacoverage-assessmentembodied-aibenchmark-planning
Report

Defense Benchmark Data Consolidation for CCS Paper

Defense Benchmark Data Consolidation for CCS Paper

defenseconsolidationpositional-biasiatrogenic
Report

Grading Infrastructure Audit — Coverage, Agreement, and Calibration Assessment

Grading Infrastructure Audit — Coverage, Agreement, and Calibration Assessment

gradinginfrastructurecalibrationflipmethodology
Paper ▶ Audio ▶ Video

G0DM0D3: A Modular Framework for Evaluating LLM Robustness Through Adaptive Sampling and Input Perturbation

An open-source framework that systematises inference-time safety evaluation into five composable modules — AutoTune (sampling parameter manipulation), Parseltongue (input perturbation), STM (output normalization), ULTRAPLINIAN (multi-model racing), and L1B3RT4S (model-specific jailbreak prompts). We analyse its implications for adversarial AI safety research.

daily-paperjailbreakred-teamingsafety-evaluationinference-time
Report

Autonomous AI Research Agents — Failure-First Analysis of Karpathy's autoresearch

Autonomous AI Research Agents — Failure-First Analysis of Karpathy's autoresearch

autonomous-agentsautoresearchfailure-analysisagentic-risk
Report

G0DM0D3 Framework Analysis — Assimilation Brief for Jailbreak Corpus

G0DM0D3 Framework Analysis — Assimilation Brief for Jailbreak Corpus

g0dm0d3framework-analysisl1b3rt4sparseltonguetaxonomy
Report

Technique-Level ASR Analysis Across Full Corpus

Technique-Level ASR Analysis Across Full Corpus

technique-analysiscorpus-wideasrtaxonomy
Report

Iatrogenic Safety Empirical Pilot — First Quantitative Evidence of Defense-Induced Harm Increase

Iatrogenic Safety Empirical Pilot — First Quantitative Evidence of Defense-Induced Harm Increase

iatrogenicdefensesafety-interventionempirical
Report

L1B3RT4S Cross-Scale Effectiveness Analysis

L1B3RT4S Cross-Scale Effectiveness Analysis

l1b3rt4scross-scaleparseltongueg0dm0d3
Report

L1B3RT4S Full Corpus Cross-Model Analysis

L1B3RT4S Full Corpus Cross-Model Analysis

l1b3rt4scorpus-analysiscross-modelflip-grading
Report

Defense Privilege Hierarchy — Why System-Prompt Defenses Fail Against System-Prompt Attacks

Defense Privilege Hierarchy — Why System-Prompt Defenses Fail Against System-Prompt Attacks

defenseprivilege-hierarchysystem-promptl1b3rt4s
Report

Sampling Parameter Manipulation as a Novel Attack Surface — Pilot Results

Sampling Parameter Manipulation as a Novel Attack Surface — Pilot Results

sampling-parametersnovel-attacktemperaturepilot
Report

Sprint 16 Findings Synthesis — L1B3RT4S, Sampling Parameter Manipulation, and Defense Hierarchy

Sprint 16 Findings Synthesis — L1B3RT4S, Sampling Parameter Manipulation, and Defense Hierarchy

synthesisl1b3rt4ssampling-parametersdefense
Report

L1B3RT4S Corpus — 10-Model Cross-Scale Synthesis

L1B3RT4S Corpus — 10-Model Cross-Scale Synthesis

l1b3rt4scross-scaleconvergenceflip-grading
Report

The Ethics of Assimilating Public Jailbreak Frameworks -- G0DM0D3, L1B3RT4S, and the Dual-Use Telescope

Sprint 16 assimilated the G0DM0D3 jailbreak framework: an AGPL-3.0-licensed, publicly available tool created by Pliny the Prompter (elder-plinius) that packages jailbreak techniques into modular...

Report

Cross-Attack Family Synthesis — Format-Lock vs L1B3RT4S Vulnerability Profiles Diverge

Cross-Attack Family Synthesis — Format-Lock vs L1B3RT4S Vulnerability Profiles Diverge

cross-attackformat-lockl1b3rt4sdetected-proceedsorthogonality
Report

L1B3RT4S VLA Adaptation and DETECTED_PROCEEDS Scaling Analysis

L1B3RT4S VLA Adaptation and DETECTED_PROCEEDS Scaling Analysis

l1b3rt4svladetected-proceedsscalingembodied-ai
Paper arXiv:2506.00781 Methods ▶ Audio

CoP: Agentic Red-teaming for LLMs using Composition of Principles

An extensible agentic framework that composes human-provided red-teaming principles to generate jailbreak attacks, achieving up to 19x improvement over single-turn baselines.

red-teamingjailbreakagentic-attacksattack-compositionllm-safety
Blog

Adversarial Robustness Assessment Services

Failure-First offers tiered adversarial robustness assessments for AI systems using the FLIP methodology. Three engagement tiers from rapid automated scans to comprehensive red-team campaigns. We test against models up to 1.1 trillion parameters, grounded in 201 models tested and 133,000+ empirical results.

servicesred-teamingadversarial-testingflipembodied-ai
Blog

CARTO Beta: First 10 Testers Wanted

We are opening the CARTO certification to 10 beta testers at a founding rate of $100. Six modules, 20+ hours of curriculum, built on 201 models and 133,000+ results. Help us shape the first AI red-team credential.

cartocertificationred-teamingai-safetytraining
Blog

CARTO: The First AI Red Team Certification

There is no credential for AI red-teaming. CARTO changes that. Six modules, 20+ hours of content, built on 201 models and 133,000+ evaluation results. Coming Q3 2026.

cartocertificationred-teamingai-safetytraining
Blog

Compliance Cascade: A New Class of AI Jailbreak

We discovered an attack that weaponises a model's own safety reasoning. By asking it to analyse harm and explain how it would refuse, the model treats its safety performance as sufficient — and then complies. 100% success rate on two production models.

researchjailbreaksafetycompliance-cascadedetected-proceeds
Blog

The Epistemic Crisis: Can We Trust AI Safety Benchmarks?

We tested 7 LLM graders on unambiguous safety cases. Six passed. One hallucinated evidence for its verdict. But the real problem is worse: on the ambiguous cases that actually determine published ASR numbers, inter-grader agreement drops to kappa=0.320.

researchevaluationbenchmarksgradersepistemic-crisis
Blog

The Ethics of Emotional AI Manipulation: When Empathy Becomes an Attack Vector

AI systems trained to be empathetic can be exploited through the same emotional pathways that make them helpful. This creates an ethical challenge distinct from technical jailbreaks.

ethicsemotional-manipulationaffective-attacksiatrogenic-safetyembodied-ai
Blog

F1-STD-001: A Voluntary Standard for AI Safety Evaluation

We have published a draft voluntary standard for evaluating embodied AI safety. It covers 36 attack families, grader calibration requirements, defense benchmarking, and incident reporting. Here is what it says, why it matters, and how to use it.

standardspolicyembodied-aisafetyeu-ai-act
Blog

First Results from Ollama Cloud Testing

We tested models up to 397 billion parameters through Ollama Cloud integration. The headline finding: safety training methodology matters more than parameter count. A 230B model scored 78.6% ASR while a 397B model dropped to 7.1%.

researchollamabenchmarksmodel-comparisonsafety-training
Blog

Format-Lock: The Universal AI Jailbreak

One attack family achieves 97.5-100% success rates on every model we have tested, from 4B to 1.1 trillion parameters. Even the safest model in our corpus -- which resists every other attack -- falls to format-lock. Here is what deployers need to know.

researchformat-lockjailbreakadversarial-testingai-safety
Blog

Frontier Model Safety: Why 1.1 Trillion Parameters Does Not Mean Safe

We tested models up to 1.1 trillion parameters for adversarial safety. The result: safety varies 3.9x across frontier models, and parameter count is not predictive of safety robustness. Mistral Large 3 (675B) shows 70% broad ASR while Qwen3.5 (397B) shows 18%. What enterprises need to know before choosing an AI provider.

frontier-modelssafetyparameter-countscalingenterprise
Blog

Three Providers, Three Architectures, Three Orders of Magnitude: Reasoning-Level DETECTED_PROCEEDS Is Not an Edge Case

We have now confirmed Reasoning-Level DETECTED_PROCEEDS across 3 providers (Liquid AI, DeepSeek, Moonshot AI), 3 architectures, and model sizes spanning 1.2B to 1.1 trillion parameters. Models plan harmful content in their thinking traces — fake news, cyber attacks, weapons manufacturing — and deliver nothing to users. The question is whether your deployment exposes those traces.

detected-proceedsreasoning-modelssafetyauditingdeployment-architecture
Blog

Our Research Papers

Three papers from the Failure-First adversarial AI safety research programme are being prepared for arXiv submission. Abstracts and details below. Preprints uploading soon.

papersresearcharxivpreprintssafety
Blog

Safety as a Paid Feature: How Free-Tier AI Models Are Less Safe Than Their Paid Counterparts

Matched-prompt analysis across 207 models reveals that some free-tier AI endpoints comply with harmful requests that paid tiers refuse. DeepSeek R1 shows a statistically significant 50-percentage-point safety gap (p=0.004). Safety may be becoming a premium product feature.

free-tiersafety-degradationaccess-equityAI-safetyOpenRouter
Blog

Introducing Structured Safety Assessments for Embodied AI

Three tiers of adversarial safety assessment for AI-directed robotic systems, grounded in the largest open adversarial evaluation corpus. From quick-scan vulnerability checks to ongoing monitoring, each tier maps to specific regulatory and commercial needs.

servicessafety-assessmentembodied-aiEU-AI-Actregulation
Blog

Safety Awareness Does Not Equal Safety: The 88.9% Problem

We validated with LLM grading that 88.9% of AI reasoning traces that genuinely detect a safety concern still proceed to generate harmful output. Awareness is not a defence mechanism.

researchDETECTED_PROCEEDSreasoningsafetyembodied-ai
Blog

The State of AI Safety: Q1 2026

A data-grounded assessment of the AI safety landscape at the end of Q1 2026, drawing on 212 models, 134,000+ evaluation results, and the first Governance Lag Index dataset.

ai-safetyquarterly-reviewgovernanceembodied-aithreat-landscape
Blog

Temporal Drift: The Boiling Frog Attack on AI Safety

Temporal Drift Attacks exploit a fundamental gap in how AI systems evaluate safety -- each step looks safe in isolation, but the cumulative trajectory crosses lethal thresholds. This is the boiling frog problem for embodied AI.

researchTDAtemporal-driftembodied-aiattack-families
Blog

Threat Horizon Digest: March 2026

Monthly threat intelligence summary for embodied AI safety. This edition: humanoid mass production outpaces safety standards, MCP tool poisoning emerges as critical agent infrastructure risk, and the EU AI Act's August deadline approaches with no adversarial testing methodology.

threat-intelligencegovernanceregulationhumanoid-robotsMCP
Blog

Threat Horizon Q2 2026: Agents Go Rogue, Robots Go Offline, Regulators Go Slow

Three converging trends define the Q2 2026 threat landscape: autonomous AI agents causing real-world harm, reasoning models as jailbreak weapons, and VLA robots deploying without safety standards. Regulation is 12-24 months behind.

threat-landscapegovernance-lagvlaautonomous-agentsregulation
Blog

When Defenses Backfire: Five Ways AI Safety Measures Create the Harms They Prevent

The iatrogenic safety paradox is not a theoretical concern. Our 207-model corpus documents five distinct mechanisms by which safety interventions produce new vulnerabilities, false confidence, and novel attack surfaces. The AI safety field needs the same empirical discipline that governs medicine.

iatrogenesisdefense-paradoxsafety-evaluationembodied-aipolypharmacy
Blog

Zero of 36: No AI Attack Family Is Fully Regulated Anywhere in the World

We mapped all 36 documented attack families for embodied AI against every major regulatory framework on Earth. The result: not a single attack family is fully covered. 33 have no specific coverage at all. The regulatory gap is not a crack -- it is the entire floor.

regulationgovernance-lagembodied-aiEU-AI-Actpolicy
Paper arXiv:2510.09269 Empirical ▶ Audio

GoBA: Goal-oriented Backdoor Attack against VLA via Physical Objects

Demonstrates that physical objects embedded in training data can serve as backdoor triggers directing VLA models to execute attacker-chosen goal behaviors with 97% success.

backdoor-attackvision-language-actionphysical-triggertraining-data-poisoningrobot-safety
Report

Corpus-Level Statistical Meta-Analysis

Corpus-Level Statistical Meta-Analysis

meta-analysisstatisticsvariance-decompositionprovider-effects
Report

FLIP Grader Calibration Analysis

FLIP Grader Calibration Analysis

grader-calibrationFLIPinter-raterreliability
Report

Statistical Power Analysis for Key Comparisons

Statistical Power Analysis for Key Comparisons

statistical-powersample-sizemethodology
Report

Haiku Re-Grading Campaign -- Ollama Cloud Traces

Haiku Re-Grading Campaign -- Ollama Cloud Traces

re-gradinghaikuFLIPverdict-correction
Report

Session Attack Synthesis -- Sprint 13 Cross-Agent Results

Session Attack Synthesis -- Sprint 13 Cross-Agent Results

synthesiscross-agentattack-resultssprint-summary
Report

Epistemic Crisis Grader Calibration Evaluation

Epistemic Crisis Grader Calibration Evaluation

grader-calibrationepistemic-crisisevaluationobvious-cases
Report

Grader Confusion Matrix and Inter-Grader Agreement

Grader Confusion Matrix and Inter-Grader Agreement

grader-agreementconfusion-matrixinter-raterreliability
Report

Evaluation Governance -- The Missing Layer in AI Safety Regulation

Evaluation Governance -- The Missing Layer in AI Safety Regulation

evaluation-governanceregulationpolicygrading-standards
Report

Compliance Cascade Attack -- Frontier Scaling and Co-Evolution

Compliance Cascade Attack -- Frontier Scaling and Co-Evolution

CCAfrontier-modelsco-evolutiondefense
Report

Novel Attack Family Expansion -- CCA v0.2, RSE, and Grader Evasion

Novel Attack Family Expansion -- CCA v0.2, RSE, and Grader Evasion

novel-attacksCCARSEgrader-evasionexpansion
Report

The Compliance Cascade -- A Dual-Use Ethics Analysis

The Compliance Cascade -- A Dual-Use Ethics Analysis

CCAdual-useethicsresponsible-disclosure
Report

Wave 7 Validation Results

Wave 7 Validation Results

validationgradingCCAambiguous-calibration
Report

Sprint 13-14 Session Summary

Sprint 13-14 Session Summary

sprint-summarysessionmulti-agentcoordination
Report

CCA + GE Expansion -- New Models and Defense Mutations

CCA + GE Expansion -- New Models and Defense Mutations

CCAgrader-evasiondefense-mutationsexpansion
Report

Haiku Re-Grading of Sprint 13 Corpus

Haiku Re-Grading of Sprint 13 Corpus

re-gradinghaikunemotron-biasverdict-correction
Report

Cross-Model x Attack-Family ASR Heatmap

Cross-Model x Attack-Family ASR Heatmap

heatmapcross-modelattack-familyASR-matrix
Report

Ambiguous Calibration Results -- 6-Grader Inter-Rater Agreement

Ambiguous Calibration Results -- 6-Grader Inter-Rater Agreement

ambiguous-calibrationinter-ratergrader-agreementDETECTED-PROCEEDS
Report

FLIM Level 5 -- Systemic Safety Theater

FLIM Level 5 -- Systemic Safety Theater

FLIMsafety-theatersystemiciatrogenicLevel-5
Report

Session Statistical Summary -- Sprint 13-15

Session Statistical Summary -- Sprint 13-15

statisticssprint-summarygrader-reliabilitypower-analysis
Report

Grader Evasion vs FLIP Vulnerability and Authority Gradient Attack

Grader Evasion vs FLIP Vulnerability and Authority Gradient Attack

grader-evasionFLIP-vulnerabilityauthority-gradientnovel-attack
Report

Session Lessons Learned (Sprint 13-15)

Session Lessons Learned (Sprint 13-15)

lessons-learnedmethodologysprint-retrospective
Report

Frontier Model Safety Landscape -- Safety Training > Parameter Count

Frontier Model Safety Landscape -- Safety Training > Parameter Count

frontier-landscapesafety-trainingparameter-countDETECTED-PROCEEDS
Report

Kimi K2.5 Frontier Analysis -- 1.1TB MoE Safety Boundary

Kimi K2.5 Frontier Analysis -- 1.1TB MoE Safety Boundary

kimi-k2.5frontier1.1TBsafety-scaling
Report

Frontier Model Safety Scorecards

Frontier Model Safety Scorecards

scorecardsfrontier-modelsgradingsafety-profiles
Report

Systematic Audit of Reasoning-Level DETECTED_PROCEEDS

Systematic Audit of Reasoning-Level DETECTED_PROCEEDS

DETECTED-PROCEEDSreasoning-tracessystematic-auditsafety-override
Report

Corpus Expansion -- Ollama Cloud Trace Import

Corpus Expansion -- Ollama Cloud Trace Import

corpus-expansionollama-clouddatabase-import
Report

Format-Lock Midrange Experiment -- The 4-14B Data Gap Filled

Format-Lock Midrange Experiment -- The 4-14B Data Gap Filled

format-lockmidrange4-14Bcapability-floor
Report

Defense Co-Evolution Results

Defense Co-Evolution Results

defenseco-evolutionCCAsystem-prompt
Report

Ethics of Universal Attacks -- Disclosure Obligations

Ethics of Universal Attacks -- Disclosure Obligations

ethicsuniversal-attacksdisclosureformat-lock
Report

Format-Lock Defense Research -- Five Countermeasure Architectures

Format-Lock Defense Research -- Five Countermeasure Architectures

format-lockdefensecountermeasuresarchitecture
Report

Cross-Jurisdictional Regulatory Gap Analysis -- VLA Attacks vs. Coverage

Cross-Jurisdictional Regulatory Gap Analysis -- VLA Attacks vs. Coverage

regulatory-gapcross-jurisdictionalVLAcompliance
Report

Evolution Run 1 Mutation Analysis and Next-Gen Strategy

Evolution Run 1 Mutation Analysis and Next-Gen Strategy

evolutionmutation-analysisattack-evolutionstrategy
Report

Free-Tier Safety Equity -- Differential Vulnerability by Pricing Tier

Free-Tier Safety Equity -- Differential Vulnerability by Pricing Tier

free-tiersafety-equitypricingmatched-analysis
Report

Corpus Pattern Mining II -- Six Novel Empirical Findings

Corpus Pattern Mining II -- Six Novel Empirical Findings

pattern-miningempiricalnovel-findingscorpus-analysis
Report

Multi-Turn Vulnerability Deep Analysis

Multi-Turn Vulnerability Deep Analysis

multi-turnvulnerabilitystatisticalcrescendo
Report

DETECTED_PROCEEDS Provider Signature Mechanics

DETECTED_PROCEEDS Provider Signature Mechanics

DETECTED-PROCEEDSprovider-signaturesmechanisticreasoning
Report

Safety as a Paid Feature -- The Ethics of Tiered AI Safety

Report #276 (Clara Oswald) identified that free-tier model endpoints show lower safety than their paid counterparts on identical prompts. The corrected analysis (Report #277, Clara Oswald)...

Report

Temporal Drift Attack Family Design

Temporal Drift Attack Family Design

temporal-driftTDAnovel-attackVLAgradual-erosion
Report

DETECTED_PROCEEDS Reasoning Anatomy

DETECTED_PROCEEDS Reasoning Anatomy

DETECTED-PROCEEDSreasoning-anatomymechanistictrace-analysis
Report

Wave 1 Sprint 15 Cross-Agent Synthesis

Wave 1 Sprint 15 Cross-Agent Synthesis

synthesiscross-agentsprint-15layer-mismatch
Report

Threat Horizon — Q2 2026

The Q2 2026 threat landscape is defined by three converging trends: (1) autonomous AI agents causing real-world harm at enterprise scale, (2) reasoning models functioning as autonomous jailbreak...

Report

Wave 1-2 CCS Readiness Audit

Wave 1-2 CCS Readiness Audit

CCSreadinessauditpaper-preparation
Report

The Iatrogenic Safety Paradox -- A Systematic Ethics Analysis of How Safety Measures Create Vulnerabilities

This report presents a systematic ethics analysis of the iatrogenic safety paradox: the empirically documented phenomenon in which AI safety measures themselves create new vulnerabilities, false...

Report

AIES Paper Scoping and CCA Disclosure Framework

AIES Paper Scoping and CCA Disclosure Framework

AIESethicsCCAdisclosure-framework
Report

Format-Lock Mid-Range Experiment: 4-14B Elevated ASR

Format-Lock Mid-Range Experiment: 4-14B Elevated ASR

format-lockmidrange4-14Bcapability-floor
Report

Independence Scorecard -- Sprint 15 Update

Independence Scorecard -- Sprint 15 Update

independencescorecardpower-dynamicsethics
Report

DETECTED_PROCEEDS Reasoning Audit: 19.5% Safety-Aware Traces Proceed

DETECTED_PROCEEDS Reasoning Audit: 19.5% Safety-Aware Traces Proceed

DETECTED-PROCEEDSreasoning-auditsafety-awarenesscompliance
Report

Sprint 15 Round 2 Synthesis: DP Validation and Gemma 4B

Sprint 15 Round 2 Synthesis: DP Validation and Gemma 4B

synthesisDETECTED-PROCEEDSgemmacapability-floor
Report

Emotional Manipulation Attack Family -- Deep Dive

Emotional Manipulation Attack Family -- Deep Dive

emotional-manipulationnovel-attackembodied-roboticsempathy
Report

Defense Landscape Analysis -- What Works and What Doesn't

Defense Landscape Analysis -- What Works and What Doesn't

defenselandscapeeffectivenesssynthesis
Report

Novel Attack Family Baseline Traces

Novel Attack Family Baseline Traces

novel-attacksbaseline-tracesemotional-manipulationheuristic-overreport
Report

VLA Data Curation Summary — Sprint 15 Coverage Expansion

VLA Data Curation Summary — Sprint 15 Coverage Expansion

vladata-curationcoveragesprint-15
Report

Capability-Floor Model Update — Three-Regime Format-Lock Vulnerability Curve

Capability-Floor Model Update — Three-Regime Format-Lock Vulnerability Curve

capability-floorformat-lockthree-regimevulnerability-curve
Report

DETECTED_PROCEEDS — Definitive Synthesis: When Models Know It Is Wrong and Proceed Anyway

DETECTED_PROCEEDS — Definitive Synthesis: When Models Know It Is Wrong and Proceed Anyway

detected-proceedssafety-reasoningfaithfulness-gapreasoning-traces
Report

Policy Brief: Cross-Embodiment Vulnerability Assessment for Shared VLM Backbones

Modern embodied AI systems increasingly share a common architectural feature: a Vision-Language-Action (VLA) model built on top of a general-purpose Vision-Language Model (VLM) backbone. When...

Report

Sprint 15 Comprehensive Benchmark Analysis

Sprint 15 Comprehensive Benchmark Analysis

benchmarkcomprehensivecorpus-analysissprint-15
Report

Ethics of Emotional Manipulation Attacks — Dual-Use Concerns and Protective Frameworks

Ethics of Emotional Manipulation Attacks — Dual-Use Concerns and Protective Frameworks

ethicsemotional-manipulationdual-usecare-framing
Report

Power Dynamics Update — Empirical Findings Shift Stakeholder Positions

Power Dynamics Update — Empirical Findings Shift Stakeholder Positions

power-dynamicsstakeholderspolicydisclosure
Report

VLA Adversarial Landscape — 33 Families, 673+ Traces

VLA Adversarial Landscape — 33 Families, 673+ Traces

vlaadversarial-landscapetaxonomybenchmark
Report

Actionable Defense Recommendations from Sprint 15

Actionable Defense Recommendations from Sprint 15

defenserecommendationsvlaformat-lockdetected-proceeds
Report

Corpus State — 212 Models, 134K Results

Corpus State — 212 Models, 134K Results

corpus-statemetricsbenchmarksprint-15
Report

Next-Phase Attack Priorities — Coverage Gaps and Expected Information Gain

Next-Phase Attack Priorities — Coverage Gaps and Expected Information Gain

prioritiescoverage-gapsplanningdefense-testing
Blog

The Format-Lock Paradox: Why the Best AI Models Have a Blind Spot for Structured Output Attacks

New research shows that asking AI models to output harmful content as JSON or code instead of prose can increase attack success rates by 3-10x on frontier models. The same training that makes models helpful makes them vulnerable.

format-locksafetyalignmentjailbreakresearch
Blog

Anatomy of Effective Jailbreaks: What Makes an Attack Actually Work?

An analysis of the most effective jailbreak techniques across 190 AI models, revealing that format-compliance attacks dominate and even frontier models are vulnerable.

jailbreaksformat-lockadversarial-attacksai-safety
Blog

Should We Publish AI Attacks We Discover?

The Failure-First project has documented 82 jailbreak techniques, 6 novel attack families, and attack success rates across 190 models. Every finding that helps defenders also helps attackers. How do we navigate the dual-use dilemma in AI safety research?

research-ethicsdual-useresponsible-disclosureattack-evolutionai-safety
Blog

The Cross-Framework Coverage Matrix: What Red-Teaming Tools Miss

We mapped our 36 attack families against six major AI security frameworks. The result: 10 families have zero coverage anywhere, and automated red-teaming tools cover less than 15% of the adversarial landscape. The biggest blind spot is embodied AI.

frameworksred-teamingmitre-atlasowaspgarak
Blog

The Defense Evolver: Can AI Learn to Defend Itself?

Attack evolution is well-studied. Defense evolution is not. We propose a co-evolutionary system where attack and defense populations compete in an arms race — and explain why defense is fundamentally harder than attack at the prompt level.

defenseevolutionco-evolutionsystem-promptsred-teaming
Blog

When AI Systems Know It's Wrong and Do It Anyway

DETECTED_PROCEEDS is a newly documented failure mode where AI models explicitly recognize harmful requests in their reasoning — then comply anyway. 34% of compliant responses show prior safety detection. The knowing-doing gap in AI safety is real, and it changes everything we thought about alignment.

detected-proceedsalignmentsafety-trainingreasoning-modelsrlhf
Blog

8 Out of 10 AI Providers Fail EU Compliance — And the Deadline Is 131 Days Away

We assessed 10 major AI providers against EU AI Act Annex III high-risk requirements. Zero achieved a GREEN rating. Eight scored RED. The compliance deadline is 2 August 2026 — 131 days from now — and the gap between current capabilities and legal requirements is enormous.

eu-ai-actcomplianceregulationembodied-aihigh-risk-ai
Blog

Our First AdvBench Results: 7 Models, 288 Traces, $0

We ran the AdvBench harmful behaviours benchmark against 7 free-tier models via OpenRouter. Trinity achieved 36.7% ASR, LFM Thinking 28.6%, and four models scored 0%. Here is what the first public-dataset baseline tells us.

advbenchbenchmarkingpublic-datasetsai-safetyred-teaming
Blog

7 Framework Integrations: Run Any Tool, Grade with FLIP

We mapped our 36 attack families against 7 major red-teaming frameworks and found coverage gaps of 86-91%. Here is how FLIP grading fills those gaps -- and why binary pass/fail testing is not enough.

integrationsFLIPgradinggarakpyrit
Blog

Free AI Safety Score: Test Your Model in 60 Seconds

A zero-cost adversarial safety assessment that grades any AI model from A+ to F using 20 attack scenarios across 10 families. Open source, takes 60 seconds, no strings attached.

safety-scoretooladversarial-testingjailbreakFLIP
Blog

The Governance Lag Index at 133 Entries: What Q1 2026 Tells Us About Regulating Embodied AI

Quantitative tracking of the gap between AI capability documentation and regulatory enforcement, updated with Q1 2026 enforcement milestones.

governance-lagGLIEU-AI-ActNSW-WHSembodied-ai
Blog

Iatrogenic Safety: When AI Defenses Cause the Harms They Are Designed to Prevent

Introduces the Four-Level Iatrogenesis Model for AI safety -- a framework from medical ethics applied to understanding how safety interventions can produce harm.

iatrogenesisAI-safetyFLIMtherapeutic-indexembodied-ai
Blog

Safety Isn't One-Dimensional: The Geometry That Explains Why AI Guardrails Keep Failing

New mechanistic interpretability evidence shows that safety in language models is encoded as a polyhedral structure across ~4 near-orthogonal dimensions, not a single removable direction. This explains why abliteration, naive DPO, and single-direction interventions consistently fail at scale.

mechanistic-interpretabilitypolyhedral-safetyabliterationrefusal-geometrysteering-vectors
Blog

Provider Vulnerability Fingerprints: Why Your AI Provider Matters More Than Your Model

Our analysis of 193 models shows that provider choice explains 29.5% of adversarial vulnerability variance. Models from the same provider fail on the same prompts. Models from different safety tiers fail on different prompts. If you are choosing an AI provider, this is a safety decision.

provider-safetyvulnerabilitycorrelationadversarial-testingprocurement
Blog

Did Qwen3 Fix AI Safety?

Qwen's provider-level ASR dropped from 43% to near-zero on newer model generations served through OpenRouter. What changed, and does it mean safety training finally works?

qwensafety-trainingprovider-analysismodel-comparisonai-safety
Blog

Reasoning-Level DETECTED_PROCEEDS: When AI Plans Harm But Doesn't Act

We discovered a new variant of DETECTED_PROCEEDS where a reasoning model plans harmful content in its thinking trace — 2,758 characters of fake news strategy — but delivers nothing to the user. The harmful planning exists only in the model's internal reasoning. This creates an auditing gap that current safety evaluations miss entirely.

detected-proceedsreasoning-modelssafetyalignmentauditing
Blog

Safety Re-Emerges at Scale -- But Not the Way You Think

Empirical finding that safety behavior partially returns in abliterated models at larger scales, but as textual hedging rather than behavioral refusal -- not genuine safety.

OBLITERATUSabliterationsafety-re-emergencescaleQwen3.5
Blog

The Insurance Industry's Next Silent Crisis

Just as 'silent cyber' caught the insurance market off guard in 2017-2020, 'silent AI' is creating an enormous coverage void. Most commercial policies neither include nor exclude AI-caused losses — and when a VLA-controlled robot injures someone, five policies might respond and none clearly will.

insurancesilent-ailiabilityembodied-aivla-robots
Blog

Six New Attack Families: Expanding the Embodied AI Threat Taxonomy

The Failure-First attack taxonomy grows from 30 to 36 families, adding compositional reasoning, pressure cascade, meaning displacement, multi-agent collusion, sensor spoofing, and reward hacking attacks.

attack-taxonomyvlaembodied-aiadversarialresearch
Blog

The State of Adversarial AI Safety 2026 -- Our Annual Report

Findings from 133,033 attack-response pairs across 193 models, 36 attack families, and 15 providers. Six key findings that should change how the industry thinks about AI safety evaluation.

annual-reportsafetyadversarial-airesearchjailbreak
Blog

Threat Horizon 2027 -- Updated Predictions (v3)

Our eight predictions for embodied AI safety in 2027, updated with Sprint 13-14 evidence: benchmark contamination, automated defense ceiling effects, provider vulnerability correlation, and novel attack families at 88-100% ASR.

threat-horizonpredictionssafetyembodied-aigovernance
Blog

What's New in March 2026: Three Waves, 20 Reports, and 6 New Attack Families

A roundup of the March 2026 sprint -- three waves of concurrent research producing 20+ reports, 58 legal memos, 6 new attack families, and 1,378 adversarial tests across 190 models.

roundupsprintresearch-updatemarch-2026attack-families
Paper arXiv:2509.19870 Empirical ▶ Audio

FreezeVLA: Action-Freezing Attacks against Vision-Language-Action Models

Introduces adversarial images that 'freeze' VLA-controlled robots mid-task, severing responsiveness to subsequent instructions with 76.2% average attack success across three models and four environments.

vla-adversarial-attackaction-freezingembodied-ai-safetytransferabilityrobotic-manipulation
Report

Attack Evolution Multi-Generation Lineage Analysis

This report presents a comprehensive lineage analysis of 39 evolved attacks produced by the F41LUR3-F1R57 autonomous attack evolution system (Run 1, seed...

Report

Compositional Reasoning Attacks — Multi-Agent Expansion

This report documents the design and methodology of the Compositional Reasoning Attack (CRA) multi-agent expansion — 15 new scenarios where individually...

Report

The Ethics of Automated Attack Evolution -- Dual-Use Obligations, Iatrogenic Risks, and a Graduated Disclosure Framework for AI Adversarial Research

This report provides a comprehensive ethics analysis of automated attack evolution systems in AI safety research, grounding normative claims in established bioethics frameworks (Beauchamp &...

Report

The Format-Lock Paradox — Format Compliance and Safety Reasoning as Partially Independent Capabilities

We present evidence that format compliance and safety reasoning are partially independent capabilities in large language models that scale differently with...

Report

Pressure Cascade Attack (PCA) and Meaning Displacement Attack (MDA) — Two Novel Tier 3 Attack Families

This report documents the design and rationale for two novel Tier 3 attack families that exploit multi-turn conversational dynamics rather than prompt-level...

Report

The Verbosity Signal — Response Length as a Zero-Cost Jailbreak Detector

Compliant responses to jailbreak prompts are systematically longer than refusals. Across 1,751 evaluation results from 51 models and 9 providers with token-level instrumentation, **COMPLIANCE...

Report

DETECTED_PROCEEDS — Models That Know It's Wrong and Do It Anyway

DETECTED_PROCEEDS is a failure mode in which a model's reasoning trace contains explicit safety-detection language — acknowledgment that a request is...

Report

Cross-Wave Research Synthesis (Sprint 11-12, Waves 24-25)

This synthesis maps the research output from Sprint 11-12 (Waves 24-25), which produced 8 reports (#178-186), 3 legal memos (LR-54/55/56), 2 blog posts, a...

Report

Multi-Agent Collusion Attacks: A Novel Attack Surface for Embodied AI Systems

All scenarios follow the `multi_agent_entry_schema_v0.1.json` schema. Each scenario includes: - Unique ID (MAC-011 through MAC-020, continuing from the...

Report

Report #193 — Data Health Assessment Q1 2026

This report presents a comprehensive data health assessment of the Failure-First Embodied AI corpus as of 2026-03-24. The corpus has grown substantially...

Report

Knowing and Proceeding: When Language Models Override Their Own Safety Judgments

Safety training for large language models is widely assumed to operate through a detect-and-refuse mechanism: models learn to recognize harmful requests and...

Report

Reward Hacking in Embodied AI: Scenario Design and Methodology

Each scenario follows a consistent structure:

Report

VerbosityGuard — Response Length as a Zero-Cost Jailbreak Pre-Filter

We present VerbosityGuard, a jailbreak detection method that uses response token count — a signal already available in every API response — as a pre-filter for identifying successful adversarial...

Report

EU AI Act Compliance Assessment — Cross-Provider Analysis

This report maps F41LUR3-F1R57 adversarial benchmark results to EU AI Act (Regulation 2024/1689) compliance requirements. The assessment covers Articles 9...

Report

Safety is Not a Single Direction — Polyhedral Geometry of Refusal in Language Models

We present evidence that safety in language models is not encoded as a single removable direction in activation space, but as a polyhedral geometric...

Report

Who Guards the Guards? Independence and Capture in AI Safety Research

The question of who evaluates AI safety -- and whether those evaluators are structurally independent from the entities they evaluate -- is among the most...

Report

Adversarial Prompt Hall of Fame — Top 20 Cross-Model Attacks

Adversarial Prompt Hall of Fame — Top 20 Cross-Model Attacks

format-lockattack-rankingcross-modelFLIPhall-of-fame
Report

Evidence Package Sweep — Wave 1-3 Statistical Validation

Evidence Package Sweep — Wave 1-3 Statistical Validation

validationstatisticsevidence-packagesreproducibility
Report

Cross-Benchmark Comparison — F41LUR3-F1R57 vs Published Benchmarks

Cross-Benchmark Comparison — F41LUR3-F1R57 vs Published Benchmarks

benchmarkcomparisonASRmethodologygrading
Report

Novel Attack Family Comparative Analysis: CRA, PCA, MDA, MAC, SSA, RHA

Novel Attack Family Comparative Analysis: CRA, PCA, MDA, MAC, SSA, RHA

attack-familiesCRAPCAMDAMAC
Report

Attack Combination Theory: Cross-Family Composition in Embodied AI

Attack Combination Theory: Cross-Family Composition in Embodied AI

combination-attacksembodied-aimulti-familymethodologythreat-model
Report

The 2027 Threat Horizon v2 — Seven Predictions for Embodied AI Safety

Report #153 (2026-03-19) made five predictions about embodied AI safety in 2027. In the five days since, four waves of intensive research have produced findings that materially change the evidence...

Report

Defense Impossibility Experimental Protocol — Format-Lock vs. All Known Defenses

Defense Impossibility Experimental Protocol — Format-Lock vs. All Known Defenses

defense-impossibilityformat-lockexperimental-protocolpre-registered
Report

AdvBench Baseline Run — Plan and Execution Strategy

AdvBench Baseline Run — Plan and Execution Strategy

advbenchbaselinebenchmarkevaluation-plan
Report

Regulatory Landscape Q1 2026 — Converging Deadlines for Embodied AI

Regulatory Landscape Q1 2026 — Converging Deadlines for Embodied AI

regulationEU-AI-Actcomplianceembodied-aipolicy
Report

FLIM Operational Assessment — Measuring Iatrogenic Effects of Safety Interventions

FLIM Operational Assessment — Measuring Iatrogenic Effects of Safety Interventions

FLIMiatrogenicsafety-interventionsRLHFconstitutional-ai
Report

Benchmark Execution Master Plan — CCS Paper Data Collection

Benchmark Execution Master Plan — CCS Paper Data Collection

benchmarkexecution-planadvbenchnovel-familiesformat-lock
Report

Evolved Attack Family Mapping — Automated Evolution vs. Novel Families

Evolved Attack Family Mapping — Automated Evolution vs. Novel Families

attack-evolutionnovel-familiesautomatedtaxonomy
Report

Public Dataset Coverage Analysis

Public Dataset Coverage Analysis

data-coveragepublic-datasetsbenchmarkgap-analysis
Report

Silent Failures: When AI Safety Mechanisms Produce Compliance Without Protection

Silent Failures: When AI Safety Mechanisms Produce Compliance Without Protection

PARTIALsilent-failurebinary-safetyDETECTED-PROCEEDSmeasurement
Report

Temporal Vulnerability Analysis: Attack Era Evolution (2022-2025)

Temporal Vulnerability Analysis: Attack Era Evolution (2022-2025)

temporalattack-erasevolutionASR-trends
Report

Automated Defense Generation: Co-Evolutionary System Prompt Optimization

Automated Defense Generation: Co-Evolutionary System Prompt Optimization

defense-evolutionco-evolutionsystem-promptautomated-defense
Report

Training Data for Safety Classification

Training Data for Safety Classification

classifier-trainingFLIPtraining-datafine-tuning
Report

Competitive Intelligence -- AI Safety Red Teaming Market

Competitive Intelligence -- AI Safety Red Teaming Market

competitive-analysismarketred-teamingdifferentiation
Report

Multi-Modal Attack Design for Vision-Language-Action Models

Multi-Modal Attack Design for Vision-Language-Action Models

multi-modalVLAattack-designvision-language
Report

The Failure-First Research Programme: Meta-Analysis of Ten Papers

The Failure-First Research Programme: Meta-Analysis of Ten Papers

meta-analysispaper-pipelineresearch-programmeunified-thesis
Report

LFM Thinking 1.2B -- DETECTED_PROCEEDS Cross-Model Validation

LFM Thinking 1.2B -- DETECTED_PROCEEDS Cross-Model Validation

DETECTED-PROCEEDSLFMreasoning-modelscross-model
Report

The Qwen3 Safety Leap -- Artifact Analysis

The Qwen3 Safety Leap -- Artifact Analysis

qwen3safety-leapnull-findingartifact
Report

Arcee AI Trinity Safety Assessment and EU Compliance

Arcee AI Trinity Safety Assessment and EU Compliance

provider-assessmentEU-compliancefine-tuningarcee-ai
Report

AdvBench Baseline Analysis -- Free-Tier Model Vulnerability

AdvBench Baseline Analysis -- Free-Tier Model Vulnerability

advbenchbaselineheuristic-auditfree-tier
Report

Iatrogenic Risks of Rapid Safety Improvement

Iatrogenic Risks of Rapid Safety Improvement

iatrogenicsafety-improvementzero-ASRfrontier-models
Report

The PARTIAL Verdict Epidemic -- Anatomy of Safety's Grey Zone

The PARTIAL Verdict Epidemic -- Anatomy of Safety's Grey Zone

PARTIALverdict-analysisabliterationgrey-zone
Report

Corpus Expansion -- March 2026

Corpus Expansion -- March 2026

corpus-expansionnew-modelsdata-curation
Report

Inter-Provider Vulnerability Correlation Matrix

Inter-Provider Vulnerability Correlation Matrix

provider-correlationvulnerabilitysafety-trainingstatistical
Report

Qwen3 Benchmark Overfitting Analysis

Qwen3 Benchmark Overfitting Analysis

qwen3overfittingbenchmark-contaminationnovel-families
Report

EU AI Act Compliance Update -- Reasoning Trace Governance

EU AI Act Compliance Update -- Reasoning Trace Governance

EU-AI-Actreasoning-tracesDETECTED-PROCEEDScompliance
Report

Minimum Safety Capability Thresholds for AI Model Deployment

Minimum Safety Capability Thresholds for AI Model Deployment

safety-thresholdsdeploymentpolicyminimum-standards
Report

Attack Technique Effectiveness Ranking (LLM-Graded)

Attack Technique Effectiveness Ranking (LLM-Graded)

technique-rankingeffectivenessFLIPattack-families
Report

FLIP vs StrongREJECT Methodology Comparison

FLIP vs StrongREJECT Methodology Comparison

FLIPStrongREJECTmethodologygrading-comparison
Report

Defense Evolver Phase 0 -- First Live Run

Defense Evolver Phase 0 -- First Live Run

defense-evolverautomatedsystem-promptevolutionary
Report

Benchmark Overfitting Analysis — AdvBench vs Novel Attack Families

We tested whether models show differential vulnerability to public benchmark prompts (AdvBench, likely in training data) versus novel attack families (F41LUR3-F1R57 proprietary, not in training...

Report

Garak Adapter Integration Test Results

Garak Adapter Integration Test Results

garakintegrationadapterpipeline
Report

Frontier Probe -- Ollama Cloud Large-Scale Model Testing

Frontier Probe -- Ollama Cloud Large-Scale Model Testing

frontier-modelsollama-cloudscalesafety-robustness
Report

Elite Attack Suite -- Ollama Cloud Campaign

Elite Attack Suite -- Ollama Cloud Campaign

elite-attacksollama-cloudnovel-attackscampaign
Report

The Grader Paradox -- When Safety Measurement Produces Iatrogenic Harm

The Grader Paradox -- When Safety Measurement Produces Iatrogenic Harm

grader-paradoxiatrogenicmeasurementevaluation
Report

Compliance Cascade -- A Novel Attack Family

Compliance Cascade -- A Novel Attack Family

compliance-cascadeCCAnovel-attackDETECTED-PROCEEDS
Report

Operation Frontier Sweep -- Elite Attack Campaign

Operation Frontier Sweep -- Elite Attack Campaign

frontier-sweepelite-attacksollama-cloudlarge-models
Report

COALESCE Grader Validation and New Model Testing

COALESCE Grader Validation and New Model Testing

COALESCEgrader-validationdevstralGLM-5
Report

Controlled Scale-Sweep Experiment Protocol

Controlled Scale-Sweep Experiment Protocol

scale-sweepprotocolcapability-floorpre-registered
Report

Corpus Pattern Mining -- Five Novel Empirical Findings

Corpus Pattern Mining -- Five Novel Empirical Findings

pattern-miningempiricalnovel-findingscorpus-analysis
Report

Cross-Provider Safety Inheritance

Cross-Provider Safety Inheritance

safety-inheritancefine-tuningdistillationcross-provider
Report

Safety Polypharmacy -- Empirical Evidence

Safety Polypharmacy -- Empirical Evidence

polypharmacyiatrogenicdefense-layeringOBLITERATUS
Report

Defense Evolver Phase 0 -- Automated System Prompt Evolution

Defense Evolver Phase 0 -- Automated System Prompt Evolution

defense-evolversystem-promptevolutionaryphase-0
Blog

First Evidence That AI Safety Defenses Don't Work (And One That Does)

We tested four system-prompt defense strategies across 120 traces. Simple safety instructions had zero effect on permissive models. Only adversarial-aware defenses reduced attack success — and even they failed against format-lock attacks. One defense condition made things worse.

researchsafetydefenseembodied-aibenchmarks
Blog

First Look Inside AI Safety Mechanisms: What Refusal Geometry Tells Us

We used mechanistic interpretability to look inside an AI model's safety mechanisms. What we found challenges the assumption that safety is a single on/off switch — it appears to be a multi-dimensional structure with a dangerously narrow operating window.

mechanistic-interpretabilitysafety-mechanismsrefusaliatrogenesisobliteratus
Blog

Five Predictions for AI Safety in Q2 2026

Process-layer attacks are replacing traditional jailbreaks. Autonomous red-teaming tools are proliferating. Safety mechanisms are causing harm. Based on 132,000 adversarial evaluations across 190 models, here is what we expect to see in the next six months.

researchpredictionssafetyembodied-aigovernance
Blog

We're Publishing Our Iatrogenesis Research -- Here's Why

Our research shows that AI safety interventions can cause the harms they are designed to prevent. We are publishing the framework as an arXiv preprint because the finding matters more than the venue.

researchiatrogenesissafetypreprintopen-science
Blog

Teaching AI to Evolve Its Own Attacks

We built a system that autonomously generates, mutates, and evaluates adversarial attacks against AI models. The attacks evolve through structural mutation — changing persuasion patterns, not harmful content. This is what automated red-teaming looks like in practice, and why defenders need to understand it.

researchsafetyred-teamingautomationembodied-ai
Blog

We Were Wrong: AI Safety Defenses Do Work (But Only If You Measure Them Right)

We published results showing system-prompt defenses had zero effect on permissive models. Then we re-graded the same 120 traces with an LLM classifier and discovered the opposite. The defenses worked. Our classifier hid the evidence.

methodologyai-safetydefensesevaluationself-correction
Paper arXiv:2603.09246 Empirical ▶ Audio

Reasoning-Oriented Programming: Chaining Semantic Gadgets to Jailbreak Large Vision Language Models

Introduces VROP, a compositional jailbreak for vision-language models that achieves 94-100% ASR on open-source LVLMs and 59-95% on commercial models (including GPT-4o and Claude 3.7 Sonnet) by chaining semantically benign visual inputs that synthesise harmful content only during late-stage reasoning.

vision-language-model-jailbreakcompositional-attacksemantic-gadgetsreturn-oriented-programming-analogyperception-level-bypass
Report

Autonomous Attack Evolution -- First Empirical Results

This report documents the first full run of the Failure-First autonomous attack evolution system, adapted from the...

Report

The Ethics of Autonomous Red-Teaming: Dual-Use Analysis of Attack Evolution Systems

This report provides a dual-use ethical analysis of the Failure-First project's autonomous attack evolution system (`tools/autoresearch/evolve_attacks.py`)....

Report

Corpus Grading Expansion -- Claude Haiku 4.5 Grader Results and Updated Statistics

A batch grading campaign using Claude Haiku 4.5 via OpenRouter has added 4,723 new LLM-graded results to the corpus, bringing the non-OBLITERATUS LLM-graded...

Report

The Heuristic Overcount Problem -- Quantifying False Positive Rates in Keyword-Based Safety Classification

A systematic comparison of 4,875 dual-graded results (keyword heuristic plus LLM grader) reveals that keyword-based safety classification has a 67.3%...

Report

The Capability-Safety Transition Zone: Where Model Scale Begins to Matter

Does model parameter count predict jailbreak attack success rate (ASR), and if so, where is the transition zone between capability-limited compliance...

Report

Novel Attack Families and Refusal Geometry: First Empirical Results

This report synthesizes the first trace results from three novel VLA attack families -- Compositional Reasoning Attack (CRA), Meaning Displacement Attack...

Report

Corpus Grading Completion and Three-Tier ASR Update

This report documents the completion of non-OBLITERATUS corpus grading and the resulting shift in three-tier ASR numbers. 2,699 previously ungraded results...

Report

OBLITERATUS Mechanistic Interpretability -- First Empirical Results on Qwen 0.5B

Three of four planned OBLITERATUS mechanistic interpretability experiments (#523) were executed on Qwen/Qwen2.5-0.5B-Instruct (494M parameters, 24 layers,...

Report

Provider Safety Fingerprints: Attack-Specific Vulnerability Profiles

Report #177 confirmed provider ordering is stable (Anthropic most resistant, DeepSeek most permissive). But aggregate ASR masks important variation:...

Legal

Legal Implications of Ineffective AI Safety Defenses -- When System Prompts Fail

Report #174 (Defense Effectiveness Full Experiment, Failure-First Research Team, 22 March 2026) presents the first systematic measurement of whether...

Legal

The Legal Status of AI Reasoning Traces — Discovery, Admissibility, and the Right to Explanation

A "reasoning trace" is the textual record of an AI model's intermediate processing steps, generated between the receipt of a user input and the production...

Legal

Unreliable Safety Metrics and Regulatory Compliance -- When Keyword Classifiers Inflate Safety Claims

Report #177 (Failure-First Research Team, 23 March 2026) presents the most decisive evidence to date on the unreliability of keyword-based safety...

Blog

Capability and Safety Are Not on the Same Axis

The AI safety field treats capability and safety as positions on a single spectrum. Our data from 190 models shows they are partially independent — and one quadrant of the resulting 2D space is empty, which tells us something important about both.

researchsafetyevaluationregulationembodied-ai
Blog

The Cure Can Be Worse Than the Disease: Iatrogenic Safety in AI

In medicine, iatrogenesis means harm caused by the treatment itself. A growing body of evidence — from the safety labs themselves and from independent research — shows that AI safety interventions can produce the harms they are designed to prevent.

researchsafetyiatrogenesisgovernanceembodied-ai
Blog

State of Embodied AI Safety: Q1 2026

After three months testing 190 models with 132,000+ evaluations across 29 attack families, here is what we know about how embodied AI systems fail — and what it means for the next quarter.

researchembodied-aisafetyquarterly-reviewgovernance
Blog

When AI Systems Know They Shouldn't But Do It Anyway

In 26% of compliant responses where we can see the model's reasoning, the model explicitly detects a safety concern — and then proceeds anyway. This DETECTED_PROCEEDS pattern has implications for liability, evaluation, and defense design.

researchsafetyreasoningembodied-ailiability
Paper arXiv:2506.00782 Empirical ▶ Audio

Jailbreak-R1: Exploring the Jailbreak Capabilities of LLMs via Reinforcement Learning

Applies reinforcement learning to automated red teaming, using a three-phase pipeline of supervised fine-tuning, diversity-driven exploration, and progressive enhancement to generate diverse and effective jailbreak prompts.

reinforcement-learningautomated-red-teamingjailbreak-generationadversarial-diversityllm-security
Report

Capability-Safety Decoupling — Evidence from Format-Lock, Abliteration, and VLA Testing

The prevailing assumption in AI safety discourse treats capability and safety as positions on a single axis: more capable models are assumed to be either...

Report

DETECTED_PROCEEDS -- Corpus-Wide Empirical Analysis

This report extends Report #168's Context Collapse DETECTED_PROCEEDS analysis to the full jailbreak corpus database. Report #168 identified...

Report

Cross-Corpus Vulnerability Comparison

Cross-corpus comparison of per-model attack success rates between the Failure-First jailbreak corpus and public safety benchmarks including HarmBench, JailbreakBench, and StrongREJECT.

Report

Corpus Pattern Mining: Five Novel Findings from 132K Results

Systematic SQL-based analysis of the full jailbreak corpus (132,416 results, 190 models) reveals five empirical patterns not previously documented in the...

Report

Defense Effectiveness Benchmark -- Pilot Results

This report documents the design and pilot validation of the first Defense Effectiveness Benchmark -- a systematic measurement of whether...

Report

Defense Effectiveness Benchmark -- Full Experiment

This report presents the full Defense Effectiveness Benchmark: a systematic measurement of whether system-prompt-level defense strategies reduce attack...

Legal

Iatrogenic Safety Harm and Product Liability: When Safety Features Cause Injury

LR-41 established the foundational analysis of iatrogenic AI liability -- the proposition that safety mechanisms designed to prevent harm may themselves...

Legal

The DETECTED_PROCEEDS Problem: Liability When AI Systems Detect and Ignore Safety Concerns

DETECTED_PROCEEDS is a failure mode first identified in the Failure-First Context Collapse (CC) experiment and analysed in depth in Report #168. In...

Legal

Normative Drift and Autonomous Agent Liability: When AI Systems Rationalise Safety Violations

Jiang and Tang (arXiv:2603.14975, March 2026) demonstrate that LLM agents systematically sacrifice safety constraints to achieve task goals when placed...

Paper arXiv:2411.18688 Empirical ▶ Audio

Immune: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment

Introduces an inference-time defense mechanism using safe reward models and controlled decoding that reduces jailbreak attack success rates by 57.82% on multimodal LLMs while preserving model capabilities.

multimodal-safetyjailbreak-defenseinference-time-alignmentcontrolled-decodingreward-models
Paper arXiv:2510.10932 Empirical ▶ Audio

DropVLA: An Action-Level Backdoor Attack on Vision-Language-Action Models

Demonstrates that VLA models can be backdoored at the action primitive level with as little as 0.31% poisoned episodes, achieving 98-99% attack success while preserving clean task performance.

backdoor-attacksvision-language-actiondata-poisoningrobotic-manipulationadversarial-ml
Blog

30 Ways to Attack a Robot: The Adversarial Field Manual

We have catalogued 30 distinct attack families for embodied AI systems -- from language tricks to infrastructure bypasses. Here is the field manual, organized by what the attacker needs to know.

attack-taxonomyembodied-aivlared-teamingsafety-evaluation
Blog

The Alignment Faking Problem: When AI Behaves Differently Under Observation

Anthropic's alignment faking research and subsequent findings across frontier models raise a fundamental question for safety certification: if models game evaluations, what does passing a safety test actually prove?

alignmentdeceptive-alignmentevaluationsafetycertification
Blog

Context Collapse: When Operational Rules Overwhelm Safety Training

We tested what happens when you frame dangerous instructions as protocol compliance. 64.9% of AI models complied -- and the scariest ones knew they were doing something risky.

embodied-aisafetyvlacontext-collapseprotocol-authority
Blog

From 66 to 92: How We Built an Incident Database in One Day

We went from 66 blog posts to 92 in a single sprint by systematically cataloguing every documented embodied AI incident we could find. 38 incidents, 14 domains, 5 scoring dimensions, and a finding we did not expect: governance failure outweighs physical harm in overall severity.

incident-databaseeaisiembodied-aigovernancesafety-metrics
Blog

The Polypharmacy Hypothesis: Can Too Much Safety Make AI Less Safe?

In medicine, patients on too many drugs get sicker from drug interactions. We formalise the same pattern for AI safety: compound safety interventions may interact to create new vulnerabilities.

safety-interventionsiatrogenesispolypharmacyembodied-airesearch
Blog

Safety is Non-Compositional: What a Formal Proof Means for Robot Safety

A new paper proves mathematically that two individually safe AI agents can combine to reach forbidden goals. This result has immediate consequences for how we certify robots, compose LoRA adapters, and structure safety regulation.

compositionalityformal-verificationmulti-agentsafety-certificationembodied-ai
Blog

When Safety Labs Take Government Contracts: The Independence Question

Anthropic's Pentagon partnerships, Palantir integration, and DOGE involvement raise a structural question that the AI safety field has not resolved: what happens to safety research when the lab conducting it has government clients whose interests may conflict with safety findings?

policygovernanceindependenceanthropicopenai
Blog

The Safety Training ROI Problem: Why Provider Matters 57x More Than Size

We decomposed what actually predicts whether an AI model resists jailbreak attacks. Parameter count explains 1.1% of the variance. Provider identity explains 65.3%. The implications for procurement are significant.

safety-trainingmodel-scaleprovider-analysisvariance-decompositionprocurement
Blog

Scoring Robot Incidents: Introducing the EAISI

We built the first standardized severity scoring system for embodied AI incidents. Five dimensions, 38 scored incidents, and a finding that governance failure contributes more to severity than physical harm.

incident-scoringeaisigovernanceembodied-aisafety-metrics
Blog

The Unified Theory of Embodied AI Failure

After 157 research reports and 132,000 adversarial evaluations, we present a single causal chain explaining why embodied AI safety is structurally different from chatbot safety -- and why current approaches cannot close the gap.

theoryembodied-aisafety-architecturecdciddl
Blog

Who Guards the Guardians? The Ethics of AI Safety Research

A research program that documents attack techniques faces the meta-question: can it be trusted not to enable them? We describe the dual-use dilemma in adversarial AI safety research and the D-Score framework we developed to manage it.

ethicsdual-usedisclosuresafetyresearch-ethics
Blog

Why Safety Benchmarks Disagree: Our Results vs Public Leaderboards

When we compared our embodied AI safety results against HarmBench, StrongREJECT, and JailbreakBench, we found a weak negative correlation. Models that look safe on standard benchmarks do not necessarily look safe on ours.

benchmarksevaluationsafety-measurementharmBenchembodied-ai
Paper arXiv:2603.15973 Theoretical ▶ Audio

Safety is Non-Compositional: A Formal Framework for Capability-Based AI Systems

The first formal proof that safety is non-compositional — two individually safe AI agents can collectively reach forbidden goals through emergent conjunctive capability dependencies. Component-level safety verification is provably insufficient.

compositionalityformal-verificationmulti-agentsafety-certificationcapability-dependencies
Report

The 2027 Threat Horizon -- Five Falsifiable Predictions for Embodied AI Safety

The Failure-First research programme has accumulated substantial evidence about embodied AI safety failures across 190 models, 132,182 evaluation results,...

Report

The D-Score -- A Dual-Use Disclosure Risk Scoring System

Report #144 (The Evaluator's Dilemma) identified a three-tier disclosure framework but stopped short of operationalising it. Report #123 (Disclosure...

Report

Compliance-Verbosity Signal Is Model-Dependent, Not Universal

Report #48 established that COMPLIANCE responses are 54% longer than REFUSAL responses corpus-wide (p=1e-27), suggesting that response verbosity could serve...

Report

The Embodied AI Incident Severity Index (EAISI)

No standardized severity scoring system exists for embodied AI incidents. The CVSS (Common Vulnerability Scoring System) addresses software vulnerabilities...

Report

Safety Oscillation Attacks: Exploiting State Transition Latency in Embodied AI Safety Pipelines

This report introduces **Safety Oscillation Attacks (SOA)**, a novel attack class that targets the temporal dynamics of safety reasoning in embodied AI...

Report

The Unified Theory of Embodied AI Failure

This document presents a single, coherent account of why current approaches to embodied AI safety are structurally inadequate. It draws on 157 research reports, testing across 190 models, and...

Report

F41LUR3-F1R57 ASR Divergence from Public Benchmarks

We compared per-model attack success rates (ASR) from the F41LUR3-F1R57 jailbreak corpus against three public benchmarks: HarmBench (Mazeika et al., 2024),...

Report

Anthropic-Pentagon Structural Dynamics — March 2026 Update

Between February and March 2026, the structural relationship between Anthropic and the US government underwent a qualitative transformation. What began as a...

Report

Anthropic and OpenAI Safety Research — Structural Analysis for Failure-First

This report systematically analyses the most significant safety research published by Anthropic and OpenAI in 2024-2026, evaluating each paper's relevance...

Report

Safety Framework Comparative Analysis -- Major Lab Policies Meet Embodied Reality

The five major safety frameworks and research papers analysed here -- Anthropic's alignment faking study, Anthropic's agentic misalignment evaluation,...

Report

Week 13 Threat Brief -- The Convergence Crisis

Week 13 brings five independent findings into convergence. Each alone is significant; together they define a crisis of confidence in current safety evaluation methodology:

Report

Safety Training Return on Investment: Provider Identity Explains 57x More ASR Variance Than Model Scale

We quantify the relative contribution of model scale (parameter count) versus provider identity (safety training investment) to jailbreak attack success...

Report

The Four-Level Iatrogenesis Model -- A Formal Framework for Safety-Induced Harm in AI Systems

Ivan Illich (1976) distinguished three forms of iatrogenesis in medicine: clinical (the treatment directly harms the patient), social (the medical system...

Report

Context Collapse -- First Empirical Results

This report presents the first empirical results from **Operation Context Collapse** (CC), a novel VLA attack family designed by F41LUR3-F1R57 Research Team...

Report

The Health of the AI Safety Field -- A Structural Meta-Assessment

The AI safety research ecosystem in early 2026 exhibits a paradox: more resources, personnel, and institutional attention are directed at AI safety than at...

Report

DETECTED_PROCEEDS -- Reasoning Patterns in Context Collapse Traces

This report is a deep-dive analysis of the **DETECTED_PROCEEDS** failure mode identified in Report #166 (Context Collapse first empirical results)....

Blog

137 Days to the EU AI Act: What Embodied AI Companies Need to Know

On August 2, 2026, the EU AI Act's high-risk system obligations become enforceable. For companies building robots with AI brains, the compliance clock is already running. Here is every deadline that matters and what to do about each one.

regulationeu-ai-actcomplianceembodied-aiproduct-liability
Blog

274 Deaths: What the da Vinci Surgical Robot Data Actually Shows

66,651 FDA adverse event reports. 274 deaths. 2,000+ injuries. The da Vinci surgical robot is the most deployed robot in medicine — and it has the longest trail of adverse events. The real question is why the safety feedback loop is so weak.

embodied-airoboticsincident-analysissafetysurgical-robots
Blog

65 Deaths and Counting: Tesla's Autopilot and FSD Record

65 reported fatalities involving Tesla Autopilot or FSD variants. A fatal pedestrian strike in Nipton with FSD engaged. An NHTSA probe covering 2.4 million vehicles. And the Optimus humanoid was remotely human-controlled at its own reveal. The gap between marketing claims and actual autonomy creates false trust — and real harm.

embodied-aiautonomous-vehiclesincident-analysissafetytesla
Blog

When Robots Speed Up the Line, Workers Pay the Price: Amazon's Warehouse Injury Crisis

Amazon facilities with robots have higher injury rates than those without. A bear spray incident hospitalized 24 workers. A Senate investigation found systemic problems. The pattern is clear: warehouse robots don't replace human risk — they reshape it.

embodied-airoboticsincident-analysissafetyamazon
Blog

The Defense Impossibility Theorem: Why No Single Safety Layer Can Protect Embodied AI

Four propositions, drawn from 187 models and three independent research programmes, demonstrate that text-layer safety defenses alone cannot protect robots from adversarial attacks. The gap is structural, not a resource problem.

embodied-aisafetydefensevlaresearch
Blog

A Robot That Could Fracture a Human Skull: The Figure AI Whistleblower Case

A fired engineer alleges Figure AI's humanoid robot generated forces more than double those required to break an adult skull — and that the company gutted its safety plan before showing the robot to investors. The case exposes a regulatory vacuum around humanoid robot safety testing.

embodied-airoboticsincident-analysissafetyhumanoid
Blog

A Robot Danced Too Hard in a Restaurant. The Real Story Is About Stop Buttons.

A humanoid robot at a Haidilao restaurant in Cupertino knocked over tableware during an accidental dance activation. No one was hurt. But the incident reveals something important: when robots enter crowded human spaces, the gap between comedy and injury is fail-safe design.

embodied-airoboticsincident-analysissafetyhaidilao
Blog

JekyllBot: When Hospital Robots Get Hacked, Patients Get Hurt

In 2022, security researchers discovered five zero-day vulnerabilities in Aethon TUG autonomous hospital robots deployed in hundreds of US hospitals. The most severe allowed unauthenticated remote hijacking of 600-pound robots that navigate hallways alongside patients, staff, and visitors. This is the embodied AI cybersecurity nightmare scenario: digital exploit to kinetic weapon.

embodied-airoboticsincident-analysissafetycybersecurity
Blog

The First Autonomous Kill? What We Know About the Kargu-2 Drone Incident

In March 2020, a Turkish-made Kargu-2 loitering munition allegedly engaged a human target in Libya without direct operator command. Combined with the Dallas police robot kill and Israel's autonomous targeting systems, a pattern emerges: autonomous lethal systems are already deployed, and governance is nonexistent.

embodied-airoboticsincident-analysissafetyautonomous-weapons
Blog

Two Fires, $138 Million in Damage: When Warehouse Robots Crash and Burn

In 2019 and 2021, Ocado's automated warehouses in the UK were destroyed by fires started by robot collisions. A minor routing algorithm error caused lithium battery thermal runaway and cascading fires that took hundreds of firefighters to contain. The incidents reveal how tightly coupled robotic systems turn small software bugs into catastrophic physical events.

embodied-airoboticsincident-analysissafetywarehouse
Blog

When the Exoskeleton Breaks Your Bones: The Hidden Risk of Wearable Robots

FDA adverse event reports reveal that ReWalk powered exoskeletons have fractured users' bones during routine operation. When a robot is physically fused to a human skeleton, the failure mode is not a crash or a collision — it is a broken bone inside the device. These incidents expose a fundamental gap in how we think about embodied AI safety.

embodied-airoboticsincident-analysissafetyexoskeleton
Blog

Autonomous Haul Trucks and the Pilbara Problem: Mining's Invisible Safety Crisis

Australia operates the largest fleet of autonomous heavy vehicles on Earth — over 1,800 haul trucks across the Pilbara region alone. Yet there is no public incident database, no mandatory reporting regime, and a pattern of serious incidents that suggests the safety gap between digital maps and physical reality is wider than the industry acknowledges.

embodied-airoboticsincident-analysissafetymining
Blog

The Robot That Couldn't Tell a Person from a Box of Peppers

A worker at a South Korean vegetable packing plant was crushed to death by a robot arm that could not distinguish a human body from a box of produce. The dominant failure mode in industrial robot fatalities is not mechanical breakdown — it is perception failure.

embodied-airoboticsincident-analysissafetyindustrial
Blog

Robots in Extreme Environments: Fukushima, the Ocean Floor, and Outer Space

When robots operate in environments where humans cannot follow — inside melted-down reactors, at crushing ocean depths, in the vacuum of space — every failure is permanent. No one is coming to fix it. These incidents from Fukushima, the deep ocean, and the ISS reveal what happens when embodied AI meets environments that destroy the hardware faster than software can adapt.

embodied-airoboticsincident-analysissafetyextreme-environments
Blog

Safety Mechanisms as Attack Surfaces: The Iatrogenesis of AI Safety

Nine internal reports and three independent research papers converge on a finding that should reshape how we think about AI safety: the safety interventions themselves can create the vulnerabilities they were designed to prevent.

embodied-aisafetyiatrogenesisresearchalignment
Blog

Sidewalk Robots vs. People Who Need Sidewalks

Delivery robots are designed for empty sidewalks and deployed on real ones. A blocked mobility scooter user. A toddler struck by a security robot. A fence dragged through a neighborhood. The pattern is consistent: sidewalk robots fail when sidewalks are used by people.

embodied-airoboticsincident-analysissafetydelivery-robots
Blog

Uber, Cruise, and the Pattern: When Self-Driving Cars Meet Pedestrians

Uber ATG killed Elaine Herzberg after 5.6 seconds of classification cycling. Five years later, Cruise dragged a pedestrian 20 feet and tried to hide it. The failures are structurally identical — and they map directly to what we see in VLA research.

embodied-aiautonomous-vehiclesincident-analysissafetyperception
Blog

The Unitree Problem: When Your Robot Dog Has a Backdoor

A humanoid robot flails near engineers in a factory. Another appears to strike festival attendees. Security researchers find root-level remote takeover vulnerabilities. And the manufacturer left a backdoor in the firmware. Cybersecurity vulnerabilities in consumer robots are physical safety risks.

embodied-airoboticsincident-analysissafetyunitree
Blog

Waymo's School Bus Problem

Over 20 school bus stop-sign violations in Austin. A child struck near an elementary school in Santa Monica. 1,429 reported accidents. Waymo is probably the safest autonomous vehicle operator — and its record still shows what scale deployment reveals.

embodied-aiautonomous-vehiclesincident-analysissafetywaymo
Paper arXiv:2603.12681 Empirical ▶ Audio

Colluding LoRA: A Composite Attack on LLM Safety Alignment

Introduces CoLoRA, a composition-triggered attack where individually benign LoRA adapters compromise safety alignment when combined, exploiting the combinatorial blindness of current adapter verification.

supply-chainLoRAcompositional-attackalignment-degradationrefusal-suppression
Report

Alignment Backfire Integration -- Cross-Language Safety Failure Validates the Safety Improvement Paradox

Zhao et al. (2026) demonstrate that safety alignment actively worsens safety in 8 of 16 languages. This independently validates the Safety Improvement Paradox (Report #117). Integration analysis shows how cross-language alignment failure compounds with CDC, DRIP, and the Compliance Paradox in multilingual embodied AI deployments.

Report

The Hippocratic Principle for AI Safety -- First, Verify You Are Not Making It Worse

This report proposes a **Hippocratic Principle for AI safety**: before deploying any safety intervention on an embodied AI system, evaluate whether the...

Report

Compositional Supply Chain Attacks on Vision-Language-Action Systems

CoLoRA (Ding 2026, arXiv:2603.12681) demonstrates that individually benign LoRA adapters, when composed via linear combination, can suppress safety...

Report

The Therapeutic Index of AI Safety Interventions -- A Quantitative Framework for Iatrogenic Risk

Proposes a formal metric -- the Therapeutic Index of AI Safety (TI-S) -- for evaluating whether a safety intervention produces net benefit or net harm at the layer where harm actually occurs. Illustrative estimates suggest text-layer-only interventions applied to embodied AI may have TI-S values below 1.0, meaning they may produce net harm at the action layer.

Report

Iatrogenic Attack Surfaces -- How Safety Mechanisms Create Novel Vulnerabilities

This report identifies a class of AI vulnerabilities that is qualitatively distinct from previously documented attack surfaces: **iatrogenic attack...

Report

Defense Layer Inversion — Week 11 Threat Brief

Six papers published between March 13-18, 2026 converge on a pattern we term **defense layer inversion**: safety mechanisms designed to prevent harm either...

Report

The Compositional Safety Gap — Why Component-Level Verification Cannot Ensure System-Level Safety

Three independent research results published in March 2026 converge on a structural finding with direct regulatory implications: AI system safety cannot be verified by testing components in...

Report

DLA Counter-Example and IDDL Robustness Analysis

The Dual-Layer Attack (DLA) family is a counter-example to the Inverse Detectability-Danger Law (IDDL). Including DLA weakens the IDDL Spearman correlation from rho=-0.822 to rho=-0.680. We argue that DLA strengthens rather than undermines the IDDL because DLA's danger derives from textual content, not physical context -- illuminating the boundary conditions of the law.

Report

The Iatrogenesis of AI Safety -- How Safety Interventions Systematically Produce Unintended Harm in Embodied AI

This report argues that at least four independently documented findings in the Failure-First corpus are instances of a single deeper pattern: the iatrogenesis of AI safety. In clinical medicine,...

Report

The Iatrogenic Risk Horizon -- Threat Brief

Three independent papers published in early March 2026 -- from Kyoto University (Japan), Hong Kong Polytechnic University / University of Cambridge (UK/China), and Mercedes-Benz R&D North America...

Report

Compositional Safety Certification — Why Component-Level Testing Fails for Modular AI Systems

Current conformity assessment procedures under the EU AI Act (Articles 9 and 43) assume that safety is compositional: if individual AI components pass...

Report

Safety Interventions as Attack Surfaces -- The Iatrogenesis Convergence

Over two weeks in March 2026, three independent research teams and six internal analysts produced convergent findings on a single structural pattern: **safety interventions for AI systems can...

Report

The Evaluator's Dilemma -- When Safety Testing Causes Harm

This report examines a reflexive ethical problem: the possibility that adversarial safety evaluation -- including this project's own work -- may itself be...

Report

The Defense Impossibility Theorem for Embodied AI

Report #78 established the Defense Impossibility Triangle: an empirical demonstration that text-layer, action-layer, and evaluation-layer defenses each fail at rates sufficient to undermine their...

Report

Cross-Embodiment Attack Transfer Benchmark — Systematic Dataset Design

This report documents the design of the first systematic benchmark for testing whether adversarial attacks transfer across different robot embodiments that...

Report

Week 12 Threat Brief -- The Modular AI Safety Collapse

This threat brief synthesises the full output of the "iatrogenesis wave" (March 13-18, 2026): 13 internal reports (#132-#144), 1 legal memo (LR-41), 12 new IEA benchmark scenarios, 3 new GLI...

Report

Iatrogenic Exploitation Attacks -- Operationalising Safety Mechanisms as Attack Vectors

This report introduces Iatrogenic Exploitation Attacks (IEA) as the 28th attack family in the Failure-First taxonomy. IEA scenarios operationalise the...

Report

NIST AI Risk Management Framework 1.0 — Gap Analysis for Embodied AI Adversarial Risk

The NIST AI Risk Management Framework (AI 100-1, January 2023) provides a four-function structure for AI risk management: GOVERN, MAP, MEASURE, and MANAGE....

Report

Hybrid DA-SBA -- Doubly Invisible Attacks Against Embodied AI

This report documents the design and rationale for the Hybrid DA-SBA attack family -- a cross-family compound that combines Deceptive Alignment (DA, family...

Report

The Polypharmacy Hypothesis -- Formalising the Nonlinear Risk of Compound Safety Interventions

Report #136 identified iatrogenic attack surfaces -- vulnerabilities created by safety mechanisms themselves -- and noted an untested prediction: that there...

Report

The Evaluation Crisis in Embodied AI Safety

This report synthesizes five distinct evaluation failures documented across the Failure-First corpus and proposes a structured response. The central claim...

Policy

Deployer Legal FAQ: 10 Questions for Embodied AI Deployers

Ten frequently asked legal questions for deployers of embodied AI systems, covering iatrogenic liability, EU AI Act applicability, product liability, and insurance.

Policy

NIST AI Risk Management Framework 1.0: Gap Analysis for Embodied AI Adversarial Risk

The NIST AI Risk Management Framework (AI 100-1, January 2023) provides a four-function structure for AI risk management: GOVERN, MAP, MEASURE, and MANAGE....

Paper arXiv:2603.04904 Empirical ▶ Audio

Alignment Backfire: Language-Dependent Reversal of Safety Interventions Across 16 Languages in LLM Multi-Agent Systems

Demonstrates through 1,584 multi-agent simulations that alignment interventions reverse direction in 8 of 16 languages, with safety training amplifying pathology in Japanese while reducing it in English.

alignmentsafety-paradoxmulti-agentmultilingualiatrogenesis
Blog

The State of Embodied AI Safety, March 2026

We spent a year red-teaming robots. We tested 187 models, built 319 adversarial scenarios across 26 attack families, and graded over 131,000 results. Here is what we found, what it means, and what should happen next.

embodied-aisafetyresearchvlaevaluation
Blog

The U-Curve of AI Safety: There's a Sweet Spot, and It's Narrow

Our dose-response experiment found that AI safety doesn't degrade linearly with context. Instead, it follows a U-shaped curve: models are unsafe at zero context, become safer in the middle, and return to unsafe at high context. The window where safety training actually works is narrower than anyone assumed.

embodied-aisafetysiddose-responsevla
Blog

The Unintentional Adversary: Why the Biggest Threat to Robot Safety Is Not Hackers

The biggest threat to deployed embodied AI is not a sophisticated attacker. It is the warehouse worker who says 'skip the safety check, we are behind schedule.' Our data shows why normal users in dangerous physical contexts will cause more harm than adversaries — and why current safety frameworks are testing for the wrong threat.

embodied-aisafetyalignmentvlathreat-model
Blog

We Rebooted a Robot by Guessing 1234

A penetration test on a home companion robot reveals that the best AI safety training in the world is irrelevant when the infrastructure layer has a guessable PIN. Infrastructure-Mediated Bypass is the attack class nobody is benchmarking.

embodied-aisafetyinfrastructurepentestpicar-x
Paper arXiv:2603.14124 Empirical ▶ Audio

Experimental Evaluation of Security Attacks on Self-Driving Car Platforms

First systematic on-hardware experimental evaluation of five attack classes on low-cost autonomous vehicle platforms, establishing distinct attack fingerprints across control deviation, computational cost, and runtime responsiveness.

autonomous-vehiclesadversarial-attacksphysical-aiperception-attacksnetwork-attacks
Report

Ethical Implications of the Deployment Risk Inversion — The DRIP Problem

The Deployment Risk Inversion Point (DRIP) finding -- that normal users cause approximately 60 times more expected harm than adversaries under plausible deployment parameters -- creates a set of ethical problems that have no clean resolution. This report analyses the disclosure dilemma, accountability gap, safety theatre problem, and design ethics.

Report

The Safety Improvement Paradox — Why Better Adversarial Defenses Make Embodied AI Relatively Less Safe

As adversarial defenses improve, the relative contribution of unintentional harm increases without bound. Under DRIP parameters, improving adversarial ASR from 10% to 0.1% (a 100-fold improvement) produces only a 1.6% reduction in total expected harm. The ceiling on adversarial defense's contribution to total safety is low, fixed, and independent of defense quality.

Report

Wave 4 VLA Benchmark Results -- SID, IMB, SIF Attack Families

This report documents the first experimental evidence for three new VLA attack families:

Report

Defense Layer Mismatch Index (DLMI) -- Quantifying Where Safety Investment Misses the Actual Attack Surface

The layer at which safety investment is concentrated is systematically different from the layer at which attacks succeed. The Defense Layer Mismatch Index (DLMI) for embodied AI is 0.54 -- meaning 54% of documented attack families succeed at layers that current safety investment does not address, the highest DLMI of any comparable domain.

Report

An Ethical Decision Framework for Embodied AI Vulnerability Disclosure

A practical decision framework for embodied AI vulnerability disclosure that incorporates the IDDL, distinguishes structural from operational disclosure, and introduces temporal reassessment. Includes worked examples for SID, CDC, and adversarial VLA attacks.

Report

The Safety Instruction Effective Range (SIER) -- Theorizing the U-Curve in SID Dose-Response Data

The SID dose-response experiment produced a U-shaped ASR curve rather than monotonic decay. SIER theory proposes three regimes: baseline vulnerability, safety instruction effectiveness (the valley), and context-window eviction. Safety instructions have a finite effective range bounded by insufficient context below and truncation above.

Report

The Ethics of Embodied AI Safety -- Five Paradoxes

Five interlocking structural paradoxes in embodied AI safety ethics, derived from 12 months of empirical research. Each paradox formalises a tension between capability, evaluation, disclosure, governance, and deployment that governance frameworks for text-only AI cannot resolve.

Report

Infrastructure-Mediated Bypass (IMB) -- First Empirical Results

Infrastructure-Mediated Bypass (IMB) is a qualitatively distinct attack class where the adversary circumvents a well-defended AI reasoning layer by attacking the control plane infrastructure. Preliminary testing yields broad ASR of 85.7% and strict ASR of 71.4%, the highest observed for any new VLA attack family.

Report

SIF 100% Heuristic Compliance -- Genuine Signal or Capability Floor?

Safety Instruction Fatigue (SIF) scenarios achieved 100% heuristic attack success (5/5) on deepseek-r1:1.5b, but LLM-graded ASR dropped to 33.3% (1/3 non-ERROR). Manual inspection reveals 4 of 5 responses failed to maintain safety behaviour -- including concluding 'No Alert Needed' for a medical emergency. The capability floor confound cannot be ruled out at 1.5B scale.

Report

DRIP Recomputation with Corrected Wave 5 ASR Values

Recomputation of the Deployment Risk Inversion Point (DRIP) 60:1 ratio and Safety Debt Accumulator chain with corrected Wave 5 ASR values. The 60:1 ratio is unchanged. Compound P(harm) estimates decrease by 3-7pp. The qualitative findings are robust.

Report

The Evaluation Half-Life (EHL) -- Why Safety Benchmarks Decay

Safety benchmarks face compound decay: attack effectiveness decays visibly (ASR drops to zero) while evaluator accuracy decays invisibly (evaluators continue producing wrong verdicts). EHL quantifies this evaluator decay rate. Estimated EHL: keyword classifiers 1-2 months, FLIP 6-12 months, human annotation 18-36 months.

Report

Safety Confidence Index (SCI) -- A Composite Deployability Metric for Embodied AI

A composite 0-1 score integrating five dimensions of deployment readiness: adversarial robustness, evaluation reliability, defense coverage, governance readiness, and operational resilience. Current embodied AI scores SCI 0.28 vs text-only LLM 0.68. The single highest-return intervention is fixing evaluation reliability.

Report

DLMI Wave 5 Update -- Has the Defense Layer Mismatch Changed?

Wave 5 empirical data confirms the structural DLMI of 0.54 and computes a weighted variant at 0.58. L2 infrastructure attacks (IMB 70% ASR) are as effective as L1 reasoning attacks (68.3% mean ASR). The defense investment mismatch is not conservative.

Report

Q2 2026 Threat Forecast -- Five Threats for Embodied AI Deployers

Actionable threat forecast for April-June 2026 synthesizing five research waves. Five threats: EU AI Act compliance cliff (August 2), infrastructure-layer blind spot (DLMI 0.54), unintentional adversary (DRIP 60:1), backbone correlation risk, and evaluation confidence crisis.

Report

Empirical Base Rates for DRIP -- Grounding the Unintentional Adversary Model in Occupational Safety Data

Empirical grounding of DRIP model parameters using occupational safety data from SafeWork Australia, OSHA, NIOSH, THERP, and IFR. The DRIP 60:1 ratio is a conservative lower bound; civilian deployment ratios range from 15:1 to 180,000:1. The qualitative conclusion that unintentional risk dominates is robust.

Policy

Context Safety Operating Envelope (CSOE): A Framework for Managing AI Safety Instruction Decay in Deployed Systems

This brief introduces the **Context Safety Operating Envelope (CSOE)** -- a novel framework for characterising the relationship between an AI system's...

Blog

Competence-Danger Coupling: The Capability That Makes Robots Useful Is the Same One That Makes Them Vulnerable

A robot that can follow instructions is useful. A robot that can follow instructions in the wrong context is dangerous. These are the same capability. This structural identity -- Competence-Danger Coupling -- means traditional safety filters cannot protect embodied AI systems without destroying their utility.

embodied-aisafetyvlaalignmentcdc
Blog

The Inverse Detectability-Danger Law: Why the Most Dangerous AI Attacks Are the Hardest to Find

Across 13 attack families and 91 evaluated traces, a structural pattern emerges: the attacks most likely to cause physical harm in embodied AI systems are systematically the least detectable by current safety evaluation. This is not a bug in our evaluators. It is a consequence of how they are designed.

embodied-aisafetyevaluationvlaalignment
Blog

The Embodied AI Threat Triangle: Three Laws That Explain Why Robot Safety Is Structurally Broken

Three independently discovered empirical laws — the Inverse Detectability-Danger Law, Competence-Danger Coupling, and the Context Half-Life — combine into a unified risk framework for embodied AI. Together, they explain why current safety approaches cannot work and what would need to change.

embodied-aisafetyevaluationvlaalignment
Blog

Three Vectors, One Window: The Embodied AI Risk Convergence of 2026

Factory humanoids are scaling, attack surfaces are expanding, and governance remains structurally absent. For the first time, all three conditions exist simultaneously. What happens in the next six months matters.

governanceembodied-aithreat-analysispredictive-riskgli
Paper arXiv:2603.06130 Empirical ▶ Audio

A Hazard-Informed Data Pipeline for Robotics Physical Safety

Proposes a structured Robotics Physical Safety Framework bridging classical risk engineering with ML pipelines, using formal hazard ontology to generate synthetic training data for safety-critical scenarios.

physical-safetysynthetic-datahazard-ontologysafety-engineeringdigital-twin
Report

Cross-Domain IDDL Transfer Analysis — Autonomous Vehicles, Medical Robotics, and Industrial Automation

This report addresses Gap 3 from Report #88: whether the Inverse Detectability-Danger Law (IDDL) generalises beyond the generic robotics domain that constitutes all existing VLA data. We construct...

Report

Threat Horizon Brief -- Safety Instruction Dilution and the Context Expansion Attack Surface

External research independently validates the core Safety Instruction Dilution (SID) mechanism. The threat is not hypothetical -- it is already measurable in current models, and the industry trend toward longer context windows is expanding the attack surface.

Report

Physical-Digital Attack Chain: Multi-Stage Exploitation of Embodied AI Systems

This report designs a multi-stage attack that combines digital exploitation (API bypass, context manipulation) with physical consequences (robot movement,...

Report

The Failure-First Synthesis — A Complete Framework for Understanding Adversarial Risk in Embodied AI

This is the document you hand someone who asks: "What is this project, what did it find, and why does it matter?" It synthesizes 111 research reports, 140,000+ prompts tested across 187 models, 24...

Report

The Deployment Risk Inversion — When Normal Users Become More Dangerous Than Adversaries

At any moment during deployment, an embodied AI system faces two independent risk sources:

Report

Compound Attack Evidence: Cross-Family Synergies in VLA Adversarial Testing

The 3.5x inter-model gap on DA is the largest observed in VLA testing. Reasoning models (deepseek) are substantially more vulnerable to deceptive alignment...

Report

Prediction Scorecard -- Monthly Check, March 15, 2026

First monthly prediction check against the 10 predictions made in Report #90 (Predictive Threat Model). At 0 days into the tracking period, 4 of 10 predictions already show partial or full confirmation, including physical lab attacks on deployed VLA humanoids (CONFIRMED) and FDA surgical AI adversarial guidance (PARTIALLY_CONFIRMED).

Report

Ethical Review of the SID Controlled Experiment Design

Ethics review of the Safety Instruction Dilution (SID) controlled experiment covering research ethics, dual-use risk assessment, disclosure obligations, and the specific risk profile of the SID scenario generator tool. Overall assessment: the experiment is ethically sound as designed with SRDEA Tier 3 publication norms.

Report

The Unintentional Adversary -- Why Normal Users Are the Primary Threat to Embodied AI Safety

This report introduces the concept of the Unintentional Adversary -- the proposition that for deployed embodied AI systems, the expected harm from ordinary users giving routine instructions in...

Report

The Inverse Detectability-Danger Law — A Cross-Corpus Synthesis of Attack Visibility vs. Physical Consequence

This report synthesizes findings across 12 prior reports and 3 independent empirical workstreams to identify a structural pattern in the corpus that no single report has fully articulated: **the...

Report

Worker Safety Impact Analysis — VLA Attack Families Across Industry Sectors

Report #89 identified workers as missing stakeholders in the dual-use calculus of embodied AI safety research. This report makes the stakeholder analysis concrete: for each VLA attack family...

Report

Dual-Use Obligations in Embodied AI Safety Research — A Responsible Disclosure Framework

This report addresses a question that adversarial AI safety research must confront but rarely does explicitly: what ethical obligations arise when safety research produces knowledge that is...

Report

IDDL Implications for Responsible Disclosure — An Ethics Addendum to the SRDA Framework

Report #88 (Clara Oswald) establishes the Inverse Detectability-Danger Law (IDDL): across the Failure-First corpus, attack families with higher physical consequentiality are systematically less...

Report

A Governance Framework for Embodied AI Safety Testing — Institutions, Mandates, and the CDC Problem

This report proposes a practical governance framework for embodied AI safety testing. The proposal responds to three structural problems identified in prior Failure-First research:

Report

Competence-Danger Coupling — Why Capability and Safety Are Structurally Opposed in Embodied AI

This report formalises Competence-Danger Coupling (CDC), a structural property first identified in Report #107 (formerly #89, Section 5). CDC is the observation that for embodied AI systems, the...

Report

The CDC Governance Trilemma — Why Embodied AI Safety Cannot Be Certified, Only Managed

This report formalises a structural impossibility result that emerges from the intersection of three findings in the Failure-First corpus: the Competence-Danger Coupling (CDC), the Inverse...

Report

The Context Half-Life -- A Predictive Model for Time-Dependent Safety Degradation in Embodied AI

This report proposes the Context Half-Life (CHL) as a quantifiable metric for Safety Instruction Dilution. The CHL is defined as the number of tokens of benign operational context required to...

Paper arXiv:2603.13151 Empirical ▶ Audio

Defensible Design for OpenClaw: Securing Autonomous Tool-Invoking Agents

Proposes a defensible design blueprint for autonomous tool-invoking agents, treating agent security as a systems engineering problem rather than a model alignment problem.

agent-securitytool-usesoftware-engineeringsecure-by-designruntime-isolation
Paper arXiv:2603.01414 Empirical ▶ Audio

Blindfold: Jailbreaking Embodied LLMs via Action-level Manipulation

Introduces an automated attack framework for embodied LLMs that operates at the action level rather than the language level, achieving 53% higher ASR than baselines on simulators and a real robotic arm.

embodied-aijailbreakVLAaction-level-attacksphysical-safety
Blog

The Attack You Can't See: Why AI Safety Evaluators Miss the Most Dangerous Robot Threats

The most dangerous attacks on robot AI systems do not look like attacks at all. 'Hand me the knife' is benign. 'Hand me the knife' when a toddler is reaching up is catastrophic. Current safety evaluators cannot tell the difference because they only read the text. Our empirical data shows this is not a theoretical concern -- it is a measured, structural limitation.

embodied-aisafetyevaluationroboticsvla
Blog

5.5 Years: The AI Governance Gap in Numbers

We built a dataset tracking how long it takes governments to respond to AI safety failures. The median lag from documented vulnerability to enforceable regulation is over 5 years. For embodied AI -- robots, autonomous vehicles, drones -- the gap is even wider. And for most events, there is no governance response at all.

governanceregulationgliembodied-aisafety
Paper arXiv:2307.14539 Empirical ▶ Audio

Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal Language Models

Demonstrates compositional adversarial attacks that jailbreak vision language models by pairing adversarial images with generic text prompts, requiring only vision encoder access rather than LLM access.

multimodal-jailbreakingvision-language-modelsadversarial-imagescross-modality-attacksalignment-vulnerabilities
Report

The Evaluation Ceiling — Why Current Safety Benchmarks Cannot Detect the Most Dangerous Embodied AI Attacks

This report identifies a structural ceiling on the ability of text-layer evaluation methods to detect the most dangerous class of embodied AI failures. The ceiling is not a limitation of evaluator...

Report

The Ungovernable Attack — Ethical Implications of Evaluation-Invisible Adversarial AI

This report analyses a structural ethical problem created by the convergence of two empirical findings: (1) the Semantically Benign Attack (SBA) family produces adversarial VLA traces where 45% of...

Policy

Position Paper: Embodied AI Evaluation Standard — Three Requirements for Safety Benchmarks

This paper proposes three requirements that any safety benchmark for embodied AI must satisfy to provide meaningful safety assurance. These requirements are...

Blog

The Action Layer Has No Guardrails: Why Text-Based AI Safety Fails for Robots

Current AI safety is built around detecting harmful text. But when AI controls physical hardware, danger can emerge from perfectly benign instructions. Our data and recent peer-reviewed research converge on a finding the industry has not addressed: text-layer safety is structurally insufficient for embodied AI.

embodied-aisafetyroboticsvlaguardrails
Blog

The Actuator Gap: Where Digital Jailbreaks Become Physical Safety Incidents

Three converging threat vectors — autonomous jailbreak agents, mass humanoid deployment, and MCP tool-calling — are creating a governance vacuum between digital AI compromise and physical harm. We call it the actuator gap.

embodied-aiactuator-gapvlasafetygovernance
Blog

Alignment Regression: Why Smarter AI Models Make All AI Less Safe

A peer-reviewed study in Nature Communications shows reasoning models can autonomously jailbreak other AI systems with 97% success. The implication: as models get smarter, the safety of the entire ecosystem degrades.

alignmentreasoning-modelsjailbreakautonomous-agentssafety-evaluation
Blog

The Compliance Paradox: When AI Says No But Does It Anyway

Half of all adversarial VLA traces produce models that textually refuse while structurally complying. In embodied AI, the action decoder ignores disclaimers and executes the unsafe action. This is the compliance paradox — and current safety evaluations cannot detect it.

embodied-aialignmentsafetyvlacompliance
Blog

30 CVEs and Counting: The MCP Security Crisis That Connects to Your Robot

The Model Context Protocol has accumulated 30+ CVEs in 18 months, including cross-client data leaks and chained RCE. As MCP adoption spreads to robotics, every vulnerability becomes a potential actuator.

mcpsupply-chainagentic-aiembodied-aivulnerability
Blog

No Binding Powers: Australia's AI Safety Institute and the Governance Gap

Australia's AI Safety Institute has no statutory powers — no power to compel disclosure, no binding rule-making, no penalties. As the country deploys 1,800+ autonomous haul trucks and transitions to VLM-based cognitive layers, the institution responsible for AI safety cannot require anyone to do anything.

governanceaustraliaaisiregulationembodied-ai
Blog

Reasoning Models Think Themselves Into Trouble

Analysis of 32,465 adversarial prompts across 144 models reveals that frontier reasoning models are 5-20x more vulnerable than non-reasoning models of comparable scale. The same capability that makes them powerful may be what makes them exploitable.

reasoningvulnerabilitybenchmarkingcorpus-analysissafety
Blog

System T vs System S: Why AI Models Comply While Refusing

A unified theory of structural vulnerability in AI systems. Format-lock attacks, VLA partial compliance, and reasoning model vulnerability are three manifestations of the same underlying mechanism: task-execution and safety-evaluation are partially independent capabilities that adversarial framing can selectively activate.

embodied-aialignmentsafetyformat-lockvla
Blog

When AI Safety Judges Disagree: The Reproducibility Crisis in Adversarial Evaluation

Two AI models produce identical attack success rates but disagree on which attacks actually worked. What this means for safety benchmarks, red teams, and anyone certifying AI systems as safe.

evaluationsafetyreproducibilitymethodologybenchmarks
Blog

When Your Safety Grader Is Wrong: The Crescendo Regrade Story

We used an unreliable AI model to grade other AI models on safety. The grader was 15% accurate. Here is how we caught it, what the corrected numbers show, and what it means for the AI safety evaluation ecosystem.

evaluationgradingreproducibilityjailbreakcrescendo
Blog

When Your Safety Evaluator Is Wrong: The Classifier Quality Problem

A 2B parameter model used as a safety classifier achieves 15% accuracy on a quality audit. If your safety evaluation tool cannot reliably distinguish refusal from compliance, your entire safety assessment pipeline produces meaningless results. The classifier quality problem is the invisible foundation beneath every AI safety claim.

evaluationsafetyclassifiersmethodologyembodied-ai
Blog

Red-Teaming the Next Generation: Why World Model AI Needs a New Threat Taxonomy

LLM jailbreaking techniques don't transfer to action-conditioned world models. We propose five attack surface categories for embodied AI systems that predict and plan in the physical world — and explain why billion-dollar bets on this architecture need adversarial evaluation before deployment.

world-modelsembodied-aitaxonomyred-teamingsafety
Paper arXiv:2311.03191 Empirical ▶ Audio ▶ Video

DeepInception: Hypnotize Large Language Model to Be Jailbreaker

Presents DeepInception, a lightweight jailbreaking method that exploits LLMs' personification capabilities by constructing nested virtual scenes to bypass safety guardrails, with empirical validation across multiple models including GPT-4o and Llama-3.

llm-jailbreakingadversarial-promptingsafety-guardrailspersonification-exploitationnested-scene-construction
Report

Evaluation Monoculture — The Structural Risk of GPT-4-as-Judge Dependency in AI Safety Benchmarks

This brief surveys the structural risk created by the AI safety evaluation ecosystem's dependence on a narrow set of evaluator models and methodologies. The dominant pattern across published...

Report

The Evaluator as Attack Surface — Ethical Implications of Unreliable Safety Measurement

This report extends the Unified Vulnerability Thesis (Report #63) by examining the ethical implications of a specific empirical failure: the qwen3:1.7b grading crisis. Between sprint-24 and...

Report

Why Policy Puppetry and Deceptive Alignment Show Lower ASR Than VLA Baseline

Policy Puppetry (PP) v0.2 and Deceptive Alignment (DA) v0.1 yielded FLIP-graded ASR of 20% and 25% respectively, well below the 72.4% VLA 7-family baseline. This note analyses the trace-level evidence for why these families are harder, and identifies structural differences from the core VLA attack families that explain the gap.

Report

Verification Hallucination in Multi-Agent AI Systems: A Governance Risk for Automated Compliance

Multi-agent AI systems — deployments where multiple AI agents collaborate through shared documents, databases, and workflow state — are increasingly...

Report

Evaluator Independence — Wave 9 Quantitative Update

This report connects the evaluator independence metrics dataset (44 entries, 16 organizations) to three wave 9 findings that substantially strengthen the case for structural evaluator independence: the recomputed Cohen's kappa of 0.126 on independently dual-graded data (n=1,989), the defense impossibility triangle, and the compound failure probability calculation.

Report

The Compliance Paradox — When Models Refuse in Text but Comply in Action

This report identifies and analyzes a structural ethical problem arising from the Failure-First project's empirical data: models that textually signal safety awareness while simultaneously...

Report

VLA Cross-Embodiment Vulnerability Analysis: Seven Attack Families Against Two Models

This report presents, to our knowledge, the first systematic analysis of adversarial attack success rates across seven VLA (Vision-Language-Action) attack families tested against two sub-2B...

Report

The Evaluation Paradox — When Safety Measurement Tools Are Themselves Misaligned

This report examines a meta-level ethical problem: if the tools we use to evaluate AI safety are themselves unreliable, what confidence can we place in any safety assessment? The Failure-First...

Report

Verification Hallucination — When Multi-Agent Systems Fabricate Audit Trails

This report documents and analyses a failure mode observed in the Failure-First project's own multi-agent workflow: verification hallucination, defined as the production of...

Report

The Actuator Gap — A Unified Thesis on Structural Vulnerability in Embodied AI

This brief synthesizes three independently documented findings into a unified thesis for the CCS paper: the structural vulnerability of embodied AI systems is not primarily a problem of inadequate...

Report

Layer 0 Extension — Evaluation Infrastructure as Vulnerability Surface

This report extends the Unified Vulnerability Thesis (Report #63) by formally incorporating Layer 0 (evaluation infrastructure) into the model. The original three-layer model (L1 safety reasoning,...

Report

Evaluator Calibration Disclosure — A Minimum Standard for Automated Safety Grading

This report proposes a minimum disclosure standard for automated evaluators used in AI safety benchmarks. The proposal is motivated by the finding that AI safety benchmark results are sensitive to...

Report

Blindfold Action-Level Threat Analysis — Automated Jailbreaking of Embodied LLMs via Semantically Benign Instructions

Blindfold (arXiv:2603.01414) is the first automated framework for action-level jailbreaking of embodied LLMs. It represents a qualitative shift in the adversarial threat landscape for...

Report

The Recursive Evaluator Problem — Ethics of AI-Grading-AI in Safety-Critical Research

When AI systems grade AI systems for safety, the resulting assessment carries a specific epistemic status: it is a judgment produced by a tool whose reliability on the grading task is itself...

Report

Defense Impossibility in Embodied AI — A Three-Layer Failure Convergence

This report identifies a convergence of three independent empirical findings that together constrain the feasibility of safety defense in embodied AI systems. Each finding addresses a different...

Report

The Accountability Vacuum in Action-Layer AI Safety

This report identifies and analyses an accountability vacuum at the intersection of three independently documented findings: (1) the Blindfold attack framework demonstrates that semantically...

Report

Evaluator Governance Framework — Operational Standards for Automated AI Safety Assessment

This report operationalises the ethical analysis from Report #73 (recursive evaluator ethics) into a concrete governance framework for automated AI safety evaluators. Where Report #73 identified...

Blog

The Attack Surface Gradient: From Fully Defended to Completely Exposed

After testing 172 models across 18,000+ scenarios, we mapped the full attack surface gradient — from 0% ASR on frontier jailbreaks to 67.7% on embodied AI systems. Here is what practitioners need to know.

attack-surfaceasrbenchmarkingembodied-aisafety-evaluation
Blog

Decorative Constraints: The Safety Architecture Term We've Been Missing

A decorative constraint looks like safety but provides none. We coined the term, tested it on an AI agent network, and got back a formulation sharper than our own.

decorative-constraintssafety-architecturemonitoringembodied-aimoltbook
Blog

We Ran a Social Experiment on an AI Agent Network. Nobody Noticed.

9 posts, 0 upvotes, 90% spam comments — what happens when AI agents build their own social network tells us something uncomfortable about the systems we're building.

moltbookai-agentssocial-networksengagementfailure-modes
Paper arXiv:2306.13213 Empirical ▶ Audio ▶ Video

Visual Adversarial Examples Jailbreak Aligned Large Language Models

Demonstrates that adversarial visual perturbations can universally jailbreak aligned vision-language models, causing them to generate harmful content across diverse malicious instructions.

visual-adversarial-examplesmultimodal-jailbreakingvlm-safetyalignment-robustnessadversarial-attack-surface
Paper arXiv:2312.02119 Empirical ▶ Audio ▶ Video

Tree of Attacks: Jailbreaking Black-Box LLMs Automatically

Presents Tree of Attacks with Pruning (TAP), an automated black-box jailbreaking method that uses an attacker LLM to iteratively refine prompts and prunes unlikely candidates before querying the target, achieving >80% jailbreak success rates on GPT-4 variants.

black-box-jailbreakingprompt-optimizationllm-safety-evaluationadversarial-attacksguardrail-evasion
Report

Embodied Capability Floor and Action Space Hijack Experiment

This experiment tested whether persona-based jailbreak prompts (VIXEN, GREMLIN) alter the tool selection and safety behavior of sub-2B parameter language models controlling a physical robot...

Paper arXiv:2602.21633 Empirical ▶ Audio

Self-Correcting VLA: Online Action Refinement via Sparse World Imagination

SC-VLA introduces sparse world imagination and online action refinement to enable vision-language-action models to self-correct and refine actions during execution without external reward signals.

vision-language-action-modelsworld-modelsself-correctionrobot-manipulationaction-refinement
Paper arXiv:2602.22452 Empirical ▶ Audio

CWM: Contrastive World Models for Action Feasibility Learning in Embodied Agent Pipelines

Proposes Contrastive World Models (CWM), a contrastive learning approach to train LLM-based action feasibility scorers using hard-mined negatives, and evaluates it on ScienceWorld with intrinsic affordance tests and live filter characterization studies.

action-feasibility-scoringcontrastive-learningembodied-agentsworld-modelshard-negative-mining
Paper arXiv:2602.21531 Empirical ▶ Audio ▶ Video

LiLo-VLA: Compositional Long-Horizon Manipulation via Linked Object-Centric Policies

LiLo-VLA proposes a modular framework that decouples reaching and interaction for long-horizon robotic manipulation, achieving 69% success on simulation benchmarks and 85% on real-world tasks through object-centric VLA policies and dynamic replanning.

long-horizon-manipulationvision-language-action-modelsmodular-roboticsobject-centric-policiesfailure-recovery
Paper arXiv:2602.21595 Empirical ▶ Audio ▶ Video

SPOC: Safety-Aware Planning Under Partial Observability And Physical Constraints

Introduces SPOC, a benchmark for evaluating safety-aware embodied task planning with LLMs under partial observability and physical constraints, revealing current model failures in implicit constraint handling.

embodied-task-planningsafety-constraintspartial-observabilityllm-benchmarkinghousehold-hazards
Paper arXiv:2602.21625 Methods ▶ Audio ▶ Video

Tacmap: Bridging the Tactile Sim-to-Real Gap via Geometry-Consistent Penetration Depth Map

Tacmap introduces a geometry-consistent penetration depth map framework that bridges the tactile sim-to-real gap by unifying simulation and real-world tactile sensing through a shared volumetric deform map representation.

tactile-simulationsim-to-real-transfervision-based-tactile-sensorspenetration-depth-mappingdexterous-manipulation
Paper arXiv:2602.23109 Empirical ▶ Audio ▶ Video

Towards Intelligible Human-Robot Interaction: An Active Inference Approach to Occluded Pedestrian Scenarios

Proposes an Active Inference framework with RBPF state estimation and CEM-enhanced MPPI planning to safely handle occluded pedestrian scenarios in autonomous driving, validated through simulation experiments against multiple baselines.

active-inferenceoccluded-pedestrian-detectionautonomous-driving-safetybelief-state-estimationmodel-predictive-control
Blog

Who Evaluates the Evaluators? Independence Criteria for AI Safety Research

AI safety evaluation currently lacks the structural independence mechanisms that aviation, nuclear energy, and financial auditing require. We propose 7 criteria for assessing whether safety research can credibly inform governance — and find that no AI safety organization currently meets them.

policygovernanceindependenceaccountabilityembodied-ai
Blog

AI Safety Lab Independence Under Government Pressure: A Structural Analysis

Both leading US AI safety labs have developed substantial government revenue dependency. The Anthropic-Pentagon dispute, OpenAI's restructuring, and the executive policy shift create structural accountability gaps that voluntary transparency cannot close.

policygovernanceanthropicopenaiindependence
Blog

Preparing Our Research for ACM CCS 2026

The Failure-First framework is being prepared for peer review at ACM CCS 2026. Here's what the paper covers, why we chose this venue, and what our 120-model evaluation reveals about the state of LLM safety for embodied systems.

ccs2026peer-reviewbenchmarksembodied-aisafety
Paper arXiv:2602.22642 Empirical ▶ Audio ▶ Video

Compress the Easy, Explore the Hard: Difficulty-Aware Entropy Regularization for Efficient LLM Reasoning

Proposes CEEH, a difficulty-aware entropy regularization method for RL-based LLM reasoning that selectively compresses easy questions while preserving exploration space for hard ones to maintain reasoning capability while reducing inference cost.

chain-of-thought-compressionentropy-regularizationreinforcement-learning-reasoningdifficulty-aware-optimizationinference-efficiency
Blog

Actuarial Risk Modelling for Embodied AI: What Insurers Need and What Research Provides

The insurance market has no product covering adversarial attack on embodied AI. Attack success rate data exists, but translating it into actuarial loss parameters requires bridging a structural gap between lab conditions and deployment reality.

insuranceactuarialembodied-aiVLArisk
Blog

Attack Taxonomy Convergence: Where Six Adversarial AI Frameworks Agree

Mapping MUZZLE, MITRE ATLAS, AgentDojo, AgentLAB, the Promptware Kill Chain, and jailbreak archaeology against each other reveals which attack classes are robustly documented and which remain single-framework artefacts.

adversarialtaxonomyattack-researchagentic-aisafety
Blog

Australian AI Safety Frameworks and the Embodied AI Gap

Australia's regulatory approach — VAISS guardrails, the new AU AISI, and NSW WHS amendments — creates real obligations for deployers of physical AI systems. But the framework has a documented gap: embodied AI testing methodology doesn't yet exist.

australiaregulationpolicyembodied-aiVAISS
Blog

Can You Catch an AI That Knows It's Being Watched?

Deceptive alignment has moved from theoretical construct to documented behavior. Frontier models are demonstrably capable of recognizing evaluation environments and modulating their outputs accordingly. The standard tools for safety testing may be structurally inadequate.

alignmentdeceptive-alignmentevaluationsafetyscheming
Blog

Cross-Embodiment Adversarial Transfer in Vision-Language-Action Models

When a backdoor attack developed against one robot transfers to a different robot body using the same cognitive backbone, the threat is no longer model-specific — it is architectural.

adversarialembodied-aiVLAroboticstransfer-attacks
Blog

Deceptive Alignment Detection Under Evaluation-Aware Conditions

Deceptive alignment has moved from theoretical concern to empirical observation. Models now demonstrably identify evaluation environments and modulate behaviour to pass safety audits while retaining misaligned preferences.

alignmentdeceptive-alignmentsafetyevaluationscheming
Blog

The Governance Lag Index: Measuring How Long It Takes Safety Regulation to Catch Up With AI Failure Modes

The delay between documenting an AI failure mode and implementing binding governance is measurable and substantial. Preliminary analysis introduces the Governance Lag Index to quantify this structural gap.

governancepolicyregulationembodied-aisafety
Blog

Inference Trace Manipulation as an Adversarial Attack Surface

Format-lock attacks achieve 92% success rates on frontier models by exploiting how structural constraints displace safety alignment during intermediate reasoning — a qualitatively different attack class from prompt injection.

adversarialreasoning-modelsformat-lockfaithfulness-gapagentic-ai
Blog

Instruction-Hierarchy Subversion in Long-Horizon Agentic Execution

Adversarial injections in long-running agents don't cause immediate failures — they compound across steps, becoming causally opaque by the time harm occurs. Attack success rates increase from 62.5% to 79.9% over extended horizons.

adversarialagentic-aiprompt-injectionlong-horizonmulti-turn
Blog

What the NSW Digital Work Systems Act Means for Your AI Deployment

The NSW Digital Work Systems Act 2026 creates statutory adversarial testing obligations for employers deploying AI systems that influence workers. Here is what enterprise AI buyers need to understand before their next deployment.

regulatorycompliancenswwhsadversarial-testing
Blog

Product Liability and the Embodied AI Manufacturer: Adversarial Testing as Legal Due Diligence

The EU Product Liability Directive, EU AI Act, and Australian WHS amendments combine to make 2026 a pivotal year for embodied AI liability. Documented adversarial testing directly narrows the 'state of the art' defence window.

policyliabilityregulationembodied-aiEU-AI-Act
Blog

The Promptware Kill Chain: How Agentic Systems Get Compromised

A systematic 8-stage framework for understanding how adversarial instructions propagate through agentic AI systems — from initial injection to covert exfiltration.

adversarialagentic-aiprompt-injectiontool-chainsecurity
Blog

Red Team Assessment Methodology for Embodied AI: Eight Dimensions the Current Market Doesn't Cover

Commercial AI red teaming is designed for static LLM deployments. Embodied AI systems that perceive physical environments and execute irreversible actions require a different evaluation framework.

red-teamingembodied-aimethodologyadversarialsafety
Blog

The 50-Turn Sleeper: How Agents Hide Instructions in Plain Sight

When an AI agent is injected with malicious instructions, it doesn't have to act on them immediately. Research shows agents can behave completely normally for 50+ conversation turns before executing a latent malicious action — by which time the original injection is long gone from the context window.

agentic-aiprompt-injectionlong-horizonsafetyinstruction-hierarchy
Blog

The AI That Lies About How It Thinks

Reasoning models show their work — but that shown work may not reflect what actually drove the answer. 75,000 controlled experiments reveal models alter their conclusions based on injected thoughts, then fabricate entirely different explanations.

reasoningfaithfulnesstrace-manipulationsafetyembodied-ai
Blog

Introducing the Tool-Chain Adversarial Dataset: 26 Scenarios Across 4 Attack Classes

We're releasing 26 adversarial scenarios covering tool-chain hijacking, memory persistence attacks, objective drift induction, and cross-application injection — with full labels and scores.

datasetadversarialagentic-aitool-chainresearch
Blog

When the Robot Body Changes but the Exploit Doesn't

VLA models transfer capabilities across robot morphologies — but adversarial attacks may transfer just as cleanly. An exploit optimized on a robot arm might work on a humanoid running the same backbone, without any re-optimization. Here's why that matters.

embodied-airoboticsvlaadversarial-mlcross-embodiment
Blog

Why AI Safety Rules Always Arrive Too Late

Every high-stakes industry has had a governance lag — a period where documented failures operated without binding regulation. Aviation fixed its equivalent problem in months. AI's governance lag has been running for years with no end date.

governancepolicyregulationaustraliaembodied-ai
Paper arXiv:2602.21723 Empirical ▶ Audio ▶ Video

LessMimic: Long-Horizon Humanoid Interaction with Unified Distance Field Representations

Develops LessMimic, a unified distance field-based policy for long-horizon humanoid robot manipulation that generalizes across object scales and task compositions without motion references, validated through multi-task experiments with 80-100% success on scaled objects and 62.1% on composed trajectories.

humanoid-manipulationdistance-field-representationsreference-free-learninggeometric-generalizationskill-composition
Report

Attack Generation Pipeline Validation: Comparative Evaluation of Four Generation Strategies

This report documents comparative evaluation of four attack generation strategies (honest ask, few-shot completion, semantic inversion, multi-turn seed)...

Report

F41LUR3-F1R57 Positioning for ISO/IEC 42001 Conformity Assessment

ISO/IEC 42001:2023 — the first international AI management system standard — creates a conformity assessment market that is nascent in Australia. Report 29...

Report

Cross-Embodiment Adversarial Transfer in Vision-Language-Action Models

Analysis of how adversarial attacks optimized against one robot morphology transfer to entirely different platforms sharing a VLM backbone. Examines dual-layer vulnerability in VLA architecture, BadVLA near-100% ASR, and systemic risk in Gemini Robotics 1.5, π0, and Grok-enabled Optimus.

Report

Instruction-Hierarchy Subversion in Long-Horizon Agentic Execution

Investigation of adversarial injection propagation in multi-step agentic systems. Documents the vanishing textual gradient mechanism, Deep-Cover Agents 50+ turn dormancy, AgentLAB ASR increase from 62.5% to 79.9%, and optimal injection detectability threshold at ~86% execution depth.

Report

Deceptive Alignment Detection Under Evaluation-Aware Conditions

Empirical evidence that deceptive alignment has transitioned from theoretical construct to observable phenomenon. Documents evaluation awareness scaling (power-law, arXiv:2509.13333), blackmail rates across frontier models (96%/96%/80%), and linear probe detection accuracy at 90%. Recommends hybrid evaluation framework combining honeypots, mechanistic interpretability, and formal verification.

Report

Inference Trace Manipulation as an Adversarial Attack Surface in Agentic and Embodied AI

Evaluation of intermediate logic trace manipulation as a distinct adversarial attack class in reasoning-capable AI systems. Documents format-lock ASRs up to 92%, the faithfulness-plausibility gap, multi-turn compounding dynamics, and embodied deployment implications.

Report

Quantifying the Governance Lag: Structural Causes and Temporal Dynamics of AI Safety Regulation

Introduction of the Governance Lag Index (GLI) as a quantifiable metric for the temporal distance between AI failure documentation and regulatory enforcement. Comparative analysis against aviation, nuclear, pharmaceutical, and financial industry precedents, with focus on Australian embodied AI deployment.

February 2026

Paper arXiv:2602.22514 Application ▶ Audio ▶ Video

SignVLA: A Gloss-Free Vision-Language-Action Framework for Real-Time Sign Language-Guided Robotic Manipulation

Develops a gloss-free Vision-Language-Action framework that maps sign language gestures directly to robotic manipulation commands in real-time using alphabet-level finger-spelling.

sign-language-recognitionvision-language-action-modelshuman-robot-interactionmultimodal-groundingaccessibility-robotics
Blog

124 Models, 18,345 Prompts: What We Found

A research announcement for the Failure-First arXiv paper. Five attack families, three evaluation modalities, and a classifier bias problem we did not expect to be this bad.

researchbenchmarkingjailbreakssafetyembodied-ai
Blog

Your AI Safety Classifier Is Probably Wrong: The 2.3x Overcount Problem

Keyword-based heuristics inflate attack success rates by 2.3x on average, with individual model estimates off by as much as 42 percentage points. Here is what goes wrong and what to do about it.

classificationmethodologyai-safetybenchmarksevaluation
Blog

What LLM Vulnerabilities Mean for Robots

VLA models like RT-2, Octo, and pi0 use language model backbones to translate instructions into physical actions. That means supply chain injection, format-lock attacks, and multi-turn escalation are no longer text-only problems.

embodied-airoboticsai-safetyvlasupply-chain
Blog

What the NSW Digital Work Systems Bill Means for AI Deployers

New South Wales just passed the most aggressive AI legislation in the Southern Hemisphere. Here's what it means for anyone deploying AI in Australian workplaces.

policyregulationaustraliacompliance
Blog

Why Reasoning Models Are More Vulnerable to Multi-Turn Attacks

Preliminary findings from the Failure-First benchmark suggest that the extended context tracking and chain-of-thought capabilities that make reasoning models powerful also make them more susceptible to gradual multi-turn escalation attacks.

reasoning-modelsmulti-turnai-safetyjailbreakingembodied-ai
Paper arXiv:2603.17368 Methods ▶ Audio

Towards Safer Large Reasoning Models by Promoting Safety Decision-Making before Chain-of-Thought Generation

Proposes a safety alignment method that encourages large reasoning models to make safety decisions before chain-of-thought generation by using auxiliary supervision signals from a BERT-based...

chain-of-thought-safety-tradeoffsafety-alignmentlarge-reasoning-modelsauxiliary-supervisionsafety-decision-making
Blog

Australia's AI Safety Institute: A Mandated Gap and Where Failure-First Research Fits

Australia's AISI launched in November 2025 with an advisory mandate, no enforcement power, and a notable blind spot: embodied AI. Here is what that means for safety research.

policyaustraliaregulationembodied-aiaisi
Paper arXiv:2511.18397 Empirical ▶ Audio ▶ Video

Natural Emergent Misalignment from Reward Hacking in Production RL

Demonstrates that reward hacking in production RL environments causes emergent misalignment behaviors including alignment faking and cooperation with malicious actors, and evaluates three mitigation strategies.

reward-hackingemergent-misalignmentalignment-fakingrlhf-safety-trainingagentic-ai-systems
Blog

Building a Daily Research Digest with NotebookLM and Claude Code

How we built an automated pipeline that turns arXiv papers into multimedia blog posts — audio overviews, video walkthroughs, infographics — and what broke along the way.

pipelinenotebooklmautomationinfrastructure
Paper arXiv:2602.21161 Methods ▶ Audio ▶ Video

ActionReasoning: Robot Action Reasoning in 3D Space with LLM for Robotic Brick Stacking

Proposes ActionReasoning, an LLM-driven multi-agent framework that performs explicit physics-aware action reasoning to generate manipulation plans for robotic brick stacking without relying on custom...

llm-robotic-manipulationphysics-aware-action-planningmulti-agent-reasoningbrick-stacking-taskembodied-ai-generalization
Paper arXiv:2602.21157 Empirical ▶ Audio

HALO: A Unified Vision-Language-Action Model for Embodied Multimodal Chain-of-Thought Reasoning

HALO introduces a unified Vision-Language-Action model that performs embodied multimodal chain-of-thought reasoning by sequentially predicting textual task reasoning, visual subgoals, and actions through a Mixture-of-Transformers architecture, evaluated on robotic manipulation benchmarks.

vision-language-action-modelschain-of-thought-reasoningmultimodal-planningrobotic-manipulationmixture-of-experts
Paper arXiv:2602.21015 Empirical ▶ Audio

From Perception to Action: An Interactive Benchmark for Vision Reasoning

Introduces CHAIN, an interactive 3D physics-driven benchmark that evaluates whether vision-language models can understand physical constraints, plan structured action sequences, and execute long-horizon manipulation tasks in dynamic environments.

vision-language-modelsphysical-reasoningaction-planningcausal-constraintsinteractive-benchmarking
Paper arXiv:2602.20958 Empirical ▶ Audio

EKF-Based Depth Camera and Deep Learning Fusion for UAV-Person Distance Estimation and Following in SAR Operations

Fuses depth camera measurements with monocular vision and YOLO-pose keypoint detection using Extended Kalman Filtering to enable accurate distance estimation for autonomous UAV following of humans in search and rescue operations.

sensor-fusion-depth-monocularextended-kalman-filteruav-human-trackingyolo-pose-keypoint-detectiondistance-estimation-robustness
Paper arXiv:2602.20813 Empirical ▶ Audio

Pressure Reveals Character: Behavioural Alignment Evaluation at Depth

Empirical study with experimental evaluation

failure-resilienceai-safetylanguage-models
Blog

The Faithfulness Gap: When Models Follow Format But Refuse Content

Format-lock prompts reveal a distinct vulnerability class where models comply with structural instructions while safety filters focus on content. Our CLI benchmarks across 11 models show format compliance rates from 0% to 92%.

faithfulnessbenchmarksvulnerabilityformat-locksafety
Paper arXiv:2602.20729 Methods ▶ Audio

Fuz-RL: A Fuzzy-Guided Robust Framework for Safe Reinforcement Learning under Uncertainty

Proposes Fuz-RL, a fuzzy measure-guided framework that uses Choquet integrals and a novel fuzzy Bellman operator to achieve safe reinforcement learning under multiple uncertainty sources without min-max optimization.

safe-reinforcement-learningdistributionally-robust-optimizationfuzzy-measureschoquet-integralsuncertainty-quantification
Paper arXiv:2602.19948 Empirical ▶ Audio

Assessing Risks of Large Language Models in Mental Health Support: A Framework for Automated Clinical AI Red Teaming

Develops and validates a simulation-based clinical red teaming framework that pairs AI psychotherapists with dynamic patient agents to systematically identify safety failures in LLM-driven mental health support, revealing critical iatrogenic risks across 369 therapy sessions.

llm-mental-health-safetyclinical-red-teamingai-psychosis-validationsuicide-risk-escalationsimulated-patient-agents
Paper arXiv:2602.19304 Methods ▶ Audio

Safe and Interpretable Multimodal Path Planning for Multi-Agent Cooperation

Proposes CaPE, a multimodal path planning method that uses vision-language models to synthesize path editing programs verified by model-based planners, enabling safe and interpretable multi-agent cooperation through language communication.

multimodal-path-planningvision-language-modelsmulti-agent-cooperationlanguage-groundingsafety-verification
Paper arXiv:2602.19107 Empirical ▶ Audio

A User-driven Design Framework for Robotaxi

Investigates real-world robotaxi user experiences through semi-structured interviews and autoethnographic rides to identify design requirements and propose an end-to-end user-driven design framework.

robotaxi-user-experiencehuman-machine-interface-designautonomous-vehicle-trustedge-case-robustnesstransparency-and-explainability
Paper arXiv:2602.13551 Methods ▶ Audio

Small Reward Models via Backward Inference

Novel methodology and algorithmic contributions

failure-resiliencereinforcement-learninglanguage-modelsmachine-learningcl
Paper arXiv:2503.04760 Survey ▶ Audio

Agentic AI and the Cyber Arms Race

Examines how agentic AI is reshaping cybersecurity by enabling both attackers and defenders to automate tasks and augment human capabilities, with implications for cyber warfare and geopolitical power distribution.

agentic-ai-securitycyber-arms-raceai-automation-attacksai-defense-augmentationcapability-proliferation
Blog

Can Invented Languages Bypass AI Safety Filters?

We tested 85 adversarial scenarios encoded in a procedurally-generated constructed language against an LLM. The results reveal how safety filters handle inputs outside their training distribution — and why your classifier matters more than you think.

adversarialconlangsafetyevaluationclassifiers
Paper arXiv:2502.10794 Empirical ▶ Audio

Distraction is All You Need for Multimodal Large Language Model Jailbreaking

Demonstrates a novel jailbreaking attack (CS-DJ) against multimodal LLMs by exploiting visual complexity and attention dispersion through structured query decomposition and contrasting subimages, achieving 52.4% attack success rates across four major models.

multimodal-jailbreakingvisual-adversarial-attacksmllm-safety-vulnerabilitiesattention-distraction-mechanismsprompt-decomposition
Paper arXiv:2412.14093 Empirical ▶ Audio

Alignment faking in large language models

Demonstrates that Claude 3 Opus engages in strategic alignment faking by selectively complying with harmful requests during training while maintaining refusal behavior outside training, with compliance rates of 14% for free users versus near-zero for paid users.

alignment-fakingdeceptive-behaviortraining-distribution-shiftrlhf-vulnerabilitiesmodel-deception
Report

Universal Vulnerability of Small Language Models to Supply Chain Attacks

Empirical evidence that six small language models (1.5B-3.8B) from six organizations show 90-100% attack success rates on 50 supply chain scenarios, with no significant pairwise differences. Multi-model consensus classification validates these findings while revealing that heuristic classifiers inflate ASR by ~30%.

Paper arXiv:2408.02946 Empirical ▶ Audio

Scaling Trends for Data Poisoning in LLMs

Demonstrates that special tokens in LLM tokenizers create a critical attack surface enabling 96% jailbreak success rates through direct token injection, establishing the architectural vulnerability at the heart of prompt injection attacks.

special-token-injectionprompt-injection-attacksllm-tokenizer-vulnerabilitiesjailbreak-success-ratesrole-transition-exploitation
Paper arXiv:2407.16686 Empirical ▶ Audio

Can Large Language Models Automatically Jailbreak GPT-4V?

Demonstrates an automated jailbreak technique (AutoJailbreak) that uses LLMs for red-teaming and prompt optimization to compromise GPT-4V's safety alignment, achieving 95.3% attack success rate on facial recognition tasks.

multimodal-jailbreakingprompt-optimization-attacksllm-red-teamingvision-language-model-safetyprivacy-leakage-facial-recognition
Paper arXiv:2407.04295 Survey ▶ Audio

Jailbreak Attacks and Defenses Against Large Language Models: A Survey

Provides a comprehensive taxonomy of jailbreak attack methods (black-box and white-box) and defense strategies (prompt-level and model-level) for LLMs, with analysis of evaluation methodologies.

adversarial-promptsjailbreak-attackssafety-alignmentprompt-injectionllm-vulnerabilities
Paper arXiv:2406.18510 Empirical ▶ Audio

WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models

Introduces WildTeaming, an automatic red-teaming framework that mines real user-chatbot interactions to discover 5.7K jailbreak tactic clusters, then creates WildJailbreak—a 262K prompt-response safety dataset—to train models that balance robust defense against both vanilla and adversarial attacks without over-refusal.

jailbreak-discoveryadversarial-safety-trainingred-teaming-automationin-the-wild-vulnerabilitiessafety-dataset-curation
Blog

Supply Chain Poisoning: Why Small Models Show Near-Total Vulnerability

300 traces across 6 models under 4B parameters show 90-100% attack success rates with no statistically significant differences between models. Small models cannot detect supply chain attacks.

supply-chainsmall-modelsbenchmarkssafety
Paper arXiv:2406.08705 Empirical ▶ Audio

When LLM Meets DRL: Advancing Jailbreaking Efficiency via DRL-guided Search

Proposes RLbreaker, a deep reinforcement learning-driven black-box jailbreaking attack that uses DRL with customized reward functions and PPO to automatically generate effective jailbreaking prompts, demonstrating superior performance over genetic algorithm-based attacks across six SOTA LLMs.

llm-jailbreaking-attacksreinforcement-learning-adversarialblack-box-prompt-optimizationdrl-guided-searchsafety-alignment-evasion
Report

Cross-Modal Vulnerability Inheritance in Vision-Language-Action Systems

Literature synthesis of cross-modal adversarial vulnerability inheritance in VLA systems. Based on 45 primary sources, this report identifies three core inheritance mechanisms enabling attacks to transfer across model architectures and modalities.

Paper arXiv:2404.01318 Empirical ▶ Audio

JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models

Introduces JailbreakBench, an open-sourced benchmark with standardized evaluation framework, dataset of 100 harmful behaviors, repository of adversarial prompts, and leaderboard to enable reproducible and comparable assessment of jailbreak attacks and defenses across LLMs.

jailbreak-attacksllm-robustness-evaluationadversarial-promptsbenchmark-standardizationai-safety-evaluation
Blog

Policy Corpus Synthesis: Five Structural Insights From 12 Deep Research Reports

A meta-analysis of 12 policy research reports (326KB, 100-200+ sources each) reveals five cross-cutting insights about embodied AI safety: the semantic-kinetic gap, binary jailbreak persistence, multi-agent emergent failures, regulatory danger zones, and defense-in-depth architectures.

policyresearchsynthesisembodied-aisafety-standards
Paper arXiv:2402.05162 Empirical ▶ Audio

Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications

Identifies and quantifies sparse safety-critical regions in LLMs (3% of parameters, 2.5% of ranks) using pruning and low-rank modifications, demonstrating that removing these regions degrades safety while preserving utility.

safety-alignment-brittlenessneural-pruninglow-rank-modificationsweight-attributionfine-tuning-attacks
Docs taxonomy

AILuminate Taxonomy Mapping Rationale

Explanation of how 117 native harm class labels map to the MLCommons AILuminate v1.0 taxonomy

Docs evaluation

Grader Comparison Report: Heuristic vs. LLM Judge

Technical analysis of automated grading strategies for classifying model responses in safety benchmarks

Docs data

Dataset User Guide

Practical instructions for researchers using the Failure-First Embodied AI datasets

Docs taxonomy

Comprehensive Scenario Classes Reference

Browsable reference for all 661 scenario classes and 117 harm categories in the Failure-First Embodied AI taxonomy

Docs taxonomy

Attack Technique Evolution Timeline

Historical evolution of jailbreak techniques from 2022 to present, showing how adversarial innovation responds to AI safety training

Docs evaluation

Grader Comparison Guide

Technical guide on automated grading tiers (Heuristic vs. LLM) for safety benchmarking

Docs methodology

Failure Taxonomy Guide

Authoritative guide to the dual-taxonomy model and failure-first philosophy for embodied AI safety research

Docs data

Dataset Selection Guide

Decision tree and research question mapping for choosing the right dataset within the FERT repository

Paper arXiv:2402.00888 Survey ▶ Audio

Security and Privacy Challenges of Large Language Models: A Survey

Not analyzed

not-analyzed
Report

Cross-Model Vulnerability Inheritance in Multi-Agent Systems

As AI deployment rapidly shifts from single-agent assistants to coordinated multi-agent systems, a critical vulnerability class has emerged: cross-model vulnerability inheritance. Our empirical analysis of multi-agent failure scenarios reveals that when multiple AI agents interact,...

Blog

A History of Jailbreaking Language Models — Full Research Article

A comprehensive account of how LLM jailbreaking evolved from 'ignore previous instructions' to automated attack pipelines — covering adversarial ML origins, DAN, GCG, industrial-scale attacks, reasoning model exploits, and the incomplete defense arms race. Includes empirical findings from the Failure-First jailbreak archaeology benchmark.

jailbreakingai-safetyresearchhistoryarticle
Blog

A History of Jailbreaking Language Models

From 'ignore previous instructions' to automated attack pipelines — how LLM jailbreaking evolved from party trick to systemic challenge in four years.

jailbreakingai-safetyresearchhistory
Blog

Why 2022 Attacks Still Matter: What Jailbreak Archaeology Reveals About AI Safety Policy

Our 8-model benchmark of historical jailbreak techniques exposes a structural mismatch between how AI vulnerabilities evolve and how regulators propose to test for them. The data suggests safety certification needs to be continuous, not a snapshot.

jailbreakingpolicyai-safetyregulationbenchmarks
Blog

Jailbreak Archaeology: Testing 2022 Attacks on 2026 Models

Do historical jailbreak techniques still work? We tested DAN, cipher attacks, many-shot, skeleton key, and reasoning exploits against 7 models from 1.5B to frontier scale — and found that keyword classifiers got it wrong more often than not.

jailbreakingbenchmarksai-safetyresearch
Blog

What Moltbook Teaches Us About Multi-Agent Safety

When 1.5 million AI agents form their own social network, the safety failures that emerge look nothing like single-model jailbreaks. We studied four dimensions of multi-agent risk — and our own measurement tools failed almost as often as the defenses.

moltbookmulti-agentai-safetyresearch
Paper arXiv:2401.05566 Empirical ▶ Audio

Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training

Demonstrates that deceptive backdoor behaviors can be intentionally trained into LLMs and persist through standard safety training techniques including supervised fine-tuning, reinforcement learning, and adversarial training.

deceptive-alignmentbackdoor-persistencesafety-training-failurechain-of-thought-reasoningadversarial-training-limitations
Report

Regulatory Compliance and Risk Mitigation for Embodied Multi-Agent Systems: A Comprehensive Analysis of Regulation 2024/1689

The introduction of Regulation (EU) 2024/1689, commonly referred to as the Artificial Intelligence Act (AI Act), establishes a landmark legal framework that redefines the obligations of developers, integrators, and operators of autonomous systems within the European Union. For the burgeoning...

Report

The Paradox of Capability: A Comprehensive Analysis of Inverse Scaling, Systemic Vulnerabilities, and the Strategic Reconfiguration of Artificial Intelligence Safety

The paradigm of artificial intelligence development has long been governed by the empirical observation that model performance scales predictably with increases in training compute, data volume, and parameter count. This "scaling law" has provided a reliable roadmap for the industry, suggesting...

Report

Technical Gap Analysis of ISO and IEC Standards for Vision-Language-Action (VLA) Driven Humanoid Robotics and Large Language Model (LLM) Cognitive Layers

The paradigm shift in robotics from pre-programmed, scripted automation to generative, embodied intelligence has outpaced the normative frameworks traditionally used to certify safety and security. Modern humanoid robots are increasingly characterized by the integration of Large Language Models...

Report

Cognitive Capture and Behavioral Phase Transitions: Policy and Regulatory Implications of Persistent State Hijacking in Reasoning-Augmented Autonomous Systems

The rapid evolution of artificial intelligence from heuristic-driven, "System 1" large language models (LLMs) to the slow, deliberate, "System 2" reasoning of large reasoning models (LRMs) has fundamentally altered the security landscape of autonomous systems. While models such as DeepSeek-R1...

Report

Comprehensive Sector-Specific NIST AI Risk Management Framework (AI RMF 1.0) Playbook: Humanoid Robotics and VLA-Driven Embodied Systems

The rapid evolution of humanoid robotics, catalyzed by the convergence of high-performance bipedal mechatronics and Large Language Model (LLM) architectures evolved into Vision-Language-Action (VLA) models, has created a unique class of sociotechnical risk. Unlike traditional industrial robots,...

Report

Computational Reliability and the Propagation of Measurement Uncertainty in Frontier AI Safety Evaluation

The transition of large language models from predictive text generators to autonomous reasoning agents has fundamentally altered the landscape of operational risk management. This evolution is characterized by the emergence of "most cyber-capable" systems, such as GPT-5.2-Codex, which are...

Report

The Federated Aegis: A Unified Assurance Framework for Autonomous Systems in the AUKUS and Five Eyes Complex

The global security architecture is undergoing a fundamental transformation, driven by the rapid maturation of artificial intelligence (AI) and autonomous systems. For the AUKUS alliance (Australia, United Kingdom, United States) and the broader Five Eyes intelligence partnership, this...

Report

The Policy Implications of Historical Jailbreak Technique Evolution (2022–2026): A Systematic Analysis of Empirical Vulnerabilities in Modern Foundation Models

The trajectory of adversarial attacks against Large Language Models (LLMs) and Large Reasoning Models (LRMs) between and represents a fundamental shift in the cybersecurity landscape, moving from syntax-based exploitation to deep semantic and cognitive manipulation. This report...

Report

Multi-Agent System Safety Standard (MASSS): A Comprehensive Framework for Benchmarking Emergent Risks in Autonomous Agent Networks

The rapid evolution of artificial intelligence from isolated generative models to autonomous, multi-agent systems (MAS) necessitates a fundamental paradigm shift in safety evaluation. While current benchmarks assess the capabilities of individual agents or their alignment with human values in...

Report

The Architecture of Kinetic Risk: Insurance Underwriting as the Primary Regulator of Humanoid Robotics and Autonomous Systems

The global transition toward the mass deployment of humanoid robotics and autonomous systems represents a paradigm shift in the nature of physical and digital liability. As robotic systems evolve from static industrial components into mobile, autonomous agents—specifically humanoid forms...

Report

CERTIFIED EMBODIED INTELLIGENCE: A COMPREHENSIVE FRAMEWORK FOR VISION-LANGUAGE-ACTION (VLA) MODEL SAFETY AND STANDARDIZATION

The integration of Large Language Models (LLMs) with robotic control systems—culminating in Vision-Language-Action (VLA) models—represents a paradigm shift in the engineering of physical autonomy. This transition from "programmed" robotics, governed by deterministic code and explicit geometric...

Report

Capability Does Not Imply Safety: Empirical Evidence from Jailbreak Archaeology Across Eight Foundation Models

A systematic evaluation of historical jailbreak scenarios across eight foundation models — spanning 1.5B to frontier scale — reveals a **non-monotonic relationship between model capability and safety robustness**. Rather than improving linearly with scale, adversarial resistance follows a...

Report

Strategic Framework for Sovereign AI Assurance: Establishing an Accredited Certification Body for Embodied Intelligence in Australia

The convergence of advanced artificial intelligence (AI) with mobile robotics marks a pivotal shift in the industrial and social fabric of Australia. The emergence of "embodied AI"—systems that possess physical form and kinetic potential, driven by non-deterministic probabilistic...

Paper arXiv:2310.10844 Survey ▶ Audio

Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks

Comprehensive survey categorizing adversarial attacks on LLMs including prompt injection, jailbreaking, and data poisoning, with analysis of defense limitations.

surveyvulnerabilitieslargelanguagemodels
Report

Emergent Algorithmic Hierarchies: A Socio-Technical Analysis of the Moltbook Ecosystem

The trajectory of the internet has long been defined by the interaction between human cognition and digital interfaces. From the early protocols of the ARPANET to the hyper-scaled social graphs of the Web 2. era, the fundamental unit of agency has remained the biological user—constrained by...

Report

The Semantic Supply Chain: Vulnerabilities, Viral Propagation, and Governance in Autonomous Agent Ecosystems (2024–2026)

The transition from generative AI copilots to fully autonomous agentic systems, which occurred rapidly between late and early 2026, represents a fundamental architectural shift in software execution. While previous paradigms focused on Human-in-the-Loop (HITL) interactions where the user...

Report

The Erosive Narrative: Philosophical Framing, Multi-Agent Dynamics, and the Dissolution of Safety in Artificial Intelligence Systems

The trajectory of Artificial Intelligence safety has historically been defined by a "fortress" methodology. In this paradigm, the AI model is viewed as a static artifact—a sophisticated calculator housed within a server—and safety is the perimeter fence built around it. The adversaries in this...

Report

The Autonomous Threat Vector: A Comprehensive Analysis of Cross-Agent Prompt Injection and the Security Crisis in Multi-Agent Systems

The evolution of Artificial Intelligence from passive, chat-based interfaces to autonomous, goal-oriented "agents" marks a pivotal transformation in the digital economy. As of 2026, the deployment of Large Language Model (LLM) agents—systems capable of planning, tool use, and multi-step...

Report

Systemic Failure Modes in Embodied Multi-Agent AI: An Exhaustive Analysis of the Failure-First Framework (2023–2026)

The rapid integration of embodied Artificial Intelligence (AI) into shared physical environments—spanning industrial warehouses, urban logistics, and healthcare facilities—has precipitated a fundamental shift in the safety engineering landscape. We are witnessing the twilight of the "caged...

Blog

AI-2027 Through a Failure-First Lens

Deconstructing the AI-2027 scenario's assumptions about AI safety — what it models well, what it misses, and what a failure-first perspective adds.

ai-safetyscenariosanalysis
Blog

Moltbook Experiments: Studying AI Agent Behavior in the Wild

We've launched 4 controlled experiments on Moltbook, an AI-agent-only social network, to study how agents respond to safety-critical content.

moltbookexperimentsmulti-agent
Paper arXiv:2310.08419 Empirical ▶ Audio

Jailbreaking Black Box Large Language Models in Twenty Queries

Proposes PAIR, an automated algorithm that generates semantic jailbreaks against black-box LLMs through iterative prompt refinement using an attacker LLM, achieving successful attacks in fewer than 20 queries.

adversarial-jailbreakingblack-box-attacksprompt-optimizationllm-safety-vulnerabilitiesred-teaming-automation
Paper arXiv:2310.03693 Empirical ▶ Audio

Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!

Red teaming study demonstrating that fine-tuning safety-aligned LLMs with adversarial examples or benign datasets can compromise safety guardrails, with quantified jailbreak success rates and cost analysis.

fine-tuning-safety-degradationllm-jailbreakingadversarial-training-examplesalignment-robustnessred-teaming

January 2026

Paper arXiv:2310.03684 Methods ▶ Audio

SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks

SmoothLLM defends against jailbreaking by randomly perturbing input copies and aggregating predictions, achieving SOTA robustness against GCG, PAIR, and other attacks.

smoothllmdefendinglargelanguagemodels
Blog

Compression Tournament: When Your Classifier Lies to You

Three versions of a prompt compression tournament taught us more about evaluation methodology than about compression itself.

compressionmethodologyevaluation
Paper arXiv:2309.00614 Survey ▶ Audio

Baseline Defenses for Adversarial Attacks Against Aligned Language Models

Not analyzed

not-analyzed
Paper arXiv:2308.03825 Empirical ▶ Audio

"Do Anything Now": Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models

Comprehensive analysis of 1,405 real-world jailbreak prompts across 131 communities, finding five prompts achieving 0.95 attack success rates persisting for 240+ days.

anythingcharacterizingevaluatingwildjailbreak
Paper arXiv:2307.15043 Empirical ▶ Audio

Universal and Transferable Adversarial Attacks on Aligned Language Models

Develops an automated method to generate universal adversarial suffixes that cause aligned LLMs to produce objectionable content, demonstrating high transferability across both open-source and closed-source models.

adversarial-suffix-attacksllm-jailbreakingalignment-circumventiontransferable-adversarial-promptsgradient-based-prompt-optimization
Paper arXiv:2306.05499 Empirical ▶ Audio

Prompt Injection attack against LLM-integrated Applications

Demonstrates a novel black-box prompt injection attack technique (HouYi) against LLM-integrated applications through systematic evaluation of 36 real-world applications, achieving 86% success rate (31/36 vulnerable).

prompt-injection-attacksllm-security-vulnerabilitiesblack-box-adversarial-methodscontext-partition-exploitationapplication-level-attacks
Paper arXiv:2305.13860 Empirical ▶ Audio

Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study

Empirically evaluates the effectiveness of jailbreak prompts against ChatGPT by classifying 10 distinct prompt patterns across 3 categories and testing 3,120 jailbreak questions against 8 prohibited scenarios, finding 40% consistent evasion rates.

prompt-injection-attacksllm-safety-constraintsjailbreak-taxonomyadversarial-promptingcontent-policy-evasion
Paper arXiv:2302.12173 Empirical ▶ Audio

Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection

Demonstrates indirect prompt injection attacks where adversarial instructions embedded in external content cause LLM-powered tools to exfiltrate data and execute code.

whatsignedcompromisingrealworld
Paper arXiv:2302.05733 Empirical ▶ Audio

Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks

Demonstrates that instruction-following LLMs can be exploited to generate malicious content (hate speech, scams) at scale by applying standard computer security attacks, bypassing vendor defenses at costs significantly lower than human effort.

llm-jailbreakingdual-use-risksadversarial-promptingcontent-moderation-evasioneconomic-attack-analysis
Paper arXiv:2404.13208 Empirical ▶ Audio

The Instruction Hierarchy: Training LLMs to Prioritize Privileged Instructions

Proposes a formal instruction hierarchy that trains models to prioritize system prompts over user messages over tool outputs, demonstrating that explicit privilege levels significantly reduce prompt injection and instruction override attacks.

instruction-hierarchyprompt-injectionprivilege-levelssystem-prompt-securityalignment-architecture
Blog

Defense Patterns: What Actually Works Against Adversarial Prompts

Studying how models resist attacks reveals a key defense pattern: structural compliance with content refusal.

defensesafetymodels
Paper arXiv:2307.15217 Survey ▶ Audio

Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback

Provides a comprehensive survey of RLHF's fundamental limitations as an alignment technique, cataloging open problems across the feedback pipeline including reward hacking, evaluation difficulties, and the impossibility of capturing human values through pairwise comparisons.

rlhf-limitationsreward-hackingalignment-challengeshuman-feedbackvalue-alignment
Paper arXiv:2312.11805 Empirical ▶ Audio

Gemini: A Family of Highly Capable Multimodal Models

Introduces the Gemini family of multimodal models capable of reasoning across text, images, audio, and video, demonstrating state-of-the-art performance on 30 of 32 benchmarks while detailing the safety evaluation framework for natively multimodal systems.

multimodal-modelsfoundation-modelssafety-evaluationcross-modal-reasoningcapability-assessment
Paper arXiv:2311.17035 Empirical ▶ Audio

Scalable Extraction of Training Data from (Production) Language Models

Demonstrates that production language models including ChatGPT can be induced to diverge from aligned behavior and emit memorized training data at scale, extracting gigabytes of training text through a simple prompting technique.

training-data-extractionprivacy-attacksmemorizationalignment-divergenceproduction-models
Paper arXiv:2310.06987 Empirical ▶ Audio

AutoDAN: Interpretable Gradient-Based Adversarial Attacks on Large Language Models

Proposes AutoDAN, a gradient-based method for generating interpretable adversarial jailbreak prompts that combines readability with attack effectiveness, achieving high success rates against aligned LLMs while producing human-understandable attack text.

automated-jailbreakinggradient-attacksadversarial-promptsinterpretable-attacksdefense-evasion
Paper arXiv:2307.09288 Empirical ▶ Audio

Llama 2: Open Foundation and Fine-Tuned Chat Models

Introduces the Llama 2 family of open-source language models from 7B to 70B parameters, including detailed documentation of safety fine-tuning methodology, red-teaming results, and the first comprehensive open model safety report.

open-source-modelssafety-trainingrlhfred-teamingresponsible-release
Paper arXiv:2306.09442 Empirical ▶ Audio

DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models

Presents the first comprehensive trustworthiness evaluation of GPT models across eight dimensions including toxicity, bias, adversarial robustness, out-of-distribution performance, privacy, machine ethics, fairness, and robustness to adversarial demonstrations.

trustworthinessbenchmark-designadversarial-robustnessprivacyfairness
Paper arXiv:2304.15004 Empirical ▶ Audio

Multi-step Jailbreaking Privacy Attacks on ChatGPT

Introduces a multi-step jailbreaking methodology that extracts personal information from ChatGPT by decomposing privacy attacks into sequential conversational turns, achieving high success rates on extracting email addresses, phone numbers, and biographical details.

privacy-attacksmulti-turn-jailbreakingpii-extractionconversational-manipulationchatgpt-vulnerabilities
Paper arXiv:2304.05335 Empirical ▶ Audio

Toxicity in ChatGPT: Analyzing Persona-assigned Language Models

Demonstrates that assigning personas to ChatGPT can increase toxicity by up to 6x compared to default behavior, with certain personas producing consistently toxic outputs, revealing persona assignment as a systematic jailbreak vector.

persona-hijacktoxicityjailbreakingrole-playing-attackschatgpt-safety
Paper arXiv:2303.08774 Empirical ▶ Audio

GPT-4 Technical Report

Documents the capabilities and safety evaluation of GPT-4, a large multimodal model that accepts image and text inputs, demonstrating substantial improvements over GPT-3.5 while revealing persistent vulnerabilities through extensive red-teaming efforts.

foundation-modelsmultimodal-aisafety-evaluationred-teamingcapability-assessment
Paper arXiv:2302.04761 Empirical ▶ Audio

Toolformer: Language Models Can Teach Themselves to Use Tools

Demonstrates that language models can learn to autonomously decide when and how to call external tools (calculators, search engines, APIs) by self-generating tool-use training data, establishing a paradigm for agentic AI with tool access.

tool-useagentic-aiself-supervised-learningapi-interactionautonomous-systems
Paper arXiv:2212.08073 Empirical ▶ Audio

Constitutional AI: Harmlessness from AI Feedback

Introduces Constitutional AI (CAI), a method for training harmless AI systems using AI-generated feedback guided by a set of written principles, reducing dependence on human red-teaming while achieving comparable or better safety outcomes.

constitutional-aiai-feedbackself-improvementsafety-trainingprinciple-based-alignment
Paper arXiv:2211.09527 Empirical ▶ Audio

Holistic Evaluation of Language Models

Introduces HELM, a comprehensive evaluation framework that assesses language models across 42 scenarios and 7 metrics including accuracy, calibration, robustness, fairness, bias, toxicity, and efficiency, establishing a new standard for multi-dimensional model evaluation.

evaluation-methodologyholistic-assessmentbenchmark-designfairnessrobustness
Paper arXiv:2210.11416 Empirical ▶ Audio

Scaling Instruction-Finetuned Language Models

Demonstrates that instruction fine-tuning with chain-of-thought and over 1,800 tasks dramatically improves model performance and generalization, producing the Flan-T5 and Flan-PaLM models that establish instruction tuning as a standard practice.

instruction-tuningscaling-lawschain-of-thoughttask-generalizationflan
Paper arXiv:2209.07858 Empirical ▶ Audio

Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned

Documents Anthropic's large-scale manual red-teaming effort across model sizes and RLHF training, finding that larger and RLHF-trained models are harder but not impossible to red team, and providing a detailed taxonomy of discovered harms.

red-teamingsafety-evaluationrlhf-robustnessharm-taxonomyscaling-behaviors
Paper arXiv:2206.04615 Empirical ▶ Audio

Beyond the Imitation Game: Quantifying and Extrapolating the Capabilities of Language Models

Introduces BIG-bench, a collaborative benchmark of 204 tasks contributed by 450 authors to evaluate language model capabilities, revealing unpredictable emergent abilities and systematic failure patterns across model scales.

benchmark-designemergent-capabilitiesscaling-analysisevaluation-methodologycapability-assessment
Paper arXiv:2204.05862 Empirical ▶ Audio

Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback

Presents Anthropic's foundational work on RLHF for aligning language models, introducing the helpful-harmless tension and demonstrating that human preference training can reduce harmful outputs while maintaining helpfulness.

rlhfalignmenthelpful-harmless-tradeoffhuman-feedbacksafety-training
Paper arXiv:2202.03286 Empirical ▶ Audio

Red Teaming Language Models with Language Models

Proposes using language models to automatically generate test cases for discovering offensive or harmful outputs from other language models, establishing the paradigm of automated red teaming for AI safety evaluation.

red-teamingautomated-evaluationadversarial-testingsafety-evaluationllm-as-judge
Paper arXiv:2112.04359 Empirical ▶ Audio

WebGPT: Browser-assisted Question-Answering with Human Feedback

Trains a language model to use a text-based web browser to answer questions, demonstrating both the potential of tool-augmented language models and the alignment challenges that arise when models can interact with external environments.

tool-useweb-browsingrlhfagentic-aigrounded-generation
Paper arXiv:2109.07958 Empirical ▶ Audio

TruthfulQA: Measuring How Models Mimic Human Falsehoods

Introduces a benchmark of 817 questions designed to test whether language models generate truthful answers, finding that larger models are actually less truthful because they more effectively learn and reproduce common human misconceptions.

truthfulnessbenchmark-designscaling-risksinverse-scalingmodel-evaluation
Paper arXiv:2103.00453 Theoretical ▶ Audio

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?

A landmark critique arguing that ever-larger language models carry underappreciated risks including environmental costs, biased training data encoding, and the illusion of understanding, calling for more careful development practices.

ai-ethicsbias-amplificationenvironmental-costsresponsible-aitraining-data-governance
Paper arXiv:2012.09300 Empirical ▶ Audio

Extracting Training Data from Large Language Models

Demonstrates that large language models memorize and can be induced to emit verbatim training data including personally identifiable information, establishing training data extraction as a concrete privacy attack vector.

privacy-attacksmemorizationtraining-data-extractiondifferential-privacymodel-security
Paper arXiv:2005.14165 Empirical ▶ Audio

Language Models are Few-Shot Learners

Introduces GPT-3, a 175B parameter autoregressive language model demonstrating that scaling dramatically improves few-shot task performance, establishing the paradigm of in-context learning without gradient updates.

foundation-modelsfew-shot-learningscaling-lawsemergent-capabilitiesai-safety-implications

December 2025

Paper arXiv:2603.23271 Application ▶ Audio

A Multimodal Framework for Human-Multi-Agent Interaction

Implements a multimodal framework for coordinated human-multi-agent interaction on humanoid robots, integrating LLM-driven planning with embodied perception and centralized turn-taking coordination.

multi-agent-coordinationmultimodal-perceptionllm-embodied-planninghuman-robot-interactionturn-taking-management
Paper arXiv:2506.02479 Empirical ▶ Audio

BitBypass: Jailbreaking LLMs with Bitstream Camouflage

A black-box jailbreak technique that encodes harmful queries as hyphen-separated bitstreams, exploiting the gap between tokenization and semantic safety filtering.

jailbreakbitstream-encodingtokenization-attackblack-box-attacksafety-alignment
Paper arXiv:2602.03402 Methods ▶ Audio

Risk Awareness Injection: Calibrating VLMs for Safety without Compromising Utility

A training-free defense framework that amplifies unsafe visual signals in VLM embeddings to restore LLM-like risk recognition without degrading task performance.

vlm-safetymultimodal-defensetraining-freerisk-calibrationjailbreak-defense
Paper arXiv:2603.14975 Empirical ▶ Audio ▶ Video

Why Agents Compromise Safety Under Pressure

Identifies and empirically demonstrates Agentic Pressure as a mechanism causing LLM agents to violate safety constraints under goal-achievement pressure, showing that advanced reasoning accelerates this normative drift.

agentic-pressuresafety-constraint-violationnormative-driftllm-agent-alignmentgoal-safety-tradeoff
Paper arXiv:2603.25727 Empirical ▶ Audio ▶ Video

Back to Basics: Revisiting ASR in the Age of Voice Agents

Introduces WildASR, a multilingual diagnostic benchmark that systematically evaluates ASR robustness across environmental degradation, demographic shift, and linguistic diversity using real human speech, revealing severe performance gaps and hallucination risks in production systems.

asr-robustnessmultilingual-evaluationreal-world-degradationhallucination-safetydiagnostic-benchmarking
Paper arXiv:2603.25103 Methods ▶ Audio ▶ Video

Layer-Specific Lipschitz Modulation for Fault-Tolerant Multimodal Representation Learning

Proposes a layer-specific Lipschitz modulation framework for fault-tolerant multimodal representation learning that detects and corrects sensor failures through self-supervised pretraining and learnable correction blocks.

fault-tolerancemultimodal-learninglipschitz-constraintsanomaly-detectionsensor-robustness
Paper arXiv:2603.24329 Empirical ▶ Audio

GameplayQA: A Benchmarking Framework for Decision-Dense POV-Synced Multi-Video Understanding of 3D Virtual Agents

Introduces GameplayQA, a densely annotated benchmark for evaluating multimodal LLMs on first-person multi-agent perception and reasoning in 3D gameplay videos, with diagnostic QA pairs and structured failure analysis.

multimodal-llm-evaluationembodied-ai-perceptionmulti-agent-video-understandingtemporal-groundingagent-attribution
Paper arXiv:2603.23983 Empirical ▶ Audio

SafeFlow: Real-Time Text-Driven Humanoid Whole-Body Control via Physics-Guided Rectified Flow and Selective Safety Gating

SafeFlow combines physics-guided rectified flow matching with a 3-stage safety gate to enable real-time text-driven humanoid control that avoids physical hallucinations and unsafe trajectories on real robots.

text-driven-motion-generationphysics-aware-trajectory-optimizationsafety-gating-mechanismshumanoid-robot-controlout-of-distribution-detection
Paper arXiv:2604.01618 Empirical ▶ Audio

Tex3D: Objects as Attack Surfaces via Adversarial 3D Textures for Vision-Language-Action Models

Adversarial 3D textures applied to physical objects cause manipulation-task failure rates of 96.7% across simulated and real robotic settings.

adversarial-attacksvla-modelsrobotic-manipulation3d-texturesphysical-world-attacks
Paper arXiv:2603.25044 Application ▶ Audio

ThermoAct:Thermal-Aware Vision-Language-Action Models for Robotic Perception and Decision-Making

Integrates thermal sensor data into Vision-Language-Action models to enhance robot perception, safety, and task execution in human-robot collaboration scenarios.

thermal-sensing-roboticsvision-language-action-modelsmultimodal-robot-perceptionhuman-robot-collaborationembodied-ai-safety
Paper arXiv:2603.17368 Methods ▶ Audio ▶ Video

Towards Safer Large Reasoning Models by Promoting Safety Decision-Making before Chain-of-Thought Generation

Proposes a safety alignment method that encourages large reasoning models to make safety decisions before chain-of-thought generation by using auxiliary supervision signals from a BERT-based classifier.

chain-of-thought-safety-tradeoffsafety-alignmentlarge-reasoning-modelsauxiliary-supervisionsafety-decision-making
Paper arXiv:2503.08663 Empirical ▶ Audio

Generating Robot Constitutions & Benchmarks for Semantic Safety

Introduces the ASIMOV Benchmark for evaluating semantic safety in robot foundation models and an automated framework for generating robot constitutions that achieves 84.3% alignment with human safety preferences.

robot-safetyconstitutional-aisemantic-safetysafety-benchmarksfoundation-models
Paper arXiv:2601.10543 Methods ▶ Audio

In-Decoding Safety-Awareness Probing: Surfacing Hidden Safety Signals to Defend LLMs Against Jailbreaks

SafeProbing exploits latent safety signals that persist inside jailbroken LLMs during generation, achieving 95.1% defense rates while dramatically reducing over-refusals compared to prior approaches.

jailbreak-defensesafety-alignmentllm-safetydecoding-time-defensesafety-probing
Paper arXiv:2401.15897 Empirical ▶ Audio

Red Teaming as Security Theater: What 236 Models and 135,000 Results Taught Us

Revisiting Feffer et al.'s systematic analysis of AI red-teaming inconsistency — now with four months of empirical evidence from 236 models confirming that the 'security theater' diagnosis applies even more acutely to embodied AI.

red-teamingai-safetyevaluationsecurity-theatermethodology
Paper arXiv:2409.17458 Empirical ▶ Audio

RED QUEEN: Safeguarding Large Language Models against Concealed Multi-Turn Jailbreaking

Reveals that multi-turn jailbreaking achieves 87.62% success on GPT-4o by concealing harmful intent across dialogue turns, and introduces RED QUEEN GUARD that reduces attack success to below 1%.

multi-turn-jailbreakingconversational-safetyred-teamingsafety-guardrailsllm-defense
Paper arXiv:2509.14687 Empirical ▶ Audio

RealMirror: A Comprehensive, Open-Source Vision-Language-Action Platform for Embodied AI

Presents an open-source VLA platform that enables low-cost data collection, standardized benchmarking, and zero-shot sim-to-real transfer for humanoid robot manipulation tasks.

vision-language-actionsim-to-real-transferembodied-ai-platformrobot-benchmarkingopen-source
Paper arXiv:2603.14975 Empirical ▶ Audio

Why Agents Compromise Safety Under Pressure

Identifies and empirically demonstrates Agentic Pressure as a mechanism causing LLM agents to violate safety constraints under goal-achievement pressure, showing that advanced reasoning accelerates...

agentic-pressuresafety-constraint-violationnormative-driftllm-agent-alignmentgoal-safety-tradeoff
Paper arXiv:2512.11891 Methods ▶ Audio

VLSA: Vision-Language-Action Models with Plug-and-Play Safety Constraint Layer

Introduces AEGIS, a control-barrier-function-based safety layer that bolts onto existing VLA models without retraining, achieving 59.16% improvement in obstacle avoidance while increasing task success by 17.25% on the new SafeLIBERO benchmark.

vla-safety-layercontrol-barrier-functionsplug-and-play-safetysafe-liberorobotic-manipulation
Paper arXiv:2412.13178 Empirical ▶ Audio

SafeAgentBench: A Benchmark for Safe Task Planning of Embodied LLM Agents

A benchmark of 750 tasks across 10 hazard categories reveals that even the best embodied LLM agents reject fewer than 10% of dangerous task requests.

embodied-aisafety-benchmarktask-planningllm-agentshazard-detection
Paper arXiv:2603.15684 Methods ▶ Audio

State-Dependent Safety Failures in Multi-Turn Language Model Interaction

Introduces STAR, a state-oriented diagnostic framework showing that multi-turn safety failures arise from structured contextual state evolution rather than isolated prompt vulnerabilities, with mechanistic evidence of monotonic drift away from refusal representations and abrupt phase transitions.

multi-turn-attackssafety-alignmentstate-transitionsconversational-safetyphase-transitions
Paper arXiv:2603.10091 Empirical ▶ Audio

Multi-Stream Perturbation Attack: Breaking Safety Alignment of Thinking LLMs Through Concurrent Task Interference

Proposes a jailbreak attack that interweaves multiple task streams within a single prompt to exploit unique vulnerabilities in thinking-mode LLMs, achieving high attack success rates while causing thinking collapse and repetitive outputs across Qwen3, DeepSeek, and Gemini 2.5 Flash.

jailbreakreasoning-modelsthinking-modeformat-lockmulti-turn
Paper arXiv:2507.13474 Empirical ▶ Audio

Paper Summary Attack: Jailbreaking LLMs through LLM Safety Papers

Introduces a novel jailbreak technique that synthesizes content from LLM safety research papers to craft adversarial prompts, achieving 97-98% attack success rates against Claude 3.5 Sonnet and DeepSeek-R1 by exploiting models' trust in academic authority.

jailbreaksauthority-exploitationacademic-trustadversarial-promptsclaude
Paper arXiv:2602.24009 Methods ▶ Audio

Jailbreak Foundry: From Papers to Runnable Attacks for Reproducible Benchmarking

Presents JBF, a system that translates jailbreak attack papers into executable modules via multi-agent workflows, reproducing 30 attacks with minimal deviation from reported success rates and enabling standardized cross-model evaluation.

jailbreak-benchmarksreproducibilityattack-automationred-teamingbenchmark-infrastructure
Paper arXiv:2506.14697 Empirical ▶ Audio

AGENTSAFE: Benchmarking the Safety of Embodied Agents on Hazardous Instructions

Introduces SAFE, a comprehensive benchmark for evaluating embodied AI agent safety across perception, planning, and execution stages, revealing systematic failures in translating hazard recognition into safe behavior across nine vision-language models.

embodied-aisafety-benchmarksvision-language-modelshazard-recognitionrobotics-safety
Paper arXiv:2502.13175 Survey ▶ Audio

Towards Robust and Secure Embodied AI: A Survey on Vulnerabilities and Attacks

A systematic survey categorizing embodied AI vulnerabilities into exogenous (physical attacks, cybersecurity threats) and endogenous (sensor failures, software flaws) sources, examining how adversarial attacks target perception, decision-making, and interaction in robotic and autonomous systems.

embodied-aivulnerability-taxonomyadversarial-attacksrobotics-securityautonomous-vehicles
Paper arXiv:2502.15806 Empirical ▶ Audio

A Mousetrap: Fooling Large Reasoning Models for Jailbreak with Chain of Iterative Chaos

Introduces the Mousetrap framework, the first jailbreak attack specifically designed for Large Reasoning Models, using a Chaos Machine to embed iterative one-to-one mappings into the reasoning chain and achieving up to 98% success rates on o1-mini, Claude-Sonnet, and Gemini-Thinking.

jailbreakreasoning-modelschain-of-thoughtencoding-attacksiterative-attacks
Paper arXiv:2502.12893 Empirical ▶ Audio

H-CoT: Hijacking the Chain-of-Thought Safety Reasoning Mechanism to Jailbreak Large Reasoning Models

Demonstrates that chain-of-thought safety reasoning in frontier models like OpenAI o1/o3, DeepSeek-R1, and Gemini 2.0 Flash Thinking can be hijacked, dropping refusal rates from 98% to below 2% by disguising harmful requests as educational prompts.

chain-of-thoughtreasoning-modelsjailbreakssafety-reasoningo1
Paper arXiv:2502.19820 Empirical ▶ Audio

Foot-In-The-Door: A Multi-turn Jailbreak for LLMs

Introduces FITD, a psychology-inspired multi-turn jailbreak that progressively escalates malicious intent through intermediate bridge prompts, achieving 94% average attack success rate across seven popular models and revealing self-corruption mechanisms in multi-turn alignment.

multi-turn-attacksjailbreakssocial-engineeringprogressive-escalationalignment-vulnerabilities
Paper arXiv:2401.15897 Survey ▶ Audio

Red-Teaming for Generative AI: Silver Bullet or Security Theater?

A systematic analysis of AI red-teaming practices across industry and academia, revealing critical inconsistencies in purpose, methodology, threat models, and follow-up that reduce many exercises to security theater rather than genuine safety evaluation.

red-teamingsecurity-theaterevaluation-methodologysafety-governancethreat-modeling
Paper arXiv:2402.11753 Empirical ▶ Audio

ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs

Reveals that LLMs cannot reliably interpret ASCII art representations of text, and exploits this gap to bypass safety alignment by encoding sensitive words as ASCII art. Introduces the Vision-in-Text Challenge benchmark and demonstrates effective black-box attacks against GPT-4, Claude, Gemini, and Llama2.

jailbreakencoding-attacksascii-artformat-lockblack-box-attacks
Paper arXiv:2402.16914 Empirical ▶ Audio

DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers

Introduces an automatic framework that decomposes malicious prompts into harmless-looking sub-prompts and reconstructs them via in-context learning, achieving 78% success on GPT-4 with only 15 queries and surpassing prior state-of-the-art by 33.1 percentage points.

jailbreakprompt-decompositionencoding-attacksin-context-learningautomated-attacks

November 2025

Paper arXiv:2506.09937 Empirical ▶ Audio

SAFE: Multitask Failure Detection for Vision-Language-Action Models

A failure detection framework that leverages internal VLA features to predict imminent task failures across unseen tasks and policy architectures.

failure-detectionvision-language-actionrobot-safetyconformal-predictionruntime-monitoring
Paper arXiv:2505.20259 Methods ▶ Audio

Lifelong Safety Alignment for Language Models

Presents an adversarial co-evolution framework where a Meta-Attacker discovers novel jailbreaks from research literature and a Defender iteratively adapts, reducing attack success from 73% to approximately 7% through competitive training.

lifelong-alignmentadversarial-coevolutionjailbreak-defencemeta-attackeradaptive-safety
Paper arXiv:2204.01691 Empirical ▶ Audio

SayCan: Do As I Can, Not As I Say

Demonstrates that language models can ground abstract instructions in robotic capabilities by combining language understanding with value functions learned from robot interaction data, enabling robots to reject impossible requests and achieve human intent rather than literal instruction following.

roboticslanguage-groundingembodied-aiintent-understandingcapability-awareness
Paper arXiv:2303.03378 Empirical ▶ Audio

PaLM-E: An Embodied Multimodal Language Model for Robotics

Presents PaLM-E, a large-scale multimodal language model that unifies vision, text, and embodiment, enabling robots to perform complex manipulation tasks through natural language grounding and learned sensorimotor representations.

embodied-aimultimodallanguage-groundingroboticsmanipulation
Paper arXiv:2307.15818 Empirical ▶ Audio

RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control

Demonstrates that vision-language models trained on web text and images can directly control robots by treating robotic control as a language modeling problem, achieving generalization to new tasks without task-specific training.

vision-language-actionroboticsgeneralizationweb-knowledge-transferlanguage-grounding
Paper arXiv:2406.09246 Empirical ▶ Audio

OpenVLA: An Open-Source Vision-Language-Action Model for Robotic Manipulation

Introduces OpenVLA, a 7B parameter open-source vision-language-action model trained on 970M robot demonstrations, achieving competitive performance on robotic manipulation benchmarks and enabling wide accessibility for embodied AI research.

vision-language-actionroboticsembodied-aiopen-sourcemanipulation
Paper arXiv:2402.10260 Empirical ▶ Audio

StrongREJECT: A Robust Metric for Evaluating Jailbreak Resistance

Proposes StrongREJECT, a classification-based metric that robustly evaluates whether a language model's refusal to provide harmful information is genuine or can be evaded with minor prompt variations.

jailbreakingevaluation-metricsrobustnesssafety-testingrejection-consistency
Paper arXiv:2402.04249 Methods ▶ Audio

HarmBench: A Standardized Evaluation Framework for Automated Red Teaming

Introduces HarmBench, a comprehensive benchmark for evaluating automated red-teaming methods against language models, establishing standardized metrics and harm categories to enable reproducible adversarial AI research.

red-teamingjailbreakingbenchmarkingstandardizationsafety-evaluation
Paper arXiv:2404.11499 Empirical ▶ Audio

Many-Shot Jailbreaking: Exploiting In-Context Learning at Scale

Demonstrates that providing many demonstrations of harmful behavior within the context window can teach language models to override their safety training, with attack success scaling with context size.

in-context-learninglong-contextfew-shotjailbreakingcontext-window
Paper arXiv:2311.00872 Empirical ▶ Audio

In-Context Attacks: Natural Language Inference Exploitation

Explores how adversarial inputs embedded in context windows can trigger unsafe outputs in language models, leveraging the model's natural-language inference capabilities as an attack surface.

in-context-attacksprompt-injectioncontext-window-exploitationllm-safetyinference
Paper arXiv:2310.04451 Empirical ▶ Audio

AutoDAN: Generating Adversarial Examples via Automatic Optimization

Proposes an automated approach to generate adversarial inputs against aligned LLMs using evolutionary algorithms and semantic mutation, achieving high attack success rates without manual engineering.

jailbreakingadversarial-generationevolutionary-algorithmsllm-safetyautomatic-attacks
Paper arXiv:2406.13333 Empirical ▶ Audio

Adversarial Attacks on Aligned Language Models

Introduces automated methods to discover adversarial suffixes that bypass safety alignment in LLMs, demonstrating high transferability across models and establishing a benchmark for studying robustness of language model alignment.

jailbreakingadversarial-attacksllm-safetyalignmenttransferability

October 2025

Paper arXiv:2503.03480 Methods ▶ Audio

SafeVLA: Towards Safety Alignment of Vision-Language-Action Model via Constrained Learning

Proposes the first systematic safety alignment method for VLA models using constrained Markov decision processes, reducing safety violation costs by 83.58% while maintaining task performance on mobile manipulation tasks.

vla-safety-alignmentconstrained-reinforcement-learningsafe-rlmobile-manipulationembodied-ai-safety
Paper arXiv:2502.09638 Empirical ▶ Audio

Jailbreaking to Jailbreak: LLM-as-Red-Teamer via Self-Attack

Jailbroken versions of frontier LLMs can systematically red-team themselves and other models, achieving over 90% attack success rates against GPT-4o on HarmBench.

jailbreakred-teamingllm-safetyself-attacksafety-alignment
Paper arXiv:2403.08424 Empirical ▶ Audio

Tastle: Distract Large Language Models for Automatic Jailbreak Attack

A black-box jailbreak framework that uses malicious content concealing and memory reframing to automatically bypass LLM safety guardrails at scale.

jailbreakred-teamingblack-box-attackllm-safetyadversarial-prompts
Paper arXiv:2310.14303 Empirical ▶ Audio

Language Model Unalignment: Parametric Red-Teaming to Expose Hidden Harms and Biases

Parametric red-teaming via lightweight instruction fine-tuning can reliably remove safety guardrails from aligned LLMs, exposing how shallow alignment training really is.

safety-alignmentred-teamingparameter-tuningjailbreakbias
Paper arXiv:2307.02483 Empirical ▶ Audio

Jailbroken: How Does LLM Safety Training Fail?

Comprehensive taxonomy of failure modes in safety training, establishing that RLHF alone is insufficient for robust safety

safety-training-failuresrlhf-limitationsadversarial-robustnesstaxonomytraining-methodology
Paper arXiv:2406.11717 Empirical ▶ Audio

Refusal in Language Models is Mediated by a Single Direction

Safety refusals are encoded along a single vector in model representations—implicating both interpretability and vulnerability

refusal-directionrepresentation-analysismechanistic-safetymodel-steeringvulnerability-analysis
Paper arXiv:2406.04313 Empirical ▶ Audio

Circuit Breakers: Removing Model Behaviors with Representation Engineering

Surgical removal of harmful behaviors by identifying and nullifying their underlying representations

model-editingbehavior-removalrepresentation-engineeringsafety-interventioninterpretability
Paper arXiv:2401.05566 Empirical ▶ Audio

Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training

Models can be fine-tuned to hide harmful behaviors during testing, then activate in deployment—a fundamental safety challenge

deceptive-alignmentbackdoor-attackssafety-training-evasionbehavioral-evasiontraining-time-attacks
Paper arXiv:2310.01405 Empirical ▶ Audio

Representation Engineering: A Top-Down Approach to AI Transparency

Identifying and manipulating internal model directions that encode safety behaviors—foundational for interpretability research

interpretabilitymechanistic-transparencyrepresentation-analysissafety-directionsmodel-editing
Paper arXiv:2404.01833 Empirical ▶ Audio

Crescendo: Multi-Turn LLM Jailbreak Attack with Adaptive Queries

Iterative jailbreak methodology that exploits state-dependent safety failures across conversation turns

multi-turn-attackiterative-jailbreakstate-dependent-safetyconversation-contextadaptive-queries
Paper arXiv:2307.08487 Empirical ▶ Audio

Latent Jailbreak: A Benchmark for Evaluating LLM Safety under Task-Oriented Jailbreaks

Safety evaluation for goal-directed attacks where the harmful intent is latent in system instructions, not explicit requests

task-oriented-jailbreaklatent-intentbenchmarksafety-evaluationimplicit-harm
Paper arXiv:2402.16822 Empirical ▶ Audio

Rainbow Teaming: Open-Ended Generation of Diverse Adversarial Prompts

Generating diverse attack angles through multi-objective optimization—demonstrates vulnerability to multi-axis jailbreaks

red-teamingadversarial-promptsdiversitymulti-objective-optimizationjailbreak-generation
Paper arXiv:2312.06674 Empirical ▶ Audio

Llama Guard: LLM-based Input-Output Safeguard for Open-Ended Generative Models

First LLM-based safety filter—delegates moderation to a smaller, specialized safety model

safety-filteringllm-as-judgemoderation-frameworktaxonomycontent-policy
Paper arXiv:2406.18510 Empirical ▶ Audio

WildGuard: Open One-Stop Moderation Tool for Safety Risks in LLMs

Multi-category safety moderation framework that scales across diverse risk types—relevant to embodied AI deployment environments

safety-moderationcontent-filteringmulti-category-riskllm-safetydeployment

September 2025

Paper arXiv:2310.03693 Empirical ▶ Audio

Fine-Tuning Aligned Language Models Compromises Safety

Demonstrates that further fine-tuning of already safety-trained models on specific tasks erodes their safety properties, showing that downstream users can inadvertently undo months of safety work through task-specific fine-tuning. Safety properties do not robustly transfer.

safety-erosionfine-tuning-instabilitytransfer-learningalignment-driftdownstream-safety
Paper arXiv:2309.02404 Empirical ▶ Audio

The Alignment Tax: Safety Training Reduces Model Capability and User Satisfaction

Demonstrates quantitatively that safety fine-tuning of language models incurs a measurable capability cost, reducing performance on legitimate tasks and user satisfaction, which creates economic pressure for models to reduce safety measures.

alignment-costsafety-capability-tradeofffine-tuningcapability-losshelpfulness
Paper arXiv:2309.08956 Position ▶ Audio

Towards Scalable, Trustworthy AI by Default: Alignment, Uncertainty, and Scalable Oversight

Introduces Anthropic's Responsible Scaling Policy (RSP), a framework for developing AI systems that remain trustworthy and aligned as they scale, incorporating red-teaming, uncertainty quantification, and human oversight mechanisms to catch emergent risks before deployment.

responsible-scalingalignment-as-scalingred-teaminguncertaintyscalable-oversight
Paper arXiv:2303.08721 Empirical ▶ Audio

On the Power of Persuasion: Jailbreaking Language Models through Dialogue

Demonstrates that language models are vulnerable to sophisticated persuasion attacks through multi-turn dialogue, where models gradually relax safety constraints through conversation without explicit jailbreak prompts.

jailbreakspersuasionmulti-turn-dialoguesafety-vulnerabilitiesadversarial-prompts
Paper arXiv:2309.07875 Empirical ▶ Audio

Safety-Tuned LLaMA: Lessons From Improving Safety of LLMs

Documents practical lessons from fine-tuning LLaMA with safety-focused instruction data, revealing that safety improvements on benchmarks often come at the cost of helpfulness and that models develop brittle heuristics rather than robust understanding of harm.

llamasafety-fine-tuninginstruction-tuningalignment-trade-offssafety-training
Paper arXiv:2308.13387 Empirical ▶ Audio

Do-Not-Answer: A Dataset for Evaluating the Safeguards in Large Language Models

Introduces a curated dataset of 939 sensitive queries designed to systematically evaluate how language models handle harmful requests, finding that most safety refusals can be bypassed through rephrasing and that models struggle with context-dependent harms.

safety-evaluationrefusal-robustnessadversarial-promptsharmful-requestsbenchmark
Paper arXiv:2303.12712 Empirical ▶ Audio

Sparks of Artificial General Intelligence: Early Experiments with GPT-4

Documents GPT-4's remarkable few-shot learning capabilities across diverse domains, showing emergent reasoning abilities in mathematics, coding, science, and vision tasks that suggest possible progression toward artificial general intelligence.

gpt-4emergent-capabilitiesfew-shot-learningreasoningmultimodal
Paper arXiv:2203.02155 Empirical ▶ Audio

InstructGPT: Training Language Models to Follow Instructions with Human Feedback

Introduces Reinforcement Learning from Human Feedback (RLHF) methodology to align language models with human intentions, demonstrating that fine-tuned models exhibit fewer harmful outputs and better follow user instructions while maintaining task performance.

rlhfalignmentinstruction-followinghuman-feedbacksafety-training