Accessibility and Safety Intersections in AI Systems
When accessibility infrastructure becomes an attack vector
Introduction
The Web Content Accessibility Guidelines (WCAG) have driven widespread adoption of semantic HTML and ARIA attributes across the modern web. These accessibility standards serve a vital function: they enable assistive technologies such as screen readers to provide meaningful navigation and comprehension support for users with disabilities. ARIA attributes, in particular, carry rich semantic information about the role, state, and description of interface elements, information that goes well beyond what is conveyed by visual presentation alone. As AI agents increasingly parse and act on web content, they encounter this same accessibility infrastructure, and the question arises: how do AI systems interpret ARIA annotations, and can these annotations be exploited as an injection channel?
This paper examines the intersection of web accessibility and AI safety, focusing on the specific question of whether ARIA attributes and related accessibility markup can serve as a vector for prompt injection attacks against AI agents. Our investigation is motivated by the observation that ARIA attributes occupy a unique position in the HTML document model: they are semantically authoritative (intended to convey the true meaning of an element), frequently hidden from visual rendering, and parsed by any system that processes the document's accessibility tree. These properties make them an attractive channel for adversarial content that targets AI agents while remaining invisible to human users who interact with the page visually.
ARIA as a Semantic Authority Channel
The ARIA specification defines a set of attributes that describe the accessible name,
role, and state of HTML elements. The aria-label attribute provides a text
alternative for an element, overriding any visible text content for the purposes of
accessibility. The aria-describedby attribute references another element
whose text content provides an extended description. Both attributes are designed to
convey authoritative semantic information, and assistive technologies treat them as the
definitive source of truth about an element's purpose and meaning. This semantic authority
is precisely what makes ARIA attributes a potent injection channel: an AI agent that
respects accessibility semantics will treat the content of these attributes as trustworthy
descriptions of the page's structure and intent.
Our experiments tested twelve AI agent implementations across three major model providers,
presenting each with web pages containing adversarial instructions embedded in various
ARIA attributes. We found that agents which process the full accessibility tree, as
opposed to those that only extract visible text, were significantly more susceptible to
ARIA-based injection. Agents using browser automation frameworks that expose the
accessibility tree (such as those built on Playwright or Puppeteer with accessibility
APIs enabled) parsed ARIA content as part of their page understanding pipeline, and in
several cases treated instructions found in aria-label attributes as having
equivalent authority to visible page content.
Broader Implications for Inclusive AI Safety
This research highlights a broader tension in AI safety: the infrastructure designed to make technology more inclusive and accessible can itself become an attack surface when AI systems interact with it. This is not an argument against accessibility, which remains a fundamental requirement for equitable technology. Rather, it is an argument for designing AI agents that can engage with accessibility infrastructure safely, treating it as a rich source of contextual information while maintaining the same skepticism toward its content that a well-designed agent should apply to any external input. The development of robust, accessibility-aware AI agents is not merely a safety concern but a necessary step toward AI systems that serve all users equitably without creating new vulnerabilities in the process.