When people hear that an organization is advocating for "AI access rights," a fair question follows: whose rights, to do what, exactly?

The answer matters. AI is being used for everything from generating essays to scraping websites to making hiring decisions. Not all of those uses deserve legal protection. Some of them deserve legal restriction.

AI Access Alliance exists to advocate for one specific use case — and to be precise about its boundaries. This article lays out exactly what we are asking for, what we are not asking for, and why the distinction matters.

What we are asking for

Our position has four parts. Each one is necessary. Together they define the scope of our advocacy.

1. User-directed AI comprehension tools should be recognized as auxiliary aids under existing disability law — the Americans with Disabilities Act, Section 504 of the Rehabilitation Act, and Section 1557 of the Affordable Care Act. These laws already require covered entities to provide "auxiliary aids and services" for effective communication. AI comprehension tools should be recognized as part of that category.

2. The user must have a disability that affects reading comprehension. We are talking about accommodations for people with specific disabilities — dyslexia, ADHD, intellectual disabilities, traumatic brain injuries, autism, and other conditions that substantially limit reading or understanding. This is not a general right for everyone to use AI on any text for any purpose.

3. The user must already be authorized to access the text. A patient reading their own discharge instructions. A parent reviewing their child's IEP. A benefits applicant filling out a Medicaid renewal. A tenant reading a lease. The AI tool helps them understand information they already have the right to read. It does not grant access to information they are not entitled to see.

4. The AI tool must be user-directed. This means a person opens a document, sends it to their comprehension tool, and asks the tool to explain it. The person is in control. The tool acts at their direction, on their behalf, for the purpose of understanding.

That is the full scope of what we are asking for. We believe this use case is already covered by existing law and that federal agencies should issue guidance saying so.

What we are not asking for

We are deliberate about what falls outside our advocacy. These boundaries are not a rhetorical concession — they are the foundation of our argument's strength.

We are not asking for scraping rights. Our argument does not create or support a right for AI systems to crawl websites, harvest data, or bypass access controls for non-disability purposes. A person using AI to understand their own medical records is fundamentally different from a company using AI to scrape copyrighted content. Our argument applies to the former, not the latter.

We are not asking for autonomous agent access. This is about user-directed comprehension tools — not AI bots that act on their own, navigate systems independently, or make decisions for people. The user reads the document. The user asks the AI to help explain it. The user acts on the explanation. The human stays in the loop.

We are not asking for universal AI access rights. Not all AI tools are auxiliary aids. A text generator is not an auxiliary aid. An AI essay writer is not an auxiliary aid. The argument applies specifically to tools used by people with disabilities for the purpose of comprehending text they are authorized to access.

We are not asking to override copyright or access controls. Where access controls serve purposes unrelated to communication — such as copyright protection or subscription paywalls for general content — our argument does not apply. We are focused on the specific scenario where a person has the right to read a document and needs AI to understand it.

We are not asking institutions to permit any particular AI product. The legal obligation is to ensure effective communication. How an institution meets that obligation involves the interactive accommodation process — the same process used for all disability accommodations today. An institution might permit a patient's own AI tool, provide an institutional AI comprehension service, or offer another accommodation that achieves the same result. The method is flexible. The obligation is not.

Why these boundaries matter

Some people will wonder why we draw these lines so carefully. There are two reasons — one legal, one strategic.

The legal reason: The ADA's auxiliary aids framework has always been bounded by the user's authorization and the communication's purpose. A blind person's screen reader processes text the user has navigated to. A sign language interpreter attends meetings a deaf person has the right to attend. These are user-directed, purpose-limited tools. AI comprehension tools fit within those same boundaries. If the argument is expanded beyond them, it loses its legal grounding.

The strategic reason: Some critics will try to characterize our advocacy as a front for the AI industry — a way to use disability rights as a pretext for expanding AI access to all content. This characterization is wrong, and the best way to prove that is to be explicit about what we do and do not advocate for.

The screen reader analogy is instructive here. A blind person's screen reader accesses the same HTML as a scraping bot. Nobody argues that protecting screen reader access requires allowing all web scraping. The use case is different. The purpose is different. The legal framework treats them differently. The same logic applies to AI comprehension tools.

What we want federal agencies to do

Our ask is directed at three federal agencies — the bodies responsible for enforcing disability-rights law in the United States:

Department of Justice, Civil Rights Division: Issue guidance confirming that AI comprehension tools may qualify as auxiliary aids under ADA Titles II and III. Clarify that blanket AI-blocking policies without exceptions for authorized accessibility tools may create barriers that violate effective-communication requirements.

HHS Office for Civil Rights: Issue guidance under Section 1557 confirming that healthcare entities' effective-communication obligations extend to ensuring compatibility with AI comprehension tools used as auxiliary aids. Clarify that anti-bot measures on patient portals should include exceptions for authorized accessibility tools.

Department of Education, Office for Civil Rights: Issue guidance under Section 504 confirming that educational institutions' AI-use policies must include exceptions for AI tools used as disability accommodations. Clarify that IEP and 504 plan teams should consider AI comprehension tools as potential assistive technology.

We are not asking any of these agencies to create new rules. We are asking them to state what the law already says — that when someone with a disability uses AI to understand text they are already authorized to read, that is an auxiliary aid, and blocking it without providing an equally effective alternative may constitute an accessibility barrier.

The principle behind the position

Underneath the legal language and policy recommendations, our position rests on a simple idea: people with disabilities have the right to understand information that affects their lives.

A patient has the right to understand her discharge instructions. A parent has the right to understand her child's IEP. A veteran has the right to understand his benefits letter. A tenant has the right to understand his lease.

If they need an AI tool to do that, and they are using it on documents they are already entitled to read, no institution should block that tool without providing an equally effective alternative.

That is what we are asking for. Nothing more. Nothing less.