In 1990, when Congress passed the Americans with Disabilities Act, the most advanced assistive technology for communication was the TTY device — a machine that let deaf people type messages over phone lines. By 2000, screen readers had become standard tools for blind computer users. By 2010, real-time captioning and video remote interpreting were routine accommodations.
None of these technologies existed when the ADA was written. All of them are now recognized as auxiliary aids — the legal term for tools and services that help people with disabilities communicate effectively. Congress planned for this. The law was deliberately designed to grow as technology grows.
Today, millions of people with cognitive, learning, and neurological disabilities use AI tools to understand written text. These tools summarize complex documents, explain medical jargon in plain language, and answer follow-up questions about what a paragraph means. For people with dyslexia, ADHD, intellectual disabilities, traumatic brain injuries, and autism, AI comprehension tools do something no previous technology could: they make written information understandable.
We believe these tools should be recognized as the next auxiliary aid under existing law. No new legislation is needed. The legal principle is already there. Only the technology is new.
What the law already says
The ADA requires covered entities — hospitals, schools, government agencies, employers — to provide "auxiliary aids and services" so people with disabilities can communicate effectively. The statute defines auxiliary aids with a deliberately open-ended list, ending with the phrase "other similar services and actions" (42 U.S.C. Section 12103(1)).
The Department of Justice has been explicit about this. Its own guidance states:
"The ADA does not provide an exhaustive list of the auxiliary aids that may be used... New devices are being invented and new technologies developed. What is important is effective communication."
Congress was just as clear. The House committee reports on the ADA stated that the types of aids listed "are not to be considered an all inclusive list" and that "technological advances can be expected to further expand the means available" (H.R. Rep. No. 101-485, pt. 2, at 84, 108 (1990)).
This is not a loophole. It is the point. The auxiliary aids framework was built to accommodate technologies that did not yet exist — because Congress understood that the tools would change even as the need remained constant.
The technology-evolution principle
Every major assistive technology now considered a standard auxiliary aid was once new, imperfect, and met with skepticism. Consider the progression:
- TTY devices (1980s-90s) — let deaf people communicate by phone. Now a standard accommodation.
- Screen readers (1990s) — converted visual text to audio for blind users. Now a standard accommodation.
- Real-time captioning / CART (1990s-2000s) — converted speech to written text for deaf users. Now a standard accommodation.
- Video remote interpreting (2000s) — provided sign language interpreters via video. Now codified in federal regulations (28 CFR Section 36.303(b)(1)).
- Text-to-speech software (2000s) — read text aloud for people with visual or reading disabilities. Now a standard accommodation.
The pattern is always the same: a new technology emerges that helps people with disabilities access information. At first it is unfamiliar. Then it becomes accepted. Then it becomes required.
AI comprehension tools are at the beginning of this same path. They serve the same fundamental function as every tool on that list — transforming information from a format the user cannot access into one they can. The difference is the disability they serve: not blindness or deafness, but cognitive and reading-related barriers to comprehension.
Functional equivalence: the screen reader comparison
A screen reader converts visual text into audio so a blind person can understand it. An AI comprehension tool converts complex text into simplified language so a person with a cognitive disability can understand it. The function is the same: making information accessible to someone who cannot process it in its original form.
This is not a loose analogy. The legal standard is "effective communication" — and both tools achieve it for different populations. A screen reader without audio is useless to a blind user. A dense medical document without simplification is equally useless to a person with an intellectual disability who reads at a third-grade level.
The ADA explicitly includes "reading" and "communicating" among the major life activities it protects (42 U.S.C. Section 12102(2)(A)). People whose disabilities substantially limit reading comprehension are entitled to auxiliary aids just as much as people whose disabilities affect vision or hearing.
The comprehension gap is real and measured
This is not a theoretical problem. The gap between how documents are written and how people can read them is enormous and well-documented:
- Hospital discharge instructions are written at a 10th-grade reading level on average. The AMA recommends a 6th-grade level. A 2024 study in JAMA Network Open found that 88.7% of discharge summaries exceed a 7th-grade reading level.
- Medicaid renewal applications are written at an 11th- to 18th-grade reading level (mean: 15.5), while the average Medicaid enrollee reads at a 5th-grade level — a gap of 6 to 13 grade levels.
- IEP documents for special education students score at a 9.9 to 12.0 grade level. Only 13.3% of parents answer all five IEP comprehension questions correctly.
Fourteen percent of American adults — roughly 30 million people — scored Below Basic in prose literacy on the National Assessment of Adult Literacy. Sixty-one million Americans live with a disability. For people at the intersection of these groups, the documents that control access to healthcare, education, benefits, and employment may as well be written in a foreign language.
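The grade levels cited above come from standard readability formulas, most commonly the Flesch-Kincaid grade level, which scores text from average sentence length and average syllables per word. A minimal Python sketch (the vowel-group syllable counter is a rough heuristic, so scores are approximate; real readability tools use dictionary-based syllable counts):

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable count: number of vowel groups in the word (minimum 1)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / max(1, len(sentences))
            + 11.8 * syllables / max(1, len(words))
            - 15.59)
```

Running it on a dense passage and a simplified one makes the gap concrete: shorter sentences and shorter words are what drive a document from a 12th-grade score toward a 6th-grade one.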
AI tools actually work for this
The question of whether AI comprehension tools are effective enough to serve as auxiliary aids is not a matter of speculation. Peer-reviewed research has answered it:
- A 2024 study in JAMA Network Open found that GPT-4 transformed 50 hospital discharge summaries with dramatic results: understandability scores increased from 13% to 81%, reading level dropped from 11th grade to 6th grade, and word count decreased from 1,520 to 338 — all while preserving clinical accuracy.
- Research published in the International Journal of Medical Informatics (2024) found that AI reduced neurosurgery abstracts from a 12th-grade to a 5th-grade reading level with over 95% content preservation.
- A biomedical text simplification study achieved 92-96% accuracy with 95.3% faithfulness to the original content.
Some will object that AI tools can make mistakes. This is true — but it has never been the standard for auxiliary aids. Medical interpreters average 31 errors per clinical encounter, with 63% of those errors potentially clinically consequential (Flores et al., Pediatrics, 2003). Signed English transliteration achieves 61% accuracy. Automated speech recognition captioning runs at roughly 75% accuracy. None of these error rates have disqualified these technologies as auxiliary aids.
The legal standard is not perfection. It is "meaningful access" — established by the Supreme Court in Alexander v. Choate (1985). AI comprehension tools meet or exceed the accuracy of many accepted auxiliary aids.
What we are asking for
AI Access Alliance is asking federal agencies — the Department of Justice, the HHS Office for Civil Rights, and the Department of Education's Office for Civil Rights — to do what the statute already supports:
- Recognize that AI comprehension tools may qualify as auxiliary aids under the ADA, Section 504, and Section 1557 when used by people with disabilities to understand text they are authorized to read.
- Clarify that blanket AI-blocking policies — on patient portals, in university classrooms, on government benefits websites — should include exceptions for authorized accessibility tools.
- Confirm that covered entities should consider AI comprehension tools as part of the accommodation process when requested by individuals with comprehension-related disabilities.
We are not asking for new law. We are asking for existing law to be applied to modern tools — exactly as it was designed to be.
The window is now
Institutions are writing AI policies today that will determine access for years. Hospitals are deploying anti-bot measures on patient portals. Universities are issuing blanket AI bans. Government agencies are building digital services without considering AI accessibility.
If disability-rights perspectives are not part of these decisions, the default policies will be blanket restrictions — and the people who lose the most will be those who depend on these tools the most.
The law is clear. The technology works. The need is urgent. AI comprehension tools should be recognized as what they are: the next auxiliary aid in a progression that has been running for thirty-five years.
Sources
- Americans with Disabilities Act, 42 U.S.C. Section 12103(1) (definition of auxiliary aids and services)
- 42 U.S.C. Section 12182(b)(2)(A)(iii) (Title III effective-communication requirement)
- 28 CFR Section 36.303 (implementing regulations, illustrative examples of auxiliary aids)
- 42 U.S.C. Section 12102(2)(A) (major life activities including "reading" and "communicating")
- H.R. Rep. No. 101-485, pt. 2, at 84, 108 (1990) (legislative history on technology evolution)
- ADA.gov, Effective Communication guidance (DOJ interpretation of non-exhaustive list)
- Alexander v. Choate, 469 U.S. 287, 301 (1985) ("meaningful access" standard)
- JAMA Network Open (2024), GPT-4 discharge summary simplification study
- Flores et al., Pediatrics (2003), medical interpreter error rates
- Int'l J. Medical Informatics (2024), AI readability reduction studies
- CDC (2018), disability prevalence statistics
- National Assessment of Adult Literacy (NAAL, 2003), adult literacy statistics
- Wilson et al. (2009); Pati et al. (2012), Medicaid readability data
- East Tennessee State University (2003), IEP readability data