Across the country, institutions are writing AI policies. Hospitals are deciding whether to allow AI tools on patient portals. Universities are setting rules about AI use in classrooms. Government agencies are building digital services and choosing whether to block automated tools.

Most of these policies are being written without input from disability-rights experts. The result is a wave of blanket restrictions that do not distinguish between a scraping bot and a person with a disability using AI to understand their own medical records.

It does not have to be this way. Institutions can address legitimate security and integrity concerns without blocking the accessibility tools that millions of people depend on. Here is what a good AI accessibility policy looks like — sector by sector.

The core principle

Every good AI accessibility policy starts from the same legal obligation: covered entities must ensure effective communication with people who have disabilities. This is required by the Americans with Disabilities Act, Section 504 of the Rehabilitation Act, and — for healthcare specifically — Section 1557 of the Affordable Care Act.

The practical implication is straightforward: any AI policy that restricts tool use must include an exception for authorized accessibility tools, or the institution must provide an equally effective alternative accommodation. A policy that blocks AI without either of these safeguards may constitute an accessibility barrier under existing law.

For hospitals and healthcare systems

Healthcare is where AI comprehension tools may matter most — and where blocking them carries the highest stakes. Hospital discharge instructions are written at a 10th-grade reading level on average. Eighty-one percent of Epic-generated discharge documents exceed a 6th-grade reading level. Medicaid enrollment forms require an 11th- to 18th-grade reading level. For a patient with an intellectual disability who reads at a 3rd- or 4th-grade level, these documents are functionally unreadable without assistance.

A good healthcare AI accessibility policy should include these elements:

  • Allow patient-authorized AI tools on patient portals. If a patient wants to use an AI comprehension tool to understand their own medical records, the portal should not block that tool. Patients have a legal right to access their health data through third-party applications under CMS Patient Access API rules. Anti-bot measures should be calibrated to allow authorized accessibility tools.
  • Provide an institutional AI comprehension option. Not every patient will have their own AI tool. Hospitals should consider offering a built-in "explain this in plain language" feature within their patient portal. This serves patients who need comprehension support regardless of whether they bring their own tools.
  • Train discharge planning staff to ask patients whether they use AI or other tools to help them understand health information — and to document that in the accommodation process.
  • Ensure HIPAA compatibility. AI comprehension tools can be covered under existing HIPAA mechanisms — either through the patient's own authorization for personal tools, or through business associate agreements for institutional tools. Privacy protection and accessibility are not in conflict.
  • Review anti-bot and CAPTCHA deployments for accessibility impact. The Fourth Circuit's ruling in Real Time Medical Systems v. PointClickCare (2025) found that CAPTCHAs blocking automated access to health records "plausibly violated" the information-blocking prohibition when no specific security risk was articulated. Bot-detection systems that prevent accessibility tools from functioning create legal exposure under multiple federal frameworks. A sketch of what calibrated bot detection could look like follows this list.
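
What "calibrated" means in practice will vary by portal, but the shape of the rule is simple: bot challenges key off whether a client has been authorized by the patient, not off whether traffic looks automated. The sketch below is a minimal, hypothetical illustration in Python; the client IDs, the header name, and the function are invented for this example, and a real portal would tie the exemption to its existing third-party app registration (the same mechanism used for Patient Access API apps).

    # Hypothetical sketch: decide whether a patient-portal request should get a
    # CAPTCHA or other bot challenge. Client IDs and the header name are
    # invented for illustration; a real deployment would read them from the
    # portal's third-party app registry.

    AUTHORIZED_ACCESSIBILITY_CLIENTS = {
        "example-screen-reader-companion",
        "example-ai-comprehension-tool",
    }

    def should_challenge(request_headers: dict, token_client_id: str | None) -> bool:
        """Return True if the request should receive a bot challenge."""
        # Clients the patient has explicitly authorized skip the challenge.
        if token_client_id in AUTHORIZED_ACCESSIBILITY_CLIENTS:
            return False
        # Self-identifying as an assistive tool is not enough on its own;
        # the exemption rides on authorization, not on a spoofable header.
        if request_headers.get("X-Assistive-Tool") and token_client_id is None:
            return True
        # Everything else goes through the portal's normal bot-detection policy.
        return True

    if __name__ == "__main__":
        print(should_challenge({}, "example-ai-comprehension-tool"))   # False: authorized tool
        print(should_challenge({"X-Assistive-Tool": "1"}, None))       # True: unauthenticated

The design point is that the exemption attaches to a client the patient has authorized, so it removes the barrier for accessibility tools without weakening defenses against unauthenticated scrapers.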

For universities and school districts

Education is where the tension between AI restriction and AI accessibility is sharpest. Schools have legitimate concerns about academic integrity. But blanket AI bans without disability exemptions may violate Section 504 and IDEA — and the evidence suggests they are already harming students who need AI most.

Research from the Center for Democracy and Technology (2024-2025) found that 72% of students with IEPs and 504 plans have used generative AI — a higher rate than their peers — because they need it. These same students are more likely to be disciplined for AI use. AI detection tools disproportionately flag non-standard writing patterns — the same patterns produced by dyslexia and learning disabilities.

A good education AI accessibility policy should include these elements:

  • Exempt disability accommodations from AI restrictions. Any campus-wide AI policy must include a clear exception: students with documented disabilities who use AI as a comprehension aid are not violating academic integrity policies. This exception should be stated explicitly in the policy — not left to individual faculty discretion.
  • Include AI comprehension tools in the IEP and 504 plan process. IDEA requires IEP teams to consider assistive technology at every IEP meeting (34 CFR Section 300.324(a)(2)(v)). IDEA's definition of assistive technology is broad enough to include AI comprehension tools. IEP and 504 plan teams should actively consider whether AI tools would benefit students with comprehension barriers.
  • Distinguish between AI-for-comprehension and AI-for-generation. A student using AI to understand what a passage means is doing something fundamentally different from a student using AI to write an essay. Policies should reflect this distinction. Comprehension aids help students engage with the material. Content generators replace the student's own work.
  • Review AI detection practices for disability impact. If your institution uses AI detection software, audit it for disparate impact on students with disabilities. A Stanford study found that detectors falsely flag writing by non-native English speakers at significantly higher rates, and the same patterns apply to students with dyslexia and learning disabilities. The first federal lawsuit alleging ADA disability discrimination from AI detection (Jane Doe v. University of Michigan, 2026) should be a warning signal.
  • Make school-generated documents AI-compatible. IEP documents, enrollment forms, and financial aid materials should be provided in formats that AI tools can process — meaning digital text, not scanned images. This costs nothing and removes a major barrier for parents and students who use comprehension tools. A quick way to audit existing documents for this is sketched after this list.
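
A district can check this in bulk. The sketch below flags PDFs with little or no extractable text, which usually means a scanned image that neither screen readers nor AI comprehension tools can process. It assumes the pypdf library; the folder name and the 50-character threshold are arbitrary choices for illustration, and anything flagged should still be reviewed by a person.

    # Rough audit: flag PDFs that contain little or no extractable text
    # (likely scanned images). Requires: pip install pypdf
    from pathlib import Path
    from pypdf import PdfReader

    def has_extractable_text(pdf_path: Path, min_chars_per_page: int = 50) -> bool:
        """Return True if the PDF averages a plausible amount of real text per page."""
        reader = PdfReader(str(pdf_path))
        if not reader.pages:
            return False
        total_chars = sum(len((page.extract_text() or "").strip()) for page in reader.pages)
        return total_chars >= min_chars_per_page * len(reader.pages)

    if __name__ == "__main__":
        for pdf in sorted(Path("district_documents").glob("*.pdf")):  # hypothetical folder
            if not has_extractable_text(pdf):
                print(f"Likely a scanned image, needs a text version: {pdf.name}")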

For government agencies

Government agencies produce some of the most consequential and least readable documents that people encounter. During the Medicaid continuous enrollment unwinding (2023-2024), approximately 25 million people were disenrolled. Of those, 69-71% were disenrolled for procedural reasons — not because they were ineligible, but because they could not navigate the renewal process. For enrollees with cognitive or reading-related disabilities facing forms written at the 11th- to 18th-grade level, AI comprehension tools could be the difference between keeping and losing healthcare coverage.

A good government AI accessibility policy should include these elements:

  • Ensure benefits websites are AI-tool compatible. Bot-detection and CAPTCHA systems on government benefits portals should not block AI accessibility tools. The Plain Writing Act of 2010 already requires federal agencies to use clear language — but compliance is inconsistent and the Act has no private right of action. Making portals compatible with AI comprehension tools provides a practical alternative for users who cannot understand forms even when they are written in "plain language."
  • Offer built-in comprehension support. Government portals should consider adding AI-powered plain-language explanations directly into their application interfaces. A "What does this mean?" button next to each section of a Medicaid renewal form would serve all users — not just those with disabilities — while meeting accessibility obligations.
  • Train benefits caseworkers to recognize AI comprehension tools as legitimate accessibility aids. If an applicant brings an AI tool to a benefits appointment, the caseworker should treat it the same way they would treat a screen reader or an interpreter — as an accommodation, not a threat.
  • Test forms with actual users. Before publishing benefits applications, test them with people who have the reading levels typical of the applicant population. If the average Medicaid enrollee reads at a 5th-grade level, test the form with people who read at that level. If they cannot complete it without AI assistance, that is evidence that AI comprehension tools should be part of the accommodation framework. A rough automated readability check that can screen drafts before user testing is sketched after this list.
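
Automated scores are no substitute for testing with real applicants, but they are a cheap first screen. Below is a minimal sketch of the standard Flesch-Kincaid grade-level formula with a crude syllable heuristic; the sample sentences are invented, and purpose-built libraries such as textstat compute this more carefully.

    # Rough Flesch-Kincaid grade-level estimate for a draft notice or form.
    # The syllable counter is a crude vowel-group heuristic, so treat the
    # output as a screening number, not a measurement.
    import re

    def count_syllables(word: str) -> int:
        # Count groups of consecutive vowels; every word counts at least one.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def fk_grade(text: str) -> float:
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        if not words:
            return 0.0
        syllables = sum(count_syllables(w) for w in words)
        # Standard Flesch-Kincaid grade-level formula
        return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

    if __name__ == "__main__":
        bureaucratic = ("You must return the enclosed redetermination documentation "
                        "within thirty days or your eligibility will be terminated.")
        plain = "Send this form back within 30 days or you will lose your coverage."
        print(round(fk_grade(bureaucratic), 1))  # higher grade level
        print(round(fk_grade(plain), 1))         # lower grade level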

The universal design opportunity

Good accessibility policy does not just protect people with disabilities. It often helps everyone. Curb cuts were designed for wheelchair users — now they are used by parents with strollers, delivery workers with carts, and travelers with luggage. The same principle applies here.

Fourteen percent of American adults — roughly 30 million people — score Below Basic in prose literacy. Many of them do not have a diagnosed disability. When institutions make their documents AI-comprehension-friendly — by providing digital text rather than scanned images, by building in plain-language features, by allowing comprehension tools to function — they help not just people with documented disabilities but everyone who struggles with complex text.

The institutions that get this right will not just comply with the law. They will serve their patients, students, and constituents better. They will reduce the errors, missed appointments, dropped enrollments, and failed renewals that result from incomprehensible documents.

The cost of getting it wrong

Institutions that write AI policies without accessibility exceptions face growing legal exposure. In healthcare, blocking AI tools on patient portals may simultaneously violate the CMS Patient Access API mandate, the 21st Century Cures Act information-blocking prohibition, and ADA/Section 1557 effective-communication requirements. In education, the first federal ADA lawsuit over AI detection has already been filed. In 2024, over 4,000 ADA lawsuits were filed alleging digital accessibility failures.

The legal risk of allowing accessibility tools is low. The legal risk of blocking them without alternatives is growing every day.

Institutions are writing these policies right now. The question is whether they will write them with accessibility in mind — or whether people with disabilities will have to fight for exceptions after the fact.

We recommend the first approach. It is better law, better policy, and better for the people these institutions serve.

Sources