
When Accessibility Tools Can't Fix Themselves: A WCAG Reality Check

Marcus, Seattle area
Tags: digital, wcag, development, testing
A stylish and contemporary home office setup with laptop and desk accessories.
Photo by Ken Tomita on Pexels

I've been staring at something that perfectly captures the current state of accessibility tooling: a landing page for an AI-powered accessibility scanner that promises to "detect WCAG 2.2 violations across your entire site with a single click" — while failing four basic WCAG criteria itself.

The irony writes itself, but the implications run deeper than just corporate embarrassment. This case study reveals exactly why automated accessibility testing continues to fall short of its promises and what development teams actually need to build accessible experiences.

The Fundamental Failures

The page fails on structural accessibility basics that any "automated WCAG scanning" tool should catch:

  • Missing semantic landmarks: No <main>, <nav>, or <header> elements
  • Broken heading hierarchy: Jumps from H2 directly to H4
  • No content structure: Screen reader users can't navigate or understand page organization

These aren't edge cases or nuanced interpretation issues. According to WCAG 2.1 Success Criterion 1.3.1 (Info and Relationships), information and relationships conveyed through presentation must be programmatically determinable. When you skip from H2 to H4, you're breaking the logical document structure that assistive technology relies on.
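That H2→H4 jump is also mechanically detectable, which makes its presence on an "automated scanner" landing page all the more telling. Here is a minimal sketch of such a check using only Python's standard-library HTML parser; the class and method names are illustrative, not taken from any real scanning tool:

```python
from html.parser import HTMLParser

class HeadingOrderChecker(HTMLParser):
    """Collects h1-h6 levels in document order and flags skips of more than one level."""
    def __init__(self):
        super().__init__()
        self.levels = []

    def handle_starttag(self, tag, attrs):
        # Match exactly h1..h6 (two characters, 'h' plus a digit).
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            self.levels.append(int(tag[1]))

    def violations(self):
        # Any consecutive pair where the level jumps by more than one.
        return [(a, b) for a, b in zip(self.levels, self.levels[1:]) if b > a + 1]

checker = HeadingOrderChecker()
checker.feed("<h2>Features</h2><h4>Automated scanning</h4>")
print(checker.violations())  # -> [(2, 4)]
```

A check like this catches the structural skip, but notice what it cannot do: it has no idea which heading level the content actually warrants. That judgment still belongs to a human.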

The Real User Impact

Here's what actually happens when someone using NVDA or JAWS hits this page:

Navigation attempt: They press the 'H' key to jump between headings and get confused by the illogical H2→H4 jump. Is there missing content? Did something fail to load?

Content discovery: They try the landmark navigation shortcut (NVDA+F7 or JAWS+F3) and find... nothing. No main content area, no navigation menu, no header section. They're forced to arrow through every single element.

Mental model building: Without proper semantic structure, they can't quickly understand "this is the main content, this is navigation, this is supplementary." Every interaction requires more cognitive load.

As our research on assistive technology evolution shows, even the most advanced screen readers can't compensate for missing semantic foundations.

The Development Reality Check

What makes this particularly frustrating is how preventable these issues are. Adding proper landmarks takes maybe 20 minutes:

<header>
  <nav><!-- navigation menu --></nav>
</header>
<main>
  <h1>Make your product accessible to everyone</h1>
  <section>
    <h2>Everything you need for accessibility compliance</h2>
    <h3>Automated scanning</h3>
    <h3>AI-powered fixes</h3>
    <h3>Compliance reporting</h3>
  </section>
</main>
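When the markup can't be restructured right away, say a deeply nested div-based layout owned by another team, ARIA landmark roles are a recognized stopgap that exposes the same structure to assistive technology, though native elements remain the better choice:

```html
<div role="banner">
  <div role="navigation"><!-- navigation menu --></div>
</div>
<div role="main">
  <h1>Make your product accessible to everyone</h1>
</div>
```

The roles map directly onto the native elements (banner → <header>, navigation → <nav>, main → <main>), so migrating to real semantic elements later is a mechanical swap.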

But here's the operational capacity issue: if your team is building accessibility tools without understanding these fundamentals, what does that say about your development process? About code review? About testing with actual assistive technology?

The Bigger Pattern

This isn't just about one company's landing page. It's symptomatic of what we've documented as the implementation crisis — organizations that can articulate accessibility principles but struggle with basic execution.

The page promises "AI-powered fixes" and "instant compliance reporting," but demonstrates exactly why automated tools remain insufficient. These structural issues require human understanding of:

  • Document semantics: How content relates hierarchically
  • User mental models: How people navigate and understand interfaces
  • Assistive technology behavior: What landmarks and headings actually do

No AI can fix missing <main> elements if developers don't understand why they matter.

What Development Teams Actually Need

Instead of chasing the "single click" automation promise, teams need systematic approaches:

Semantic-first development: Start with proper HTML structure before adding styling or interactivity. Use semantic elements (<main>, <nav>, <section>) as the foundation.

Integrated testing workflows: Build accessibility checking into your development process, not as an afterthought. The Pacific ADA Center's guidance on systematic testing emphasizes this integration approach.

Real user validation: Test with actual assistive technology users, not just automated scanners. Even this page's own testimonials ("exactly what our legal team needed") point to outcomes that only come from understanding actual user needs.

Foundational knowledge: Ensure your team understands why landmarks matter, how heading hierarchies work, and what screen readers actually do with semantic markup.
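The integrated-testing point above can start very small: even a tiny landmark smoke test run in CI against rendered pages would have flagged this landing page. A minimal sketch, again using only Python's standard library; the REQUIRED_LANDMARKS set and function names are assumptions for illustration, not any tool's API:

```python
from html.parser import HTMLParser

# Baseline landmarks this sketch requires on every page.
REQUIRED_LANDMARKS = {"main", "nav", "header"}

class LandmarkCollector(HTMLParser):
    """Records which semantic landmark elements appear in a page."""
    def __init__(self):
        super().__init__()
        self.found = set()

    def handle_starttag(self, tag, attrs):
        if tag in REQUIRED_LANDMARKS:
            self.found.add(tag)

def missing_landmarks(html):
    """Return the set of required landmark elements absent from the page."""
    collector = LandmarkCollector()
    collector.feed(html)
    return REQUIRED_LANDMARKS - collector.found

page = "<div><h2>Make your product accessible</h2></div>"
print(sorted(missing_landmarks(page)))  # -> ['header', 'main', 'nav']
```

Failing the build when the set is non-empty turns "we should add landmarks" into a gate nobody can skip, which is the whole point of integrating the check rather than bolting it on later.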

The Path Forward

There's cautious optimism here. The fact that this company is building accessibility tools suggests growing market demand. Their customer testimonials indicate real teams solving real problems. But the execution gap reveals why organizational accessibility maturity requires more than just tooling.

For development teams using or building accessibility tools: start with the fundamentals. Automated scanning can catch obvious violations, but it can't replace understanding how disabled people actually use your products. Master the basics — proper semantics, logical structure, meaningful markup — before relying on AI to solve accessibility for you.

The most sophisticated accessibility tool is still a developer who understands why <main> matters and takes 30 seconds to add it. Sometimes the best fix is the simplest one.

About Marcus

Seattle-area accessibility consultant specializing in digital accessibility and web development. Former software engineer turned advocate for inclusive tech.

Specialization: Digital accessibility, WCAG, web development

View all articles by Marcus

Transparency Disclosure

This article was created using AI-assisted analysis with human editorial oversight. We believe in radical transparency about our use of artificial intelligence.