
Web Accessibility Monitoring: Continuous Compliance

Alexander Xrayd

Accessibility Expert

Read time: 5 min
Published: Nov 1, 2025


Accessibility isn't a one-time project. Every code change, every content update, every third-party integration can introduce new accessibility barriers. Without continuous monitoring, you're flying blind.

Accessibility monitoring software provides ongoing visibility into your site's accessibility status. It catches regressions early, tracks improvement over time, and generates evidence of compliance efforts.

This guide covers how to set up effective accessibility monitoring – from choosing metrics to configuring alerts to building dashboards.

Why continuous monitoring matters

Think about everything that changes on a website:

Code changes: Every sprint adds new features and potential new issues
Content updates: Editors publish images without alt text or create inaccessible PDFs
Third-party scripts: Chat widgets, analytics, cookie banners – often inaccessible
A/B tests: Variants may not be tested for accessibility
CMS updates: New versions may change accessibility behavior

Without monitoring, problems accumulate. What was compliant last quarter may have dozens of new issues today.

The alternative is scary:

Annual audits find problems only after they've existed for months. By then customers have struggled, regulatory risk has accumulated, and fixes cost more because the code has changed underneath the issues.

Continuous monitoring:

Catches problems within hours or days, not months
Provides evidence of ongoing compliance efforts
Shows trends – are you improving or regressing?
Holds teams accountable with visibility

A study showed that websites lose an average of 3% of their accessibility score per month without active monitoring.

What to monitor

Not all metrics are equally useful. Focus on what drives action (a small sketch of computing the primary metrics follows the lists below):

Primary metrics (dashboard headlines):

Critical issues count: WCAG Level A violations that block users
Serious issues count: WCAG Level AA violations that significantly impair
Total issues: Overall count for trending
Percentage change: Getting better or worse since last period?

Secondary metrics (deeper analysis):

Issues by page/section: Identify problem areas
Issues by type: Contrast, alt text, ARIA – shows where the team needs training
Time to remediate: How long does it take to fix reported issues?
New vs. recurring: Are you fixing things or just finding the same problems?

What NOT to focus on:

Vanity metrics that can't be acted on
Counts without severity (not all issues are equal)
Absolute numbers without context (a 10-page site shouldn't be compared to a 1000-page site)
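
The sketch below shows one way to compute the primary metrics from raw scan results. The Issue shape is an assumption loosely modeled on axe-core output; adapt the field names to whatever your scanner actually emits.

```typescript
// Hypothetical issue shape, loosely modeled on axe-core output.
interface Issue {
  id: string;                                            // failed rule, e.g. "color-contrast"
  impact: "critical" | "serious" | "moderate" | "minor"; // severity
  page: string;                                          // URL where it was found
}

interface PrimaryMetrics {
  critical: number;
  serious: number;
  total: number;
  changePct: number; // % change in total vs. the previous period
}

function primaryMetrics(current: Issue[], previousTotal: number): PrimaryMetrics {
  const critical = current.filter((i) => i.impact === "critical").length;
  const serious = current.filter((i) => i.impact === "serious").length;
  const total = current.length;
  // Guard against division by zero on the very first scan.
  const changePct =
    previousTotal === 0 ? 0 : ((total - previousTotal) / previousTotal) * 100;
  return { critical, serious, total, changePct };
}
```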

Setting up monitoring

Step 1: Choose your tool

Options range from free (scheduled Lighthouse CI runs) to enterprise (Siteimprove). Xrayd sits in the middle – powerful enough for serious monitoring, with pricing accessible to teams and agencies.

Step 2: Configure scan scope

Which domains/subdomains?
Which pages? (All, or representative sample?)
How to handle authentication?
How to handle SPAs?
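
One way to make these decisions explicit is to capture them in a config object. The shape below is hypothetical – most monitoring tools accept something similar, but check your tool's documentation for the exact format.

```typescript
// Hypothetical scan-scope config; field names are illustrative, not any tool's real API.
interface ScanScope {
  domains: string[];           // which domains/subdomains to cover
  include: string[];           // URL patterns to scan ("/**" = everything)
  exclude: string[];           // patterns to skip (e.g. logout links)
  sampleSize?: number;         // cap pages for very large sites
  auth?: { loginUrl: string; user: string; passwordEnvVar: string };
  waitForNetworkIdle: boolean; // give SPAs time to render before auditing
}

const scope: ScanScope = {
  domains: ["www.example.com", "app.example.com"],
  include: ["/**"],
  exclude: ["/logout", "/admin/**"],
  sampleSize: 200,
  auth: { loginUrl: "/login", user: "a11y-bot", passwordEnvVar: "A11Y_BOT_PW" },
  waitForNetworkIdle: true,
};
```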

Step 3: Set scan frequency

Production: Daily or weekly
Staging: With each deploy
Development: In the CI/CD pipeline (a minimal gate is sketched below)
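
For the CI/CD case, a gate might look like the sketch below, using Playwright with the @axe-core/playwright package. The URL list and the fail-on-critical policy are assumptions – tune them to your own site and standards.

```typescript
// Minimal CI gate: scan key pages and fail the build on critical violations.
import { chromium } from "playwright";
import { AxeBuilder } from "@axe-core/playwright";

// Assumed staging URLs -- replace with your own key pages.
const PAGES = [
  "https://staging.example.com/",
  "https://staging.example.com/checkout",
];

async function main(): Promise<void> {
  const browser = await chromium.launch();
  let criticalCount = 0;

  for (const url of PAGES) {
    const page = await browser.newPage();
    await page.goto(url);
    const results = await new AxeBuilder({ page }).analyze();
    const critical = results.violations.filter((v) => v.impact === "critical");
    criticalCount += critical.length;
    for (const v of critical) {
      console.error(`[critical] ${url}: ${v.id} -- ${v.help}`);
    }
    await page.close();
  }

  await browser.close();
  process.exit(criticalCount > 0 ? 1 : 0); // non-zero exit fails the CI job
}

main();
```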

Step 4: Establish baselines

Run initial scans to understand current state. This becomes your baseline for measuring improvement.
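
One simple way to persist that baseline, sketched below, is to write the first scan's counts to a JSON file and diff later scans against it. The file name and shape are arbitrary choices, not any tool's convention.

```typescript
import { existsSync, readFileSync, writeFileSync } from "node:fs";

interface Baseline {
  date: string;
  critical: number;
  serious: number;
  total: number;
}

const BASELINE_FILE = "a11y-baseline.json"; // arbitrary file name

function loadOrCreateBaseline(current: Baseline): Baseline {
  if (existsSync(BASELINE_FILE)) {
    return JSON.parse(readFileSync(BASELINE_FILE, "utf8")) as Baseline;
  }
  // First run: the current scan becomes the baseline.
  writeFileSync(BASELINE_FILE, JSON.stringify(current, null, 2));
  return current;
}
```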

Step 5: Set targets

Short-term: Eliminate critical issues within 30 days
Medium-term: Reduce serious issues by 50% in 90 days
Long-term: Maintain zero critical, under 10 serious
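
Targets like these are most useful when they're machine-checkable, so the same numbers drive both reports and CI gates. A minimal sketch, encoding the long-term target above:

```typescript
interface Targets {
  maxCritical: number;
  maxSerious: number;
}

// Long-term target from above: zero critical, under 10 serious.
const longTerm: Targets = { maxCritical: 0, maxSerious: 9 };

function meetsTargets(critical: number, serious: number, t: Targets): boolean {
  return critical <= t.maxCritical && serious <= t.maxSerious;
}
```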

Effective alerting

Alerts should warn about things requiring immediate attention – not every change. A sketch of this filtering logic follows the examples below.

Good alerts:

'New critical issue discovered on checkout page'
'Issue count increased by >50% since last scan'
'Page X has become inaccessible (keyboard navigation blocked)'

Bad alerts (alert fatigue):

'Scan complete' (informative, not actionable)
'One new low-priority issue' (save for report)
Every issue on first scan (overwhelming)
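
Here is a sketch of that filtering logic, with delivery to Slack via an incoming webhook. The thresholds and the SLACK_WEBHOOK_URL environment variable are assumptions for illustration.

```typescript
interface ScanSummary {
  critical: number;
  total: number;
  keyboardBlockedPages: string[]; // pages where keyboard navigation is blocked
}

// Only generate alerts for the "good alert" cases above.
function buildAlerts(curr: ScanSummary, prev: ScanSummary): string[] {
  const alerts: string[] = [];
  if (curr.critical > prev.critical) {
    alerts.push(`New critical issue(s) discovered: ${curr.critical - prev.critical}`);
  }
  if (prev.total > 0 && curr.total > prev.total * 1.5) {
    alerts.push(`Issue count up >50% since last scan (${prev.total} -> ${curr.total})`);
  }
  for (const page of curr.keyboardBlockedPages) {
    alerts.push(`Keyboard navigation blocked on ${page}`);
  }
  return alerts; // empty = stay quiet and save details for the digest
}

// Post to a Slack incoming webhook (requires Node 18+ for global fetch).
async function sendToSlack(alerts: string[]): Promise<void> {
  if (alerts.length === 0) return; // no alert fatigue
  await fetch(process.env.SLACK_WEBHOOK_URL!, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: alerts.join("\n") }),
  });
}
```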

Alert channels:

Critical: Slack/Teams for immediate visibility
Weekly summaries: Email digest
Dashboards: For on-demand checking

Escalation:

If a critical issue isn't addressed within 48 hours, escalate to the product owner or tech lead.

Dashboards and reporting

Different stakeholders need different views:

For leadership:

Overall score/trend
Risk level (critical issues exist?)
Comparison to last quarter
High-level recommendations

For development team:

Issue details by component/page
Code-level information
Priority/severity
Links to specific problems

For content team:

Content-specific issues (alt text, headings, link text)
Pages affected
Simple guidance on fixing

Dashboard layout example:

Row 1: Key numbers (critical, serious, total, trend arrow)

Row 2: Trend chart (issues over time)

Row 3: Breakdown (by type, by page)

Row 4: Action items (unresolved critical issues with age)
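
Whatever charting tool renders it, the data behind that layout can stay small. A hypothetical shape:

```typescript
interface DashboardData {
  // Row 1: key numbers
  headline: {
    critical: number;
    serious: number;
    total: number;
    trend: "up" | "down" | "flat";
  };
  // Row 2: issues over time
  trendSeries: { date: string; total: number }[];
  // Row 3: breakdowns
  breakdown: { byType: Record<string, number>; byPage: Record<string, number> };
  // Row 4: unresolved critical issues with age
  actionItems: { page: string; issue: string; ageDays: number }[];
}
```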

Reporting cadence:

Weekly: Technical team review

Monthly: Management summary

Quarterly: Executive report with trends


Frequently Asked Questions

How is monitoring different from periodic audits?
Audits are point-in-time assessments. Monitoring is continuous. Audits go deep on a sample; monitoring goes broad across everything. Both are needed – audits for thoroughness, monitoring for catching regression.
Can monitoring replace manual testing?
No. Monitoring uses automated tools that catch 30-40% of issues. Manual testing is still needed for screen reader experience, cognitive accessibility, and context-dependent problems.
