How to Test Screen Reader Compatibility

By Andrew Martin on 17th September, 2025

    Screen reader testing ensures websites are accessible to users who rely on assistive technologies. It involves evaluating how screen readers interpret and navigate digital content, helping identify issues like poor structure, missing labels, or inaccessible elements. Here’s what you need to know:

    • Why it matters: Poor design can make navigation frustrating for visually impaired users. Testing helps fix issues and ensures compliance with standards like WCAG and ADA.
    • Tools to use: Common screen readers include JAWS (Windows, paid), NVDA (Windows, free), VoiceOver (macOS/iOS, built-in), and Narrator (Windows, built-in).
    • Setup tips: Install and configure tools properly, clear browser cache, and create a controlled testing environment.
    • Testing focus: Check headings, navigation, forms, interactive elements, and dynamic content. Use proper labels, ARIA attributes, and logical structures.
    • Improvement process: Categorize issues (critical, moderate, minor), fix them, and retest with multiple screen readers.

    Start small by focusing on key pages, and expand testing efforts over time to ensure accessibility for all users.

    Step-By-Step Screen Reader Testing with NVDA and JAWS

    Choosing Screen Readers for Testing

    When selecting screen readers for accessibility testing, it’s crucial to align your choice with your testing objectives and the needs of your audience. Factors like unique features, compatibility with browsers and operating systems, and platform support should guide your decision.

    JAWS (Job Access With Speech) is a paid screen reader for Windows developed by Freedom Scientific. It stands out with features like virtual cursor navigation, customizable speech settings, and advanced keyboard controls. JAWS is compatible with major browsers such as Chrome, Firefox, and Edge. Its licensing options are designed to accommodate different user requirements.

    NVDA (NonVisual Desktop Access) is a free, open-source screen reader for Windows. Maintained by a dedicated community, it supports modern web standards across all popular browsers. NVDA also integrates with multiple speech synthesizers and braille displays, making it an excellent choice for budget-conscious teams while still delivering robust testing capabilities.

    VoiceOver comes built into Apple devices, including macOS, iOS, and iPadOS. It offers unique features like rotor navigation and gesture-based controls on mobile devices, making it an essential tool for testing the accessibility of responsive designs and mobile applications.

    Narrator, Microsoft’s built-in screen reader for Windows, provides a more basic experience compared to specialized tools like JAWS or NVDA. However, it’s a solid option for establishing a baseline in accessibility testing.

    How to Select a Screen Reader

    The right screen reader depends on your target audience, platform requirements, budget, and the complexity of your application.

    For example, desktop users often prefer JAWS or NVDA, while mobile users typically rely on VoiceOver for iOS or TalkBack for Android. Since browser performance varies across screen readers, testing different browser–screen reader combinations can reveal specific accessibility challenges.

    Platform compatibility is another key factor. If you’re testing Windows applications, JAWS and NVDA are indispensable. For macOS or iOS apps, VoiceOver is the go-to tool. Smaller teams may find free options like NVDA a practical starting point, while larger teams or complex projects might benefit from the advanced features of commercial tools.

    Team expertise also plays a role. Familiarity with a specific screen reader, combined with access to detailed documentation and active community support, can streamline the process. Starting with one or two tools that cover your primary audience’s needs is a smart approach, allowing you to expand your testing suite as feedback and requirements evolve.

    Once you’ve chosen your tools, the next step is to set up the screen readers and configure your testing environment.

    Setting Up for Screen Reader Testing

    Once you’ve chosen the right screen readers, setting them up correctly is the next step to ensure reliable testing results. Since each tool has specific requirements, a proper configuration from the start can save time and eliminate potential issues later.

    Installing and Configuring Screen Readers

    NVDA is a free and open-source screen reader. To get started, download the latest version from the official NVDA website and run the installer with administrator privileges. During the installation, you’ll have the option to create a portable version or perform a full system-wide installation. For thorough testing, a full installation is recommended as it integrates seamlessly with Windows services and provides complete functionality.

    Once installed, you can customize NVDA’s speech settings to fit your testing needs. Open the NVDA menu by pressing NVDA + N, then navigate to Preferences > Settings. Under the Speech category, you can enable helpful options like "Speak typed characters" and "Speak typed words" for more detailed feedback during testing.

    JAWS requires a purchased license. Download the installer from the Freedom Scientific website and follow the steps in the installation wizard. After installation, open the Settings Center (accessible via Insert + F2) to configure JAWS. Setting the verbosity level to "Beginner" can be helpful for detailed announcements about page elements, making it easier to identify issues.

    VoiceOver, built into macOS, can be activated through System Settings > Accessibility > VoiceOver (System Preferences on older macOS versions) or by pressing Command + F5. When you launch VoiceOver for the first time, the VoiceOver Utility will guide you through the setup process. This includes selecting voices and adjusting navigation preferences. Fine-tune these settings to ensure the best feedback during navigation.

    Narrator, built into Windows 10 and 11, can be enabled via Settings > Accessibility > Narrator or by pressing Windows + Ctrl + Enter. While Narrator offers fewer customization options, you can still adjust voice settings and enable features like "Hear advanced detail about controls" for more comprehensive feedback.

    With the screen readers installed and configured, the next step is to prepare a controlled environment for consistent and accurate testing.

    Preparing the Testing Environment

    The choice of browser plays a key role in screen reader performance. For example, Chrome is highly compatible with NVDA, Firefox works well with JAWS, and Safari is ideal for VoiceOver on macOS.

    To ensure consistent results, start by clearing your browser cache and disabling any extensions that could interfere with accessibility features. Ad blockers, in particular, can alter page structures, so it’s a good idea to create dedicated browser profiles specifically for accessibility testing.

    When testing keyboard navigation, rely solely on keyboard shortcuts and screen reader commands. On Windows, you can enable "Mouse Keys" in the Accessibility (formerly Ease of Access) settings as an alternative, but focusing on keyboard inputs ensures a more accurate evaluation of navigation.

    Set up your workspace thoughtfully. Using dual monitors can be helpful – one screen for the application being tested and the other for taking notes or reviewing screen reader output logs. This setup minimizes accidental interactions with the test environment and keeps your workflow organized.

    Before diving into testing, take time to review the structure of the website or application. Examine the HTML layout, identify interactive elements, and note any custom components. This preparation helps you distinguish between screen reader limitations and actual accessibility issues. Creating a checklist that includes navigation landmarks, headings, form labels, image alt text, and interactive elements ensures consistency across tests.

    For audio clarity, use high-quality headphones to clearly hear screen reader announcements. Test the audio output to ensure clarity without overwhelming volume. If available, enable audio ducking to reduce background noise during announcements, making it easier to focus on the feedback.

    Lastly, consider recording your testing sessions. Tools like OBS Studio or the built-in Windows Game Bar can capture both the visual navigation and the screen reader’s audio output. These recordings provide valuable context for identifying and addressing accessibility issues later.

    With screen readers installed and your environment ready, you’re set to begin systematic testing to ensure your digital products meet the needs of users who rely on assistive technology.

    Running Screen Reader Tests

    When testing your website for screen reader accessibility, it’s essential to take a systematic approach. This means carefully evaluating each component of your site using audio feedback to ensure it works as intended.

    Testing Website Structure and Navigation

    Start by assessing how your site’s structure translates for screen readers. Open your website, activate your chosen screen reader, and listen carefully to how the content is presented. The goal is to confirm that the structure makes sense when experienced through audio alone.

    Use heading and landmark navigation to check the hierarchy. For instance:

    • NVDA: Press "H" for headings.
    • JAWS: Use "Insert + F6" for the headings list.
    • VoiceOver: Try "Control + Option + Command + H."

    Headings should follow a logical order: H1 for the main page title, H2 for primary sections, and H3 for subsections. Landmarks should be clearly announced, offering meaningful context about the purpose of each section.
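    The hierarchy described above maps to HTML like this (a minimal sketch; the titles and links are illustrative):

```html
<header>
  <nav aria-label="Main">
    <a href="/">Home</a>
  </nav>
</header>
<main>
  <h1>Accessibility Testing Guide</h1>   <!-- one H1: the page title -->
  <section>
    <h2>Choosing a Screen Reader</h2>    <!-- primary section -->
    <h3>Free Options</h3>                <!-- subsection: don't skip levels -->
  </section>
</main>
<footer>
  <p>Site footer</p>
</footer>
```

    The `<header>`, `<nav>`, `<main>`, and `<footer>` elements are announced as landmarks, so heading navigation and landmark navigation should both produce a sensible outline of this page.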

    Next, test the tab order by pressing the Tab key repeatedly without relying on screen reader-specific commands. The focus should move logically from one interactive element to the next, reflecting the visual layout. If the focus skips important elements or jumps unpredictably, this indicates a structural issue that needs fixing.

    Also, verify that page titles and descriptions are announced when the page loads. Navigate between pages and confirm that each has a unique, descriptive title to help users understand their location.

    Once the structure is validated, move on to testing the interactive elements.

    Testing Interactive Elements

    Interactive elements are critical for accessibility, so they need to function seamlessly with screen readers.

    • Forms: Use the Tab key to navigate through forms. Each field should have a clear label, and the screen reader should announce both the label and the field type (e.g., "Name, edit text"). Test error messages by deliberately leaving fields blank or entering incorrect data. These messages should be announced immediately and provide clear instructions for correcting the error.
    • Buttons and links: Buttons should be identified as "button", and links should include "link" in their announcement. Their text must describe the action or destination clearly, avoiding vague phrases like "Click here."
    • Dropdown menus and select boxes: Focus on these elements using both keyboard navigation and screen reader commands. The screen reader should announce the current selection and indicate that additional options are available. Use the arrow keys to navigate through the options, ensuring each one is announced.
    • Dynamic content: Test features like live notifications, form validation messages, or content that loads dynamically (e.g., infinite scroll). These updates should be announced promptly, so users are aware of changes.

    For custom elements like sliders or accordions, ensure that their states (e.g., expanded/collapsed, on/off) are announced, and the controls are described in a way that users can understand.
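    As a minimal sketch of the state announcements described above, a disclosure widget and a labeled select might look like this (IDs and copy are illustrative; toggling aria-expanded and hidden is left to a small script):

```html
<!-- Announced as "Shipping details, button, collapsed" -->
<button aria-expanded="false" aria-controls="shipping-details">
  Shipping details
</button>
<div id="shipping-details" hidden>
  <p>Orders ship within two business days.</p>
</div>

<!-- A labeled select: announces the label, role, and current choice -->
<label for="country">Country</label>
<select id="country">
  <option selected>United States</option>
  <option>Canada</option>
</select>
```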

    After verifying interactive elements, dive into testing specialized accessibility features.

    Testing Accessibility Features

    Specialized features are what make a website truly accessible for screen reader users.

    • "Skip to Content" link: This is a must-have for users who want to bypass repetitive navigation menus. After the page loads, press Tab – the first focusable element should be a skip link that directs users straight to the main content.
    • Images: Decorative images should be ignored by the screen reader, while informative ones need descriptive alt text that conveys their purpose. For complex visuals like charts or diagrams, check for longer descriptions or accessible alternatives.
    • Video and audio content: Videos should include captions, and playback controls should be accessible. When focusing on a video player, the screen reader should announce controls like play, pause, volume, and full-screen options.
    • Data tables: As you navigate tables, confirm that column and row headers are announced. Using the arrow keys to move through cells, the screen reader should provide context about the current row and column.
    • Keyboard shortcuts and access keys: If your site uses custom shortcuts, ensure they don’t conflict with screen reader commands. These shortcuts should be easy to discover and well-documented so users can take advantage of them.
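    The skip link and image handling above can be sketched in HTML (class names, file names, and content are illustrative):

```html
<body>
  <!-- First focusable element on the page: lets users bypass the nav -->
  <a class="skip-link" href="#main-content">Skip to content</a>
  <nav aria-label="Main">
    <a href="/docs">Docs</a>
  </nav>
  <main id="main-content">
    <h1>Getting Started</h1>
    <!-- Decorative image: empty alt means screen readers skip it -->
    <img src="divider.png" alt="">
    <!-- Informative image: alt conveys the content, not "image of a chart" -->
    <img src="q3-sales.png" alt="Bar chart: sales rose 12% in Q3">
  </main>
</body>
```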

    Finally, test focus indicators by navigating with the Tab key. Every focusable element should have a visible indicator, making it clear which item currently holds the focus. If you’re unable to observe this yourself, ask a sighted colleague to assist.

    Keeping Track of Issues

    As you test, take detailed notes on any problems you encounter. Include information about the screen reader and browser used, along with the exact steps to reproduce the issue. This documentation will be critical for developers as they work to address the problems and ensure your site complies with ADA and WCAG standards.

    It’s also important to remember that different screen readers handle content in unique ways. Something that works flawlessly in NVDA might behave differently in JAWS or VoiceOver. Testing with multiple screen readers gives you a more complete understanding of your site’s accessibility.

    Understanding Results and Making Improvements

    Once you’ve tested your website’s navigation and interactive features, it’s time to turn those findings into meaningful changes. Start by organizing your notes on any issues you discovered and then use them to guide actionable improvements.

    Analyzing Test Results

    To make sense of your test results, group them into categories based on their severity and type:

    • Critical issues: These are major blockers, like inaccessible form submissions or broken navigation, that prevent users from completing essential tasks.
    • Moderate issues: These cause frustration but don’t completely stop functionality – think missing alt text on images that provide context.
    • Minor issues: These are smaller tweaks that improve usability, such as refining the reading order of content.

    Look for patterns in your data. For example, if multiple screen readers struggle with the same element, it’s likely a deeper issue. Say both NVDA and JAWS fail to announce a button’s purpose – that probably means the button lacks proper labeling in your HTML.

    Pay close attention to inconsistent behavior across different screen readers. While some variation is normal, major differences often signal coding problems. For instance, if a dropdown works in VoiceOver but not in JAWS, the issue could be related to ARIA implementation or keyboard event handling.

    Timing problems with dynamic content also deserve attention. If live regions fail to announce updates – or announce them too frequently – users might miss crucial information or feel overwhelmed by constant interruptions.

    These observations will serve as the foundation for the fixes you’ll implement.

    Fixing Issues and Retesting

    Once you’ve categorized the issues and identified patterns, it’s time to roll up your sleeves and start making changes. Tackle the most critical problems first, then move on to moderate and minor ones.

    HTML structure fixes are often the best starting point, as they can resolve multiple issues at once. Use proper heading hierarchies, landmark regions, and semantic elements to create a logical structure for screen readers. For example, wrapping your main navigation in a <nav> element and using consistent heading tags (<h1>, <h2>, <h3>) ensures compatibility across screen readers.
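    A common structural fix is replacing generic containers with semantic elements. For example (hypothetical before/after markup):

```html
<!-- Before: read as plain links with no way to jump to or past them -->
<div class="nav"><a href="/support">Support</a></div>

<!-- After: announced as a "navigation" landmark and listed in the
     screen reader's landmarks dialog -->
<nav aria-label="Primary">
  <ul>
    <li><a href="/support">Support</a></li>
    <li><a href="/contact">Contact</a></li>
  </ul>
</nav>
```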

    ARIA labels and descriptions can address many labeling issues. Add aria-label attributes to buttons and links that lack descriptive text. Use aria-describedby to link form fields with their help text or error messages. For more complex widgets, include ARIA states like aria-expanded for collapsible sections or aria-selected for menu items.
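    The three ARIA techniques just mentioned look like this in practice (IDs and text are illustrative):

```html
<!-- aria-label gives an icon-only button an accessible name -->
<button aria-label="Close dialog">×</button>

<!-- aria-describedby attaches help text to the field's announcement -->
<label for="password">Password</label>
<input id="password" type="password" aria-describedby="password-hint">
<p id="password-hint">Must be at least 12 characters.</p>

<!-- aria-expanded exposes a collapsible section's current state -->
<button aria-expanded="false" aria-controls="filter-panel">Filters</button>
<div id="filter-panel" hidden></div>
```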

    Focus management is critical for smooth navigation. Pay attention to tab order and visual indicators. Use the tabindex attribute carefully – avoid positive values that disrupt the natural flow. Apply clear CSS focus styles to highlight the active element. For modal dialogs or dropdown menus, trap focus within the component and return it to the triggering element when the interaction ends.
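    A minimal sketch of these focus techniques (the selector values and IDs are illustrative):

```html
<style>
  /* Visible indicator for keyboard focus without affecting mouse users */
  :focus-visible {
    outline: 3px solid #1a73e8;
    outline-offset: 2px;
  }
</style>

<!-- tabindex="0" joins the natural tab order; avoid positive values -->
<div role="button" tabindex="0">Custom control</div>

<!-- tabindex="-1" keeps an element out of the tab order but lets a script
     move focus to it, e.g. after a modal closes or results load -->
<h2 tabindex="-1" id="results-heading">Search results</h2>
```

    Note that a `<div>` with `role="button"` also needs a script to handle Enter and Space; where possible, a native `<button>` avoids that work entirely.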

    Form improvements often involve refining labels, grouping related fields, and ensuring accessible error handling. Each form control should have a label element or an aria-label attribute. Use <fieldset> and <legend> to group related fields. Implement live regions for error messages so they’re announced immediately when validation fails.
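    Putting those form fixes together, a sketch might look like this (field names and copy are illustrative; the validation script that fills the error region is assumed):

```html
<form>
  <fieldset>
    <legend>Shipping address</legend>
    <label for="street">Street</label>
    <input id="street" name="street" autocomplete="street-address" required>
  </fieldset>
  <!-- role="alert" is an assertive live region: error text injected here
       by the validation script is announced immediately -->
  <p id="form-errors" role="alert"></p>
  <button type="submit">Place order</button>
</form>
```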

    After making these adjustments, retest everything with the same screen readers and scenarios to ensure no new issues have popped up.

    Finally, validate your updates with real screen reader users. While technical testing can catch a lot, actual users often uncover usability challenges that automated tools miss. Their feedback is invaluable for confirming whether your changes truly improve the experience.

    Document your updates and establish a testing routine for future development. Accessibility isn’t a one-and-done task – it requires ongoing attention as your site evolves. Incorporate accessibility checks into your development process to prevent new issues from arising.

    Conclusion: Building Accessible Digital Experiences

    Screen reader testing lays the groundwork for creating digital experiences that work for everyone. It turns accessibility guidelines into practical improvements that genuinely benefit users who rely on assistive technologies.

    Key Takeaways

    Effective testing is all about being systematic. Start by selecting the right screen readers, creating controlled testing environments, and consistently evaluating navigation, interactivity, and accessibility.

    Consistency is critical when designing for accessibility. A website that behaves predictably across different screen readers offers a better experience than one packed with flashy features that function inconsistently. Prioritize a strong foundation with solid HTML structure, proper use of ARIA attributes, and a logical content flow before introducing complex interactions.

    If multiple screen readers struggle with a specific element, it often points to a deeper structural issue. These insights help you build more reliable and accessible interfaces from the start.

    It’s also important to recognize that screen reader users aren’t all the same. Some depend entirely on keyboard navigation, while others use touch gestures on mobile devices. Preferences vary – some users favor detailed announcements, while others prefer concise information. Testing with a variety of screen readers and gathering feedback from real users helps you account for this diversity.

    Use what you learn to refine and improve your testing process over time.

    Next Steps for Accessibility

    To keep accessibility at the forefront, integrate it into every stage of your development process. Don’t treat accessibility as an afterthought – it should be a continuous priority.

    Collaboration is essential. Designers need to think about screen reader users when creating wireframes and prototypes. Content creators should write with navigation and clarity in mind. Project managers must allocate time and resources for testing and adjustments to ensure accessibility.

    Tools like UXPin can help foster this collaborative approach. By allowing teams to create interactive, code-backed prototypes that incorporate accessibility considerations early on, UXPin ensures that products are built with accessibility in mind from the beginning. When designers work with real React components that include semantic structure and proper ARIA attributes, the final output becomes naturally more accessible.

    Regular evaluations are also vital to staying on top of accessibility. Technology evolves, content changes, and new standards emerge. What works today might need updates tomorrow. Schedule quarterly reviews for your most important pages and conduct full-site audits annually to maintain compliance and usability.

    Investing in thorough screen reader testing does more than meet accessibility requirements – it improves overall usability. Testing interactive elements often reveals issues that affect all users, not just those relying on assistive technologies. Clear navigation benefits everyone. Properly labeled forms reduce confusion for all visitors. As highlighted earlier, strong HTML, clear ARIA implementation, and collaborative design create better digital products.

    Start small by focusing on key pages and testing with one screen reader. Document your findings, address the issues, and gradually expand your testing efforts. Each testing cycle builds your skills and streamlines the process for the future.

    FAQs

    What are the main differences between screen readers like JAWS, NVDA, VoiceOver, and Narrator for website accessibility testing?

    JAWS stands out for its extensive customization options and is a go-to tool for professionals conducting detailed accessibility testing. However, this level of functionality comes with a higher price tag. On the other hand, NVDA offers a free, open-source alternative with excellent support for braille displays and OCR. While it doesn’t match JAWS in customization features, it remains a strong choice for many users.

    VoiceOver, exclusive to Apple devices, works seamlessly within the Apple ecosystem. Its intuitive tools, like rotor navigation, make it user-friendly, but its functionality is confined to macOS and iOS platforms. Meanwhile, Narrator, a free screen reader built into Windows, is more basic. It’s a handy tool for quick accessibility checks but isn’t designed for thorough testing.

    For detailed audits and comprehensive accessibility testing, JAWS and NVDA are the top picks. VoiceOver and Narrator, however, excel in simpler tasks or when working within their respective ecosystems.

    How can I make sure screen readers announce dynamic content updates on my website?

    To make sure screen readers properly announce updates to dynamic content, implement ARIA live regions with the right settings. For updates that aren’t time-sensitive, set the region to polite. This allows the screen reader to wait until it’s done with its current task before announcing the change. For updates that need immediate attention, set it to assertive so users are notified right away.

    It’s also important to include clear status messages when content changes. Managing focus effectively can help direct users to the updated content. Adding descriptive labels or notifications ensures these changes are communicated in a way that’s easy to understand, improving your website’s accessibility for everyone.

    What are the best practices for setting up a reliable screen reader testing environment?

    To achieve reliable and consistent results in screen reader testing, it’s important to use a variety of tools like VoiceOver, NVDA, and TalkBack. This approach helps simulate different user scenarios. Always test on real devices and operating systems that mirror the environments your users are likely to interact with. Don’t forget to include proper keyboard navigation and focus management in your testing process – these are critical for accessibility.

    Another key factor is using semantic HTML and ensuring all elements are labeled correctly. This allows screen readers to interpret and relay content accurately to users. By incorporating these practices, you can build a testing environment that prioritizes accessibility and improves the overall experience for all users.
