Accessibility Testing: Why Does It Matter?
If you are a public sector organization or part of the healthcare industry, you’re probably well aware of accessibility testing and why it’s important for the business. If you have never heard the term but want your company’s product or service to be usable by as many people as possible, then you need to design it with people with disabilities in mind.
Do you know how complex accessibility testing is? How much time does it take a QA to perform? How many categories does accessibility testing cover?
In this blog post, you’ll find the answers to these questions and other important information about accessibility testing.
How Is Accessibility Testing Performed?
Depending on your industry, where your organization is based, and other factors, your company may need to audit against a specific accessibility regulation or define specific requirements for a product to be audited against.
Common regulatory standards include Section 508 for U.S. federal government websites and the Americans with Disabilities Act (ADA) for private entities and state and local governments. The Web Content Accessibility Guidelines (WCAG) are a broader set of accessibility guidelines developed with the goal of creating a shared international standard. WCAG 2.0 and WCAG 2.1 are the versions most commonly used in accessibility testing.
In addition to those standards, there can be many other specific requirements for users with varied disabilities such as limited hearing or color sensitivity.
If your software is extremely complex, a subset of screens or pages may be selected for the accessibility audit. For example, an admin page with limited internal users may not be required to meet all accessibility criteria. A public-facing product page, on the other hand, should always meet all of the requirements of the selected guidelines and regulatory standards.
Once you’ve identified the necessary requirements and regulations that apply to your product and the pages that need to be tested, you need to build an accessibility checklist. The accessibility checklist should cover all five major categories of disabilities: visual, speech, hearing, motor, and cognitive.
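A checklist like this can be kept as structured data so that coverage across the five categories is easy to verify. The sketch below is only an illustration; the individual checks are example items, not a complete checklist.

```python
# Illustrative accessibility checklist grouped by the five major
# disability categories. The checks shown are examples, not exhaustive.
CHECKLIST = {
    "visual": [
        "All images have meaningful alt text",
        "Text contrast meets WCAG AA (4.5:1 for normal text)",
    ],
    "speech": [
        "No feature requires voice input as the only interaction method",
    ],
    "hearing": [
        "Videos provide captions and transcripts",
    ],
    "motor": [
        "All controls are reachable and operable by keyboard alone",
    ],
    "cognitive": [
        "Error messages are clear and suggest how to recover",
    ],
}


def coverage_gaps(checklist):
    """Return the disability categories that have no checks yet."""
    required = {"visual", "speech", "hearing", "motor", "cognitive"}
    covered = {cat for cat, checks in checklist.items() if checks}
    return sorted(required - covered)


print(coverage_gaps(CHECKLIST))  # -> []
```

Keeping the checklist as data rather than prose makes it trivial to confirm that no category was skipped before an audit begins.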
Accessibility Tools and Techniques
Accessibility testing can be manual or automated. Automated testing is performed with accessibility testing tools that check the application for violations. These tools vary across platforms and should be chosen carefully depending on the application’s needs, budget, and other factors. Manual accessibility testing is carried out by humans who use the application and check for defects. At MentorMate, we’ve also introduced people with disabilities to accessibility testing, as they have the best perspective on the software.
- Manual testing/human checking — Manual execution of a predefined checklist.
- Testing with assistive technology — Tests performed with assistive software such as JAWS and NVDA (screen readers), ZoomText (screen magnification), Dragon NaturallySpeaking (speech recognition), and special keyboards.
- Testing with web accessibility evaluation tools — Tests performed with automation tools to validate HTML, CSS, and other components.
- Accessibility testing by blind users — Tests performed by a QA with a visual disability, who executes the whole workflow as an end user of the system and reports defects.
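As a minimal illustration of the kind of rule an automated evaluation tool runs, the sketch below uses Python’s standard `html.parser` to flag `<img>` elements that lack an `alt` attribute (a common WCAG 1.1.1 failure). Real evaluation tools run hundreds of such rules; this is a simplified stand-in, not a production checker.

```python
from html.parser import HTMLParser


class MissingAltChecker(HTMLParser):
    """Flag <img> tags that lack an alt attribute (WCAG 1.1.1)."""

    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            # Record the image source so the defect is easy to locate.
            self.violations.append(attrs.get("src", "<unknown>"))


checker = MissingAltChecker()
checker.feed('<img src="logo.png" alt="Company logo"><img src="chart.png">')
print(checker.violations)  # -> ['chart.png']
```

Even a tiny rule like this catches defects that are invisible to sighted reviewers but block screen reader users completely.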
Accessibility Testing on Mobile Devices
Accessibility testing helps make mobile applications usable by people of all abilities or disabilities. It ensures that everyone is treated equally. When considering mobile users, don’t just limit your approach to users who are blind. Other things to consider are:
- Partial sight (especially when using a smaller screen)
- Color blindness
- Dyslexia and other cognitive issues that can affect the readability of a site
- Arthritis and other finger joint motor issues
- A combination of these issues, which makes addressing accessibility even harder
Our team evaluates mobile applications based on the accessibility guidelines for Section 508 and WCAG 2.1. The testing is relevant for both native and hybrid apps, and also applies to mobile and responsive websites.
We use a variety of tools to check for accessibility issues related to visual defects. The actual user testing is executed by a person with disabilities using assistive technologies, after which we analyze the barriers they faced. Our goal is to improve the user experience and ensure that people with different disabilities can use our software. Some of the tools we use include:
Accessibility Scanner is a tool that suggests accessibility improvements for apps. Suggestions include: enlarging small touch targets, increasing contrast, and providing content descriptions so that individuals with accessibility needs can more easily use your app.
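Contrast suggestions like these follow the WCAG contrast-ratio formula, which can be computed directly from the two colors. The sketch below implements the WCAG 2.x definitions of relative luminance and contrast ratio; WCAG AA requires at least 4.5:1 for normal body text.

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB color given as 0-255 ints."""
    def channel(c):
        c = c / 255
        # Linearize the sRGB-encoded channel value.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b


def contrast_ratio(fg, bg):
    """Contrast ratio between two colors (1:1 up to 21:1)."""
    lighter = max(relative_luminance(fg), relative_luminance(bg))
    darker = min(relative_luminance(fg), relative_luminance(bg))
    return (lighter + 0.05) / (darker + 0.05)


# Black text on a white background gives the maximum ratio of 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # -> 21.0
```

Running this over a design’s color palette is a quick way to catch contrast failures before any manual testing starts.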
Screen readers are programs that allow blind and low-vision individuals to read the content on a computer screen with a voice synthesizer or braille display. The screen reader is the interface between the user, the operating system, and its applications.
Another approach to mobile accessibility testing is gesture navigation, which we combine with a screen reader during testing. There are three types of gestures based on the user’s needs: one-finger, two-finger, and three-finger gestures. The basic navigation actions are:
- Touch the screen to select an item
- Double-tap to activate an item
- Swipe left or right to select the previous or next item
- Swipe up or down to navigate and select a rotor option
- Swipe up to read the selected area
- Swipe down to read the selected item
- Scrub back and forth to close pop-ups
- Pinch in or out to select or deselect
- Rotate to open the web rotor and select rotor items
- Swipe left or right to navigate to a different screen
- Swipe up or down to scroll the screen
WCAG Accessibility Checklist
The WCAG Accessibility Checklist app is a powerful accessibility checker and reporting tool, offering a clear To-Do list. It delivers reminders and checklists to help you achieve the three tiers of accessibility compliance: Levels A, AA, and AAA. This is a great resource to use if you are testing, developing, or designing apps and need to understand how accessible your content is for people of different abilities.
Android and iOS both include screen magnifiers within their accessibility options. Screen magnification works by zooming in on either the whole screen or on specific sections of the screen. For example, the tool can enlarge form fields, allowing a user with disabilities to more easily fill them. Some screen magnifiers can enlarge text, icons, and other graphics up to 64 times. They can also enlarge and enhance mouse and text cursors to make them easier to see and track, and even sharpen edges, increase contrast, and change colors to make things easier to see.
By using different keyboards, we can test if a user with motor disabilities can reach all items on the screen; focus an element; navigate through different pages and sections, modals, pop-ups, forms, and panels; or directly skip main content. It’s essentially used to find issues with missing keyboard indicators and non-focusable or non-keyboard operable controls.
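A frequent source of the issues described above is a clickable element that the browser never puts in the keyboard tab order, such as a `<div>` with a click handler but no `tabindex`. The sketch below is a simplified static check for that pattern (it ignores details like `<a>` requiring an `href` to be focusable); the element names and handler attributes are illustrative.

```python
from html.parser import HTMLParser

# Elements browsers make keyboard-focusable by default (simplified).
NATIVELY_FOCUSABLE = {"a", "button", "input", "select", "textarea"}


class KeyboardOperabilityChecker(HTMLParser):
    """Flag click handlers on elements a keyboard user cannot reach."""

    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        clickable = "onclick" in attrs
        focusable = tag in NATIVELY_FOCUSABLE or "tabindex" in attrs
        if clickable and not focusable:
            self.violations.append(tag)


checker = KeyboardOperabilityChecker()
checker.feed('<button onclick="save()">Save</button>'
             '<div onclick="openMenu()">Menu</div>')
print(checker.violations)  # -> ['div']
```

A static check like this complements, but never replaces, hands-on testing with real keyboards and switch devices.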
The Audit Report: How Are Recommendations Documented?
The audit report is an essential part of accessibility testing. It describes the conformance of the website/mobile application with the desired standards. Based on the evaluation, you can determine whether the website/mobile application meets or does not meet the guidelines for the desired level.
According to the World Wide Web Consortium (W3C) requirements, a report created at the end of the audit should follow this format:
- Executive Summary — A brief overview of the website’s conformance with the accessibility criteria, stating whether the website meets, does not meet, or is close to meeting the requirements.
- Scope of Review — Include the name of the website or app and its purpose, the site’s base URL, URLs included in the review, URLs excluded from the review, the exact date on which the review was conducted, and the natural language used.
- Reviewers — Include the names of everyone who has reviewed the website, or the organization’s name.
- Review Process — Define the target users and determine the overall scope. Identify the level for which the conformance will be tested, e.g., WCAG 2.1 Level A, AA, or AAA. List the evaluation and validation tools used as well as the versions thereof. Also, include a description of the manual reviews used (usability testing of accessibility features).
- Results and Recommended Actions — A summary of the review results, detailed results, and points for improvement. Here, you must provide information regarding whether the website/mobile application meets the criteria, what is urgent to fix, and what can be fixed later. Also, based on the guidelines, detailed information should be provided with links for success criteria, techniques for all non-conformant items, and recommendations for addressing the items above.
- References — Provide a list of links that were used for references.
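To keep every audit consistent, the section list above can be captured in a simple template. The sketch below generates an empty markdown skeleton with those sections; it is only an illustration of the format, not a W3C tool, and the product and standard names are placeholders.

```python
# The report sections recommended by the W3C, in order.
SECTIONS = [
    "Executive Summary",
    "Scope of Review",
    "Reviewers",
    "Review Process",
    "Results and Recommended Actions",
    "References",
]


def report_skeleton(product, standard):
    """Build an empty audit report with the W3C-recommended sections."""
    lines = [f"# Accessibility Audit: {product} ({standard})", ""]
    for section in SECTIONS:
        lines += [f"## {section}", "", "TODO", ""]
    return "\n".join(lines)


print(report_skeleton("Example App", "WCAG 2.1 AA").splitlines()[0])
# -> # Accessibility Audit: Example App (WCAG 2.1 AA)
```

Generating the skeleton up front ensures no reviewer forgets a required section when the audit deadline approaches.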
Accessibility Testing Performed by a Person with Disabilities: What Is the Difference?
MentorMate has incorporated users with disabilities into our accessibility testing, as they can provide the necessary perspective on issues they most frequently encounter. Even non-technical testers can quickly point out issues that experienced quality assurance specialists might miss if they don’t have an understanding of the challenges that people with disabilities face.
At MentorMate, we’ve unified our processes to ensure that both QAs and non-technical testers have the same understanding of what accessibility testing is and how it is done. We’ve provided them with both theoretical and practical knowledge on how we test, track the defects, and create reports for our clients.
Our colleagues with disabilities have demonstrated how they navigate through different websites while we observe the process, helping us incorporate new techniques into our testing. We’ve taken two approaches: the first, a person with vision impairments testing a website they aren’t familiar with, and later testing a website they had detailed information about (pages and elements on them).
Each method provides different benefits.
When testing an unfamiliar website, you can quickly understand how user-friendly it is and whether the tester can complete any given task without being guided — relying only on the screen reader. With this approach you can uncover many overlooked issues, inaccurate page structure, and incorrect navigation. We also ask testers to “think aloud” and describe what they are doing while they are doing it. This might sound like, “I’m trying to navigate with my keyboard to the careers link, but the tab order seems off.” A researcher observes and asks questions to see whether the interface meets the tester’s expectations or where they experience difficulties. They might say, “You mentioned that the tab order seems off; what were you expecting to happen?” The benefits of this approach include getting the perspective of real people with disabilities, getting a fresh perspective, getting the perspective of non-technical people (if you recruit specifically for that), and getting a non-biased perspective.
On the other hand, when testing a website knowing what it is supposed to contain, you can discover that some parts of the system are simply unreachable for people with vision impairments and others are difficult to navigate. In addition to uncovering many hidden accessibility defects, a tester who is blind can also provide sharper suggestions for improvements and clearer explanations why certain issues prevent customers from utilizing your website or app.
This two-way learning process provides QAs with much deeper insights into accessibility testing, teaching new techniques that we now include in our training. Furthermore, our colleagues with disabilities have significantly improved their testing skills by adopting QA defect tracking and reporting procedures.
Training Non-Technical Testers: How to Report the Defects?
The first challenge for anyone who is not a QA is understanding the processes and the techniques used for testing, and how defects are logged. Achieving that can require multiple training sessions, walking new testers with disabilities through activities both in theory and in practice.
One of the struggles we encountered was teaching our colleagues specifically how to report the defects they found — not just describing them as plain text but following a specific structure with clear and concise explanations of the issue and how it could be fixed. It’s important for all testers to give improvement suggestions and also to describe the exact impact bugs had on their user experience.
The project we used for the initial training had so many accessibility issues that our colleagues with vision impairments couldn’t actually access most of the system. This led to the idea of creating a document describing the content of each page, so that they could have a better understanding of the platform. When using Excel or Google Sheets to track defects, we also recommend documenting keyboard shortcuts for testers, to make it easier to navigate those documents.
The addition of these unique perspectives has been an invaluable benefit to our testing team. Without them, we would not have the understanding and skills required to perform thorough accessibility testing and improve the quality of our products for all users.
Accessibility testing is important for businesses because it not only makes their websites accessible to more users; failing to do it right can also have legal consequences. For instance, a person with disabilities could sue a company whose website doesn’t allow equal access to and usage of its features. The number of such website accessibility lawsuits has been increasing in recent years, and a lawsuit of this type can lead to reputation damage and financial loss for the company, regardless of the outcome.
This blog post was prepared by Yoana Lalova, Velislava Getsova, and Melis Meisut.