Automation and Manual Accessibility Testing: When To Use Which

Automation has a lot to offer accessibility testing, as it does other forms of software testing, and the current WCAG (Web Content Accessibility Guidelines) provide guidance on which criteria can be tested via automated versus manual methods. Automation will have even more to offer in the not-so-distant future as artificial intelligence (AI) plays a bigger role in automated testing tools. However, though AI will continue to carve its name into our world, it currently has limitations in how it aids automated accessibility testing. And, of course, nothing will replace the need for manual testing as a complement to automation efforts in the near future.

In this blog, we take a look at what automation and manual testing are, what they accomplish in accessibility testing, examples of when to use each and the promise of AI in the future mix.

Automation versus manual testing

What is manual software testing?

Manual testing involves human testers who execute test cases and interact directly with the software under test. They observe how the software responds, then identify and capture issues that are fed back into the software development lifecycle to be evaluated and corrected. To do this, testers simulate the actions end users would ultimately take.

Sometimes manual testing is scripted, where testers follow a specific way of interacting with the software by adhering to step-by-step directions. At other times, testers may do exploratory testing, evaluating the usability and flow of an app based more on intuition, and even whimsy: What happens if I click the back arrow after entering my credit card number on a checkout form? Manual testing is perfectly suited to discovering issues that cannot easily be automated. Complex scenarios where a user has several choices to make within the app, certain visual aspects, and flows where creative thinking shapes the user experience are prime examples.

What is automated software testing?

Automated software testing uses specialized tools to execute specific test cases without human involvement. It's well suited to repetitive tasks, regression testing and scenarios with a large number of test cases. It can accelerate the testing process by providing quick feedback on needed code changes and, when used appropriately, can reduce the likelihood of human error. Automated testing is very useful in agile and continuous integration/continuous deployment (CI/CD) environments, where frequent code changes and releases demand fast and accurate testing.

When is it best to automate software testing?

The decision to automate regular functional testing depends on the stability of test cases. When organizations think about automating, we rank test cases by complexity and maturity. If you have a test case that consistently passes, that you know is relevant, and that you have the resources to automate, it's likely a great candidate. Registering an account on your website is one example: that becomes a key part of a smoke or regression test that can be automated, as in the sketch below.
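As a rough illustration, here is what that registration flow might look like as an automated smoke test using Selenium WebDriver in TypeScript. The URL, field names and success selector are hypothetical placeholders; a real test would use whatever your own site exposes.

```typescript
import { Builder, By, until } from "selenium-webdriver";

// Minimal registration smoke test. All selectors and the URL below are
// placeholders for your own site's registration flow.
async function registrationSmokeTest(): Promise<void> {
  const driver = await new Builder().forBrowser("chrome").build();
  try {
    await driver.get("https://example.com/register");
    await driver.findElement(By.name("email")).sendKeys("smoke-test@example.com");
    await driver.findElement(By.name("password")).sendKeys("not-a-real-password-1!");
    await driver.findElement(By.css("button[type='submit']")).click();
    // Fail the test if a confirmation element never appears.
    await driver.wait(until.elementLocated(By.css(".registration-success")), 10000);
    console.log("Registration smoke test passed");
  } finally {
    await driver.quit();
  }
}

registrationSmokeTest().catch((err) => {
  console.error("Registration smoke test failed:", err);
  process.exit(1);
});
```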

Contrast this with running promotions, where elements of your website are changing frequently. In this scenario, it would likely be better to test manually. Of course, much of this depends on your organization. Many use a DevOps environment; they build automation test scripts as they develop. During sprints, they’ll have someone developing code for new functionality and then someone else working on the code for the test. In many cases, the same developer creates both code and automated testing. Maturity in automated functional testing plays a role here, and if that foundation is in place, automated accessibility testing may be the logical next step.

Automated testing for accessibility

There are many automated testing tools for standard functional testing, but a smaller, distinct set of tools for automated accessibility testing. Some popular tools include:

  • axe – a browser extension and command-line tool that automatically detects and highlights accessibility issues on web pages.

  • Google Lighthouse – an open-source, automated tool for improving the quality of web pages, including accessibility, overall performance and SEO.

  • WAVE (Web Accessibility Evaluation Tool) – a tool suite that helps authors make their web content more accessible.

  • Pa11y – a command-line interface which loads web pages and highlights any accessibility issues it finds.

These accessibility tools are add-ons used in concert with the frameworks behind your app development, such as Selenium or Appium, so the two must work together. You also need to test on real devices; this is where device lab tools like BrowserStack or Sauce Labs come in, because each device has a different operating system, compatible screen readers and so on. A sketch of one such integration follows.
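As a minimal sketch of that pairing, assuming the @axe-core/webdriverjs package (one of several ways to wire axe into Selenium) and a Chrome WebDriver, an axe scan can run inside a Selenium session like this; the URL is a placeholder:

```typescript
import { Builder } from "selenium-webdriver";
import AxeBuilder from "@axe-core/webdriverjs";

// Drive the page with Selenium, then hand the session to axe for a scan.
async function axeScan(url: string) {
  const driver = await new Builder().forBrowser("chrome").build();
  try {
    await driver.get(url);
    const results = await new AxeBuilder(driver).analyze();
    for (const violation of results.violations) {
      console.log(
        `${violation.id} [${violation.impact}]: ${violation.help}` +
          ` (${violation.nodes.length} affected nodes)`
      );
    }
    return results.violations;
  } finally {
    await driver.quit();
  }
}

axeScan("https://example.com").catch(console.error);
```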

What does a typical automated accessibility scenario look like?

The typical scenario may look like this: An organization uses Google Lighthouse and runs the test. Lighthouse returns results and may make remediation suggestions. Though the tool is automated, the process is still somewhat manual: someone has to run the test, pass on the results and export them into Jira.
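As a hedged sketch of that semi-manual step, Lighthouse's Node API can be restricted to the accessibility category (the URL here is a placeholder); turning the failed audits into Jira tickets would still be a separate, human-driven task:

```typescript
import lighthouse from "lighthouse";
import * as chromeLauncher from "chrome-launcher";

// Run only Lighthouse's accessibility category against a single page.
async function accessibilityAudit(url: string) {
  const chrome = await chromeLauncher.launch({ chromeFlags: ["--headless"] });
  try {
    const result = await lighthouse(url, {
      port: chrome.port,
      onlyCategories: ["accessibility"],
      output: "json",
    });
    const lhr = result!.lhr;
    const score = (lhr.categories.accessibility.score ?? 0) * 100;
    console.log(`Accessibility score for ${url}: ${score}`);
    // Failed audits are the candidates for tickets in a tracker like Jira.
    for (const ref of lhr.categories.accessibility.auditRefs) {
      const audit = lhr.audits[ref.id];
      if (audit.score === 0) console.log(`FAIL: ${audit.title}`);
    }
  } finally {
    await chrome.kill();
  }
}

accessibilityAudit("https://example.com").catch(console.error);
```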

In a more mature practice, accessibility checks live inside the CI/CD pipeline itself: when you release code through a toolset such as Jenkins, a test runs against the accessibility framework and its output becomes part of your test results. If there are issues, they go into Jira. A mature organization would have something like this built into its CI/CD pipeline. In our work at Applause, we sometimes run an automation tool for quick results, then have our auditors go through the most critical checkpoints and analyze the user flows that have been developed. They then focus on critical and high-priority issues that get fed into workflows.
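One hedged sketch of such a pipeline stage uses Pa11y's Node API as a build gate: the script exits non-zero when issues are found, which a Jenkins stage would treat as a failed build. The URLs and the WCAG 2 AA standard are placeholder choices, and filing issues into Jira would be a further integration step.

```typescript
import pa11y from "pa11y";

// Hypothetical list of URLs covering the critical user flows.
const pages = ["https://example.com/", "https://example.com/checkout"];

async function accessibilityGate(): Promise<void> {
  let failures = 0;
  for (const page of pages) {
    const result = await pa11y(page, { standard: "WCAG2AA" });
    for (const issue of result.issues) {
      console.error(`${page}: ${issue.code}: ${issue.message}`);
    }
    failures += result.issues.length;
  }
  // A non-zero exit code fails the CI stage, blocking the release.
  if (failures > 0) process.exit(1);
}

accessibilityGate().catch((err) => {
  console.error(err);
  process.exit(1);
});
```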

While automated testing excels in certain areas, it is less effective than manual testing at finding certain types of issues, and it requires a significant initial investment in script development and maintenance. The choice between automated and manual testing often depends on the specific needs of the project, the nature of the software and the available resources.

Manual testing for accessibility

As a necessary complement to automated accessibility testing, manual accessibility testing is key in several areas:

  • Exploratory testing. Manual testers can readily find potential accessibility issues that automated tools may not be built to identify.

  • In-sprint testing. Team members test for critical checkpoints to remove blockers. This is very useful in Agile development environments where regular changes are part of the landscape.

  • Assessments/audits. Testers check for full coverage of WCAG checkpoints.

  • User experience. From an accessibility perspective, manual testing is required to assess the overall usability and user experience of an application. Here, consulting persons with disabilities (PWD) is critical. Though non-disabled testers may use the same tools as PWD, they do not bring the actual, first-hand experiences that PWD bring when using products.

  • Complex interactions / dynamic / subjective content. Manual testing is most appropriate for complex user interactions or dynamic content that current automated tools may struggle to assess accurately. Certain aspects of accessibility, such as color contrast or other visual elements, involve subjective judgment. Human testers are very adept at navigating real-world, complex scenarios such as these based on their life experience.

  • Assessment of multimedia content. Content such as videos, audio, and multimedia elements requires manual testing to ensure that alternative text, captions and other accessibility features are present and relevant to users with disabilities.

  • Early development stages. Frequent codebase changes occur in the early development stages and manual testing can adapt and respond well to evolving accessibility requirements. It enables fast identification and resolution of issues before they become deeply embedded in the code.

Accessibility has always been and will continue to be a user-centric and iterative pursuit. Considering this, automation as a primary philosophy falls short because users are so different from each other. Persons with disabilities have very distinct needs, so involving them in testing is a much better strategy than relying on automation alone. Even if accurate automation were possible here, tapping into the rich and varied human experience can yield deep insights and innovations that would otherwise be unavailable.

Crowdtesting in accessibility testing

Manual testing is a big effort; most internal development organizations lack the knowledge, experience, bandwidth — or all of the above — to do it. It's expensive to staff people primarily to do periodic assessments. And even if you have team members who can test for accessibility, they are likely working at or beyond capacity, and don't want to do assessments on top of their daily workload.

Blending automated accessibility testing and manual testing is key, and having an expandable team — a benefit that the crowdtesting model is uniquely positioned to offer — is something I've seen prove transformative for organizations of all sizes around the world. There are many benefits to crowdtesting.

A crowdtesting partner can scale up and down as needed. At Applause, we have auditors and experts who go beyond just finding defects; they also give remediation suggestions and work with internal developers to make code more accessible. We also have unequaled device coverage. Take the streaming media sector, for example: Applause has auditors in 22 countries, providing global scale and coverage of the OTT devices in market today.

Crowdtesting addresses another key area: accessibility/localization issues. Often, accessibility testing is done on English-language sites. But in Europe, for example, the content may be developed in French. You must have auditors with native command of French to identify and remediate accessibility issues. Here again, manual testing is key, as this cultural perspective and local context cannot be applied by an automated tool. For example, is the alt text meaningful as it relates to the image from a French speaker's perspective? Could a non-French-speaking auditor confirm that an error message written in French conveys clear meaning?

In addition to the previously noted bandwidth issues, many organizations lack accessibility expertise. Applause has accessibility testing experts who triage bugs, prioritize them and consult accordingly. They may set up office hours for developers where, for example, they help them understand why an ARIA label isn't defined properly, or untangle issues with specific keyboard navigation functionality. Our auditors can set up office hours for anything that is needed, such as:

  • a consultation with PWD for feedback on designs and prototypes

  • help explaining requirements for inclusive design studies via our inclusive design experts

  • reviewing wireframes or prototypes to ensure the right annotations are created when handing off requirements.

In addition, and in a key area too often overlooked by internal organizations, our accessibility and inclusive design experts can help define KPIs. This is important for both automated and manual testing. Whether using manual or automated tests, you want to track what is discovered so that when you do internal enablement, you know where to focus. For example, are the majority of issues happening with screen readers due to missing alt text? The importance of developing and monitoring KPIs can't be overemphasized.
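As a minimal sketch of that kind of KPI tracking, assuming you can export issues tagged with a rule identifier (the field names here are hypothetical), counting issues per rule shows where to focus enablement:

```typescript
// Hypothetical shape of an exported issue, e.g. pulled from Jira or a scan report.
interface A11yIssue {
  ruleId: string; // e.g. "image-alt" for missing alt text
  page: string;
}

// Count issues per rule, most frequent first, to guide enablement focus.
function ruleBreakdown(issues: A11yIssue[]): [string, number][] {
  const counts = new Map<string, number>();
  for (const issue of issues) {
    counts.set(issue.ruleId, (counts.get(issue.ruleId) ?? 0) + 1);
  }
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}

// Example: if "image-alt" dominates, missing alt text is the KPI to watch.
const sample: A11yIssue[] = [
  { ruleId: "image-alt", page: "/home" },
  { ruleId: "image-alt", page: "/products" },
  { ruleId: "color-contrast", page: "/home" },
];
console.log(ruleBreakdown(sample)); // [["image-alt", 2], ["color-contrast", 1]]
```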

AI’s current and future role in accessibility

Assistive technologies are making life better each day for people with disabilities. Beyond common tools such as screen readers or facial movement recognition aids, persons with disabilities can use mobile phones with AI technologies to identify people and objects in a room. AI is well poised to contribute substantially to accessibility in the coming years.

Still, most current automated accessibility tools rely on strict rulesets and lack the intelligence to make sense of certain types of content. A very common example: automated tools can identify whether alt text is present, but they cannot tell whether the text is actually useful to the user. Even if they could determine some degree of usefulness, at present a human tester is much more adept at writing highly detailed alt text that tells the complete story of an image. It is likely, and probably inevitable, that AI tools will come to compare web images against large datasets of similar images, glean alt text inaccuracies and even suggest more appropriate text. AI holds great promise, but it will be a long time before AI can fully check the accessibility of digital properties.
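The presence check itself is trivial to automate. Here is a minimal sketch of the kind of rule such tools apply; it can flag images with no alt attribute at all, but it says nothing about whether existing alt text describes the image:

```typescript
// A rules-based check of the kind automated tools run in the browser.
// Note: alt="" is a deliberate signal for decorative images, so only a
// missing attribute is flagged; judging the quality of alt text that is
// present still takes a human (or, someday, a far more capable AI).
function findImagesMissingAlt(doc: Document): HTMLImageElement[] {
  return Array.from(doc.querySelectorAll<HTMLImageElement>("img")).filter(
    (img) => !img.hasAttribute("alt")
  );
}

// Example, run in a browser context:
// findImagesMissingAlt(document).forEach((img) => console.warn(img.src));
```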

The more tools the better

Just as when building a home — where having the right tool for a specific job makes all the difference — so it is with accessibility testing. It’s best to know what tools and approaches exist, and of them, which are appropriate for accessibility testing given the maturity, bandwidth and skill sets you have on your team.

For most software development organizations, using automated accessibility testing tools along with manual testing makes the most sense. But there are also times, such as early in a software's development, when manual testing alone may be the best choice. Understanding your organization's strategies, goals, resources and willingness to seek assistance from outside is a great place to start on the road toward optimizing accessibility testing. Of course, Applause is happy to help you progress your accessibility efforts and effectiveness, no matter where you are in your journey.
