A Conclusion to My Internship
When I began this internship three months ago, neither my mentor nor I had a clear picture of what I would produce, or of how the goals of the project would change as time progressed.
The high-level goal of the internship was clear: to automate web accessibility testing. How best to do this, however, was not.
The first few weeks, as well as the weeks prior, were mostly spent on research.
I first needed to understand the problem domain, and then understand the tools available, as well as how to use them.
I had done some Python programming before, but my work on this project was a side of Python I had not delved into.
I had never used pytest or Selenium, and I knew nothing about Python packaging (or accessibility, for that matter).
Prior to the start of the internship, I took a Udacity course on web accessibility and wrote some simple tests using Selenium and Python.
Most of my work on this project was spent writing the axe-selenium-python package, and figuring out how best to integrate accessibility tests into existing test suites.
Writing a python package was actually the stretch goal of my internship.
Completing an alpha-stage version took a lot less time than I anticipated.
Once I completed that, however, the number of questions and problems to solve quickly ballooned into a project that could span at least a year.
Given that I didn't have a year (and actually, by that point, had six weeks left), I focused on only two or three goals.
I wanted to produce something that other people could use, and would want to use, and chose the tasks that would best facilitate that.
This came down to three primary goals:
- Enable users to easily include and customize accessibility rule tests.
- Implement reporting that gives usable feedback on accessibility violations.
- Provide sufficient documentation.
The Final Stretch
Unfortunately, I was not able to complete the above tasks as well as I would have liked.
Integrate Accessibility Tests
I wanted to allow users to easily customize and generate individual tests for each accessibility rule, and to allow marking individual tests as expected failures.
This has multiple benefits. Individual tests would allow a user to quickly see what accessibility rules are failing, and provide a concise report on that rule.
Without individual tests, there would be a single test that fails if any accessibility violations are found, and the details would be provided in a single report.
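To make the idea concrete, here is a hypothetical sketch of how per-rule results could be pulled out of a single axe-core run, so that each failing rule could drive its own generated test (for example via pytest.mark.parametrize). The helper and sample data below are illustrative only, not part of axe-selenium-python:

```python
# Hypothetical helper: split one axe-core results dict into per-rule entries,
# so each violated rule could become its own concise test and report.

def results_by_rule(results):
    """Map each violated rule id to the nodes that triggered it."""
    return {v["id"]: v.get("nodes", []) for v in results.get("violations", [])}

# Sample results in an axe-core-like shape (illustrative only)
results = {
    "violations": [
        {"id": "color-contrast", "nodes": [{"target": ["#nav a"]}]},
        {"id": "image-alt", "nodes": [{"target": ["img.logo"]}]},
    ]
}

per_rule = results_by_rule(results)
failing_rules = sorted(per_rule)  # each entry could parametrize one test
```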
I did, however, enable users to customize the accessibility test by setting the context and options parameters, features included in the axe-core API, and also to filter violations by impact level.
This means that users can check only for critical violations, only for violations rated serious or higher, and so on, for each impact level that axe-core uses to classify violations.
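As a rough illustration of how such a filter might work, the axe-core impact levels can be treated as an ordered scale. The impact names below are axe-core's real classification; the helper function and sample data are hypothetical, not the package's actual API:

```python
# Sketch of filtering axe-core violations by impact level.
# "minor" < "moderate" < "serious" < "critical" is axe-core's ordering;
# the filter itself is illustrative only.

IMPACT_ORDER = ["minor", "moderate", "serious", "critical"]

def filter_by_impact(violations, min_impact):
    """Keep only violations at or above the given impact level."""
    threshold = IMPACT_ORDER.index(min_impact)
    return [v for v in violations if IMPACT_ORDER.index(v["impact"]) >= threshold]

# Sample results in an axe-core-like shape (illustrative only)
violations = [
    {"id": "region", "impact": "moderate"},
    {"id": "color-contrast", "impact": "serious"},
    {"id": "image-alt", "impact": "critical"},
]

critical_only = filter_by_impact(violations, "critical")
serious_and_up = filter_by_impact(violations, "serious")
```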
Usable Reporting
While there are many improvements I would like to make to the reporting, I did produce a usable report feature.
It closely resembles the report feature in axe-selenium-java.
I have gold-plating goals for this feature as well. I would like to create a pytest plugin, similar to pytest-html, that creates a single HTML accessibility report for each test suite or job.
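As a rough sketch of the kind of output I mean (hypothetical formatting code, not the package's actual report implementation), a plain-text report might be assembled like this:

```python
# Hypothetical report formatter: turn axe-core-style violations into a short,
# human-readable summary. Field names follow axe-core's results shape, but the
# formatting itself is illustrative only.

def format_report(violations):
    lines = []
    for v in violations:
        lines.append(f"{v['impact'].upper()}: {v['id']} - {v['help']}")
        for node in v.get("nodes", []):
            lines.append("  affected: " + ", ".join(node["target"]))
    return "\n".join(lines)

violations = [
    {
        "id": "image-alt",
        "impact": "critical",
        "help": "Images must have alternate text",
        "nodes": [{"target": ["img.logo"]}],
    }
]

report = format_report(violations)
```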
Sufficient Documentation
I wanted to ensure that I didn't complete this project without documentation, or with documentation that wasn't helpful.
I know from experience how frustrating it is to find software that you want to use, only to discover that its documentation doesn't provide enough information on how to use it.
There is a great deal that could be added, but I believe it is sufficient to get someone started using the package.
Goals for the Future
As I said before, the list of "nice-to-have" features grew very long very quickly over the past several weeks.
This project could easily continue for another year fulfilling all of these goals.
Here's a basic list of what I envision for the future of this project, listed more or less from highest priority to lowest:
- Implement Individual Rule Tests
- Enable Expected Failures (pytest xfail)
- Write an Accessibility Report Plugin for Pytest
- Add an Accessibility Stage to the Jenkins Testing Pipeline
- Enable the Auto-Filing of Bugs When Tests Fail
- Create a Dashboard of Mozilla's Web Assets with their Accessibility Rating
- Increase Community Involvement in Accessibility Projects
I have been told that a lot of people are very excited about the work I've been doing on this project.
It fills a need that several Mozilla employees have wanted addressed for some time now.
It means a great deal to me to be doing meaningful work that makes a positive impact on people.
To conclude my internship, I have decided to give a final presentation on this project.
This will be a high-level overview, and less technical than most of the blog posts I have written.
I will be presenting live to Mozillians over Vidyo, but the presentation will also be recorded, and available to the public on AirMozilla.
Currently, the presentation is scheduled for September 11 at 2:00 PM MST.
Employees of Mozilla will be able to view the presentation live in Matt Brandt's Vidyo Room.
More details will be provided as they are available.