Designing for the User
The first part of this project was to create a rough draft of the package, and a set of both unit tests and integration tests.
For the first several weeks, I only used the package locally from the project folder, which turned out to be much easier than writing a working, pip-installable package.
So that was the next goal to accomplish, but I also wanted to ensure that I was following best practices and maintaining stability.
These tasks raised many questions I had not yet considered, and many of them I did not (yet) know how to answer.
Writing a package that will be used by other people poses an important question: "How will other people use this module, and how do I make it most useful to them?"
This week, I will be adding my tests to a Jenkins pipeline, so that we can see them in action.
I expect that this will help me answer some of the questions I've been struggling with.
Without this feedback, most of the decision-making about design and implementation is just guesswork.
Best Practices and Stability
This simplified the first stage of the process, as the most basic requirements were already fulfilled.
However, I have done quite a lot of rearranging and re-rearranging in the process of deciding what this package should look like.
There were also some changes that needed to be made so that axe-selenium-python would run within another project.
(These mostly concerned correctly referencing files, fixtures, and tests within the package.)
I did get some excellent code review from Tarek Ziade, a member of the Firefox Test Engineering Team.
Tarek has written multiple books on Python, so I was a little intimidated when he offered to review my code.
However, I strive to produce the best code possible, so I always welcome constructive criticism.
He pointed out several things I had either missed or hadn't considered.
I credit his feedback for helping me take this package from a rough draft state to an early-stage MVP.
_DEFAULT_SCRIPT = os.path.join(os.path.dirname(__file__), 'src', 'axe.min.js')
I did find that something interesting happens with this line of code, however. If I run tests from within the project folder, it will look for the src directory within the project folder itself. If I run tests externally, within another project, it will look for the src directory within the top-level directory of the installed package. This works because __file__ always resolves to the location of the module itself, not the directory the tests are run from.
This code also uses os.path.join to create an OS-independent file path. If this package is run on Windows, the file path will use backslashes, while forward slashes will be used on Unix-based operating systems.
Deploying to PyPI
I had some difficulty figuring out how to upload to the Test PyPI site.
Much of the documentation I found was outdated, and I was receiving "server gone" errors when trying to upload to the test site:
Server response (410): Gone (This API has been deprecated and removed from legacy PyPI in favor of using the APIs available in the new PyPI.org implementation of PyPI (located at https://pypi.org/). For more information about migrating your use of this API to PyPI.org, please see https://packaging.python.org/guides/migrating-to-pypi-org/#uploading. For more information about the sunsetting of this API, please see https://mail.python.org/pipermail/distutils-sig/2017-June/030766.html)
I did eventually get it working, however. Here's the wiki page that finally got me past this point, if you find yourself in the same position.
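As the error message suggests, the fix amounts to pointing your upload tool at the new pypi.org endpoints. A minimal sketch of a ~/.pypirc, assuming you are uploading with twine; the URLs are the current PyPI and Test PyPI upload endpoints:

```ini
; Minimal sketch of ~/.pypirc; these endpoints replaced the
; deprecated legacy API mentioned in the error above.
[distutils]
index-servers =
    pypi
    testpypi

[pypi]
repository = https://upload.pypi.org/legacy/

[testpypi]
repository = https://test.pypi.org/legacy/
```

With a config like this, `twine upload -r testpypi dist/*` should target the test site.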
Another problem I encountered involved the use of pytest fixtures.
I have gone back and forth on whether to use the fixtures at all. As of now, there are some instances where I do and some where I don't, and users are free to use either approach.
What is a fixture?
If you're interested, here is the technical description of a pytest fixture.
As I understand it, the fixtures make some tasks easier and less wordy in their implementation.
Members of the Test Engineering team have written, and use, many different fixtures for different purposes.
A simple example is the base_url fixture. This fixture pulls the base_url setting from a config file, such as tox.ini, and uses it for Selenium-based tests. This removes the need to either specify the URL every time the tests are run, or to hard-code it within your tests (which is generally recommended against).
A more complex example is the selenium fixture.
Instantiating a WebDriver instance requires a few lines of code:
from selenium import webdriver
driver = webdriver.Firefox()
This same task can be accomplished simply by passing the
selenium fixture as a parameter to your test function:
def test_title(base_url, selenium):
    selenium.get(base_url)
    assert "Python" in selenium.title
(This example assumes that base_url is set to http://www.python.org in your config file.)
This implementation also does not require closing the WebDriver instance at the conclusion of the test;
pytest-selenium will do this for you when the test ends.
The fixture that I wrote simply creates an instance of the Axe class, using a WebDriver instance. When running tests locally, I had my fixture within the conftest.py file of the test suite. If users do want to use the axe fixture, I didn't want them to have to manually modify their own conftest.py files. So, I wrote a very simple plugin, pytest-axe, to enable the use of this fixture.
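As a rough sketch of the idea (the real pytest-axe implementation may differ; the Axe stand-in class here is mine):

```python
import pytest

class Axe:
    """Stand-in for axe-selenium-python's Axe class, which wraps a
    Selenium WebDriver instance."""
    def __init__(self, driver):
        self.driver = driver

# Shipping a fixture like this inside a plugin means users get it
# automatically when they install the package, with no need to copy
# it into their own conftest.py files.
@pytest.fixture
def axe(selenium):
    return Axe(selenium)
```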
Sometimes the fixtures make testing a little simpler, but there are some tasks that can't be accomplished when using them.
Another thing I have been struggling with is whether or not I should be using pytest-selenium at all.
I went back and forth with this a bit in the beginning. For the sake of time, I decided to proceed with pytest-selenium.
It really isn't possible to know what users will want at this point, so instead of trying to produce something perfect from the beginning, my focus is to produce something usable.
As I said, this week I have been focused on running my tests in a Jenkins environment.
This should help me to make more informed decisions on my implementation.
Currently, the test suite I have been working with is mozillians-tests.
This is a series of tests for the public Mozilla phonebook, a directory of Mozilla employees and contributors.
I am experimenting with using an all-in-one test & report vs. a set of individual rule tests.
While a single test would still provide helpful feedback, there are a couple of issues with this approach.
If even a single accessibility rule is violated, the entire test is marked as a failure.
There is also no way to xfail individual rules. xfail is a pytest decorator that indicates an expected failure. It allows a test suite to report OK until the problem is fixed; once the test starts passing again, a flag is raised to the test team, signaling that a test that was expected to fail is now passing.
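For illustration, marking a hypothetical per-rule test as an expected failure might look like this (the rule name and reason are invented):

```python
import pytest

# xfail reports this test as an expected failure rather than a failure.
# If the underlying issue is fixed and the test starts passing, pytest
# reports it as XPASS, flagging it for the test team.
@pytest.mark.xfail(reason="known color-contrast violations on this page")
def test_color_contrast():
    assert False  # stands in for a currently failing rule check
```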
So it definitely seems like individual rule tests are the way to go, though this is a bit more difficult to implement.
Here are my tests for the accessibility rules.
I have been playing with different approaches to accomplish this goal.
Considering there are only a couple of weeks left of this internship, solving this problem is the highest priority at the moment.
None of these approaches are particularly pretty at the moment, but I'm confident that I'll have a more usable and stable implementation by the end of this week.
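One way to get one test per rule is pytest's parametrize. This is only a sketch under assumptions: run_axe is a hypothetical helper standing in for the real analysis, and the rule IDs are a small sample of actual aXe rule names:

```python
import pytest

# A sample of real aXe rule IDs; a full suite would list them all.
RULES = ["accesskeys", "area-alt", "color-contrast", "image-alt"]

def run_axe(results):
    # Hypothetical stand-in: a real helper would inject axe.min.js into
    # the page and return the violations keyed by rule ID.
    return results.get("violations", {})

# parametrize generates one test per rule, so a single violation fails
# only that rule's test, and individual rules can be xfailed.
@pytest.mark.parametrize("rule", RULES)
def test_rule(rule):
    violations = run_axe({"violations": {}})
    assert rule not in violations
```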