To run browser tests, open a new Terminal window or tab, change to the project directory, and tell gulp to build and then start the tests:

```sh
gulp build
gulp test:acceptance
```

(`tox -e acceptance` can be run as well.)
There are several options you can pass to run a particular suite of tests, to run a particular list of features, and/or to run in "fast" mode:

```sh
gulp test:acceptance --suite=wagtail-admin         # Run just the wagtail-admin suite
gulp test:acceptance --specs=multi-select.feature  # Run just the multi-select feature
gulp test:acceptance --tags=@mobile                # Run all scenarios tagged with @mobile
gulp test:acceptance --fast                        # Run the tests without recreating the virtual environment
```

The same options can be used with tox (with the `--` prefix omitted):

```sh
tox -e acceptance suite=wagtail-admin
tox -e acceptance specs=multi-select.feature
tox -e acceptance tags=@mobile
tox -e acceptance-fast
```
These tests will run on their own server; you do not need to be running your development server.
Below are some suggested standards for Cucumber feature files. (Table copied from https://saucelabs.com/blog/write-great-cucumber-tests by Greg Sypolt, with moderate modifications.)

| Topic | Standard |
| ----- | -------- |
| Feature files | Every `*.feature` file consists of a single feature, focused on the business value. |
| Gherkin | `Feature: Title` (one line describing the story)<br>Narrative description: As a [role], I want [feature], so that I [benefit]. |
| `Given`, `When`, and `Then` statements | There might be some confusion surrounding where to put the verification step in the Given, When, Then sequence. Each statement has a purpose. Just remember that the `Then` step is an acceptance criterion of the story. |
| Background | The background needs to be used wisely. If you use the same steps at the beginning of all scenarios of a feature, put them into the feature's background scenario. The background steps are run before each scenario.<br>`Background:`<br>`Given I am logged into Wagtail as an admin`<br>`And I create a Wagtail Sublanding Page`<br>`And I open the content menu` |
| Scenarios | Keep each scenario independent. The scenarios should run independently, without any dependencies on other scenarios. Scenarios should be between five and six statements, if possible. |
| Scenario outlines | If you identify the need to use a scenario outline, take a step back and ask the following question: is it necessary to repeat this scenario 'x' number of times just to exercise a different combination of data? In most cases, one time is enough for UI-level testing. |
| Declarative vs. imperative scenarios | The declarative style describes behavior at a higher level, which improves the readability of the feature by abstracting out the implementation details of the application. The imperative style is more verbose but better describes the expected behavior. Either style is acceptable.<br>Imperative example:<br>`Scenario: User logs in`<br>`Given I am on the homepage`<br>`When I click on the "Login" button`<br>`And I fill in the "Email" field with "[email protected]"`<br>`And I fill in the "Password" field with "secret"`<br>`And I click on "Submit"`<br>`Then I should see "Welcome to the app, John Doe"`<br>Declarative example:<br>`Scenario: User logs in`<br>`Given I am on the homepage`<br>`When I log in`<br>`Then I should see a login notification` |
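Putting the standards above together, a hypothetical `.feature` file in the declarative style might look like the following sketch (the feature name and steps are illustrative, not taken from the actual test suite):

```gherkin
Feature: Multi-select form field
  As a site visitor
  I want to filter results with a multi-select field
  So that I can narrow a list to the topics I care about

  Background:
    Given I am on a page with a multi-select field

  Scenario: Selecting an option
    When I choose an option from the multi-select field
    Then the option should appear as a selected tag
```

Note that the scenario stays independent, uses the background for shared setup, and keeps the `Then` step as the acceptance criterion.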
!!! danger
    The instructions for automatically running Sauce Connect from gulp are not working.
    See cfpb/consumerfinance.gov#2324.
Sauce Labs can be used to run tests remotely in the cloud.
1. Log into http://saucelabs.com/account.
2. Open a new Terminal window or tab and navigate to the downloaded Sauce Connect folder. If you placed the folder in your Applications folder, this might look like:

    ```sh
    cd /Users/<YOUR MAC OSX USERNAME>/Applications/SauceConnect
    ```

3. Copy step 3 from the Sauce Labs Basic Setup instructions and run it in your Terminal window. Once you see `Sauce Connect is up` in the Terminal, the tunnel has been successfully established. The Terminal command should already have your Sauce username and access key filled in. If it doesn't, make sure you're logged in.
4. Update and uncomment the `SAUCE_USERNAME`, `SAUCE_ACCESS_KEY`, and `SAUCE_SELENIUM_URL` values in your `.env` file. The access key can be found in the lower left of the Sauce Labs account profile page.
5. Reload the settings with `cd .. && cd cfgov-refresh`. Type `y` if prompted.
6. Run the tests with `gulp test:acceptance`.

    !!! note
        If you want to temporarily disable testing on Sauce Labs, run the command as
        `gulp test:acceptance --sauce=false`.

7. Monitor the progress of the tests on the Automated Tests tab of the Sauce Labs dashboard.
!!! note
    If you get the error `Error: ENOTFOUND getaddrinfo ENOTFOUND` while running
    a test, it likely means that Sauce Connect is not running. See the Sauce
    Connect tunnel step above.
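As a sketch, the uncommented Sauce values in `.env` might look like the following. The username, access key, and URL here are placeholders, not real credentials, and the exact `SAUCE_SELENIUM_URL` format is an assumption; use the values from your own Sauce Labs account:

```sh
# Hypothetical .env fragment -- all values are placeholders.
SAUCE_USERNAME=your-sauce-username
SAUCE_ACCESS_KEY=1a2b3c4d-0000-0000-0000-placeholder0
SAUCE_SELENIUM_URL=ondemand.saucelabs.com:80/wd/hub
```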
A number of command-line arguments can be set to test particular configurations:

- `--suite`: Choose a particular suite or suites to run. For example, `gulp test:acceptance --suite=content` or `gulp test:acceptance --suite=content,functional`.
- `--specs`: Choose a particular spec or specs to run. For example, `gulp test:acceptance --specs=contact-us.js`, `gulp test:acceptance --specs=contact-us.js,about-us.js`, or `gulp test:acceptance --specs=foo*.js`. If `--suite` is specified, this argument will be ignored. If neither `--suite` nor `--specs` is specified, all specs will be run.
- `--windowSize`: Set the window size in pixels in `w,h` format. For example, `gulp test:acceptance --windowSize=900,400`.
- `--browserName`: Set the browser to run. For example, `gulp test:acceptance --browserName=firefox`.
- `--version`: Set the browser version to run. For example, `gulp test:acceptance --version='44.0'`.
- `--platform`: Set the OS platform to run. For example, `gulp test:acceptance --platform='osx 10.10'`.
- `--sauce`: Whether to run on Sauce Labs or not. For example, `gulp test:acceptance --sauce=false`.
Tests are organized into suites under the `test/browser_tests/cucumber/features` directory. Any new tests should be added to an existing suite (e.g. "default") or placed into a new suite directory. All tests start with writing a `.feature` spec in one of these suites and then adding corresponding step definitions, found in `test/browser_tests/cucumber/step_definitions`.
- Cucumber features
- Protractor
- Select elements on a page
- Writing Jasmine expectations
- Understanding Page Objects
To audit whether the site complies with performance best practices and guidelines, run `gulp test:perf`. The audit will run against Google's PageSpeed Insights.
To run the full suite of Python 2.7 unit tests using Tox, `cd` to the project root, make sure the `TOXENV` variable is set in your `.env` file, and then run:

```sh
tox
```
If you haven't changed any installed packages and you don't need to test all migrations, you can run a much faster Python code test using:

```sh
tox -e fast
```
To see Python code coverage information, run:

```sh
./show_coverage.sh
```
Run the acceptance tests with the `--a11y` flag (i.e. `gulp test:acceptance --a11y`) to check every webpage for WCAG and Section 508 compliance using Protractor's accessibility plugin.

If you'd like to audit a specific page, use `gulp test:a11y`:
1. Enable the environment variable `ACHECKER_ID` in your `.env` file. Get a free AChecker API ID for the value.
2. Reload your `.env` with `source ./.env` while in the project root directory.
3. Run `gulp test:a11y` to run an audit on the homepage.
4. To test a page other than the homepage, add the `--u=<path_to_test>` flag. For example, `gulp test:a11y --u=contact-us` or `gulp test:a11y --u=the-bureau/bureau-structure/`.
The default test task includes linting of the JavaScript source, build, and test files. Use the `gulp lint` command from the command line to run the ESLint linter, which checks the JavaScript against the rules configured in `.eslintrc`. See the ESLint docs for detailed rule descriptions.
There are a number of options to the command:

- `gulp lint:build`: Lint only the gulp build scripts.
- `gulp lint:test`: Lint only the test scripts.
- `gulp lint:scripts`: Lint only the project source scripts.
- `--fix`: Add this flag (like `gulp lint --fix` or `gulp lint:build --fix`) to auto-fix some errors, where ESLint has support to do so.
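For orientation, an `.eslintrc` configuration generally takes a shape like the following sketch. These particular rules are illustrative assumptions, not the project's actual configuration; consult the `.eslintrc` file in the repository for the real rule set:

```json
{
  "rules": {
    "semi": [ "error", "always" ],
    "no-unused-vars": "warn"
  }
}
```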