A feature is considered part of the 'critical path' if breaking it would require a rollback.

Write an e2e test when:
- the feature overlaps with the critical path (product and management can help determine this designation)
- AND unit tests alone are not sufficient, or you are testing an interaction between units

Stick to unit tests when:
- the feature does NOT overlap with the critical path
- AND the feature does not involve multiple units
- AND the feature lacks coverage
Support the paradigm shift with visible coverage metrics for both testing strategies, so it is obvious we are removing duplication rather than blindly deleting e2e tests.
There are exceptions to everything: any of these rules can be disregarded if the feature is deemed part of the critical path. Here is a list of our most common e2e smells:
- testing something outside of the critical path
- e2e tests are expensive in every facet; their benefits diminish with excessive quantity, so they should be used sparingly, with developers favoring unit coverage
- writing e2e tests for every new feature or experiment
- unit tests disguised as e2e tests
- DOM checks, attribute checks, render checks, HTML checks; these can all be done at the unit level (see the sketch after this list)
- testing that anchor tags take you to the correct page when clicked, or other implicit browser behaviors
- testing pages for SEO content like title tags and meta tags (use unit tests)
- testing user flows that a user is not likely to go through themselves
- testing something that is not interactive
- testing static content
- testing multiple disconnected systems
- (e.g. start on the homepage, use the search bar to perform a search, submit a form, verify the post-submission flow)
- flows like the one above could instead be split into independent end-to-end tests
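
As a rough sketch of what moving these checks down a level looks like, here is a hypothetical React/Enzyme unit test covering an href and some static content without a browser session (the HomeHero component, its markup, and the Jest-style runner are assumptions for illustration):

    import * as React from 'react';
    import { shallow } from 'enzyme';
    import HomeHero from './HomeHero';

    describe('HomeHero', () => {
      // attribute/DOM checks like these do not need an e2e browser session
      it('links the call-to-action to the signup page', () => {
        const wrapper = shallow(<HomeHero />);
        expect(wrapper.find('a.cta').prop('href')).toBe('/signup');
      });

      // static content and SEO-style assertions also work at the unit level
      it('renders the page headline', () => {
        const wrapper = shallow(<HomeHero />);
        expect(wrapper.find('h1').text()).toBe('Find your next home');
      });
    });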

If you see any of these patterns in existing PRs, ask that the tests be moved to the unit level.
If you are authoring a PR against an existing feature, ensure adequate unit coverage and remove unnecessary e2e tests.
Questions:
- Who is going to define the Critical Path (CP)? Is it Product and Management?
  - yes, but mostly product. I expect management to seek additional e2e coverage for hotfixes or poorly architected experiences, for extra confidence (e.g. an issue specific to IE or Safari cannot always be addressed in a unit test)

- "If a feature overlaps with CP, PO and management will determine this designation." Do we take input from automation engineers? If not, do they really need to attend scrum meetings such as architecture meetings?
  - yes, but if the requirement for an automation test boils down to overlap with CP, the value provided by automation there would be technical in nature
  - I believe automation's participation in architecture meetings could help bridge the gaps in knowledge of available testing strategies for both teams
  - fewer end-to-end tests also means the automation team can focus on improving the performance and durability of our stack, or entertain new and useful ways to write automation tests

- Who decides when to write unit tests? Dev? Or is that determined by PO and management?
  - unit tests are already mandatory for devs, but currently they are only enforced through PR review, which is subjective in nature
  - we plan on supplementing this responsibility with Istanbul, so it is obvious, and less subjective, when a PR is lacking appropriate coverage (see the sketch below)
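
As one way that enforcement could look, here is a minimal .nycrc for nyc (Istanbul's command-line client), assuming we run tests through nyc; the 80% thresholds are placeholders, not agreed numbers:

    {
      "check-coverage": true,
      "lines": 80,
      "branches": 80,
      "functions": 80,
      "statements": 80,
      "reporter": ["text", "html"]
    }

With "check-coverage" enabled, a run like nyc npm test exits non-zero when any threshold is missed, giving PR review an objective signal. If we are on Jest, its coverageThreshold config option is the equivalent, since Jest uses Istanbul under the hood.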

- When do we delete e2e tests? After dev adds unit tests that provide adequate coverage? And who decides that besides dev? PO and management?
  - we need a strategy mapped out for removing non-critical e2e tests; it will require more planning than we can do in this email
  - I think it will be easier for us to designate tests as 'critical path' than to designate them as the opposite
  - non-critical e2e tests should not be removed without ensuring appropriate unit test coverage for the feature in question

- Do we need automation review if a PR does not have e2e tests?
  - at first I think yes, but I suspect less and less over time. If we are emphasizing unit coverage for non-critical-path features, feedback can be provided there. Participation at the PR level also helps bridge gaps in knowledge between testing strategies.

- When a hotfix issue and/or critical bug is found, how do we give manual testers confidence that it is covered with unit tests?
  - my suggestion is to take advantage of retros or post-mortems so we can analyze the situation together as a group; every hotfix is likely to arise for different reasons

- What is the plan to educate the testing team to build confidence in unit testing?
  - I'll need more time to think about this, but it would involve familiarity with coverage tooling and our tech stack, mostly React and Enzyme
  - unit tests, like e2e tests, are designed to read like English, so even if you aren't familiar with the stack it's easy to get a sense of what a test does (see the example below)
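
For instance, a made-up Enzyme test like this can be skimmed from its describe/it strings alone, even without knowing React (the SearchBar component and its behavior are invented):

    import * as React from 'react';
    import { shallow } from 'enzyme';
    import SearchBar from './SearchBar';

    describe('SearchBar', () => {
      // the it() string states the expected behavior in plain English
      it('disables the submit button while the query is empty', () => {
        const wrapper = shallow(<SearchBar />);
        expect(wrapper.find('button[type="submit"]').prop('disabled')).toBe(true);
      });
    });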

- PO and management may not be in every architecture meeting. That could be an issue for determining e2e test coverage for a feature.
  - product writes the stories the majority of the time; in those cases they can mark the story as critical path or not
  - I expect feature flips will rarely be considered critical path because they can be turned off instead of rolled back
  - this moves the conversation to making feature flips permanent and adding accompanying e2e tests if they are deemed critical path

- Currently automation engineers work closely with the manual QA team on what is not covered by e2e, so that it can be covered by manual QA during regression. In this scenario, how will we communicate unit test coverage as we add unit tests and reduce e2e tests?
  - we slowly tip the balance of e2e and unit, exposing the weight of each strategy through coverage metrics
  - automation will need to know where features are located in the code base to ensure they have accompanying tests
  - (for example, if you see LeadForm.tsx was modified, you might expect LeadForm-unit-test.tsx to be modified in the PR as well; a sketch follows below)
  - the coverage tooling will also convey this information
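
To make that naming convention concrete, here is a minimal sketch of what LeadForm-unit-test.tsx might contain; the component's props, markup, and onSubmit wiring are assumptions for illustration, not the real file:

    import * as React from 'react';
    import { shallow } from 'enzyme';
    import LeadForm from './LeadForm';

    describe('LeadForm', () => {
      it('passes the entered email to the onSubmit callback', () => {
        const onSubmit = jest.fn();
        const wrapper = shallow(<LeadForm onSubmit={onSubmit} />);

        // simulate typing into the email field, then submitting the form
        wrapper.find('input[name="email"]').simulate('change', {
          target: { value: 'user@example.com' },
        });
        wrapper.find('form').simulate('submit');

        expect(onSubmit).toHaveBeenCalledWith('user@example.com');
      });
    });

A reviewer scanning a PR can then pair each modified component with its test file and cross-check the coverage report for the same path.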

- If a bug is found during regression, who is going to help the manual QA team look up whether it's covered with unit tests? (That happens quite often.)
  - in many of these cases, a bug found during regression means it is NOT covered by unit tests, or else it wouldn't have made it to prod. Occasionally a certain browser acts up in ways that cannot be adequately tested at the unit level. The engineering team can help shed light on this; my suggestion is to work with the hotfixer, who will probably be a senior or lead dev
  - my opinion is that the hotfixer should be responsible for adding a unit test, although in practice the business is usually fonder of rushing out a fix than a test

- Critical and/or hotfix bugs: making sure we have enough tests so we don't experience them again. Also, responsibility and ownership.
  - see my answer to the previous question