Hi there, thanks for your interest in working with Moxie! We have devised the following test to give you the opportunity to show us what you can do.
Take the following mobile design: https://invis.io/AX6SXHN2F
Devise a test plan and workflow for this website. Take into account the following:
- A fully responsive site will be built, working from high-res desktops down to small mobile devices.
- Only consider the login, register and lost password functions, and the feed screen.
- The site will be built as an SPA using Angular. It will use a WordPress backend which will expose data via the WP-API.
- Development is done using an Agile process with 1-week sprints.
- Your test plan should cover pixel-perfect implementation checks as well as functional tests.
- A test plan showing all the relevant cases. Use whichever format you think best.
- An explanation of the whole testing lifecycle and how testing fits into the project workflow.
- Explain how automated testing could be used and the benefits.
NEWSapp Test Plan
Table of Contents
1.1. Purpose
1.2. Project Overview
1.3. Audience
2.1. Test Objectives
2.2. Test Assumptions
2.3. Test Principles
2.4. Data Approach
2.5. Scope and Levels of Testing
2.5.1. Automation Test
2.5.2. Functional Test
TEST ACCEPTANCE CRITERIA
TEST DELIVERABLES
MILESTONE LIST
2.5.3. User Acceptance Test (UAT)
TEST DELIVERABLES
2.6. Test Effort Estimate
3.1. Entry and Exit Criteria
3.2. Test Cycles
3.3. Validation and Defect Management
3.4. Test Metrics
3.5. Defect Tracking & Reporting
4.1. Test Management Tool
4.2. Test Design Process
4.3. Test Execution Process
4.4. Test Risks and Mitigation Factors
5.1. Communications Plan and Team Roster
5.2. Role Expectations
5.2.1. Project Management
5.2.2. Test Planning (Test Lead)
5.2.3. Test Team
5.2.4. Test Lead
5.2.5. Development Team
1.1. Purpose
This test plan describes the testing approach and overall framework that will drive the testing of NEWSapp, "Your Daily News Feed".
This document introduces:
• Test Strategy: the rules the test will be based on, including the givens of the project (e.g. start/end dates, objectives, assumptions), and a description of the process for setting up a valid test.
• Execution Strategy: how the test will be performed, and the process for identifying and reporting defects, and for fixing and implementing fixes.
• Test Management: the process for handling the logistics of the test and all the events that come up during execution.
1.2. Project Overview
NEWSapp is an app that gives users a personalised news feed with the latest updates on what is going on around them, refreshed throughout the day and night so they do not miss a thing.
1.3. Audience
• Project team members perform the tasks specified in this document and provide input and recommendations on it.
• The Project Manager plans the testing activities in the overall project schedule, reviews this document, tracks test performance against the tasks specified herein, approves the document and is accountable for the results.
• Stakeholder representatives and participants may take part in the UAT test to ensure the business is aligned with the results of the test.
• The Technical Team ensures that the test plan and deliverables are in line with the design, provides the environment for testing and follows the procedures related to defect fixes.
• Business Analysts provide input on functional changes.
2.1. Test Objectives
The objective of the test is to verify that the functionality of NEWSapp works as designed, covering the login, register and lost password functions and the feed screen.
The test will execute and verify the test scripts; identify, fix and retest all high- and medium-severity defects per the entrance criteria; and prioritise lower-severity defects for future fixing.
The final product of the test is twofold:
• A production-ready application;
• A set of stable test scripts that can be reused for Functional and UAT test execution.
2.2. Test Assumptions
Key Assumptions
• Production-like data is required and must be available in the system prior to the start of Functional Testing
General
• Testing will be carried out once the build is ready for testing
• All defects will be logged with a screenshot
• The Test Team will be provided with access to Test environment via VPN connectivity
• Test case design activities will be performed by QA Group
• Test environment and preparation activities will be owned by Dev Team
• The Dev team will provide defect-fix plans based on the defect meetings held during each cycle; the Test team will be informed of these plans before defect-fix cycles start
• The Business Analyst will review and sign off all test cases prepared by the Test Team prior to the start of test execution
• Defects will be tracked through HP ALM only. Any planned defect fixes will be shared with the Test Team before the fixes are applied to the test environment
• The Project Manager/Business Analyst will review and sign off all test deliverables
• The project will provide test planning, test design and test execution support
• The Test team will manage the testing effort in close coordination with the Project Manager/Business Analyst
• Project team has the knowledge and experience necessary, or has received adequate training in the system, the project and the testing processes.
• There will be no environment downtime during testing due to outages or defect fixes.
Functional Testing
• During Functional testing, the testing team will use preloaded data available in the system at the time of execution.
UAT
• UAT test execution will be performed by end users; the QA Group will support the creation of the UAT scripts.
2.3. Test Principles
• Testing will be focused on meeting the business objectives.
• There will be common, consistent procedures for all teams supporting testing activities.
• Testing processes will be well defined, yet flexible, with the ability to change as needed.
• Testing activities will build upon previous stages to avoid redundancy or duplication of effort.
• Testing environment and data will emulate a production environment as much as possible.
2.4. Data Approach
• The functional test environment will contain pre-loaded test data, which will be used for testing activities.
2.5. Scope and Levels of Testing
2.5.1. Automation Test
PURPOSE: automated test scripts exercise the application on every cycle so that regressions in existing behaviour are detected early and without manual effort.
SCOPE: existing Selenium test scripts will be used, and new scripts created where needed.
TESTERS: Testing Team.
METHOD: automated testing is carried out against the application with test scripts covering positive and negative scenarios.
TIMING: at the beginning of each cycle.
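To illustrate the positive/negative scenario approach the Selenium scripts would follow, a minimal data-driven sketch is shown below. The scenario table and the `validate_login` stub are illustrative assumptions; in the real scripts the stub would be replaced by Selenium WebDriver steps that fill in and submit the login form.

```python
# Data-driven login scenario table that the automation scripts could iterate
# over. Credentials and expected outcomes here are purely illustrative.
LOGIN_SCENARIOS = [
    # (username, password, expected_outcome)
    ("valid@user.com", "correct-pass", "feed"),    # positive: lands on feed
    ("valid@user.com", "wrong-pass", "error"),     # negative: bad password
    ("", "correct-pass", "error"),                 # negative: empty username
    ("not-an-email", "correct-pass", "error"),     # negative: malformed email
]

def validate_login(username: str, password: str) -> str:
    """Stand-in for driving the login form via Selenium WebDriver.

    A real script would fill the form, submit it and inspect the resulting
    page; here we only model the expected behaviour for illustration.
    """
    if username == "valid@user.com" and password == "correct-pass":
        return "feed"
    return "error"

def run_scenarios():
    """Run every scenario and record whether the outcome matched."""
    results = []
    for username, password, expected in LOGIN_SCENARIOS:
        actual = validate_login(username, password)
        results.append((username, expected, actual, expected == actual))
    return results
```

Keeping the scenarios as data, separate from the driver code, means new positive/negative cases can be added each sprint without touching the script logic.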
2.5.2. Functional Test
PURPOSE: functional testing will be performed to check the functions of the application: inputs are fed in and the output is validated against the expected results.
SCOPE: login failure, password resets, upgrade notification, positive/negative testing.
TESTERS: Testing Team.
METHOD: the test will be performed according to the functional scripts stored in HP ALM.
TIMING: after the Exploratory test is completed.
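Since the site exposes its data through the WP-API, feeding input and validating output largely means checking JSON responses. A minimal sketch of such an output check for the login function is shown below; the required field names are an assumption for illustration, as the exact response shape depends on the WP-API authentication plugin in use.

```python
# Hypothetical required fields in a successful login response; the real set
# depends on the WP-API authentication plugin configured on the backend.
REQUIRED_LOGIN_FIELDS = {"token", "user_email", "user_display_name"}

def validate_login_response(payload: dict) -> list:
    """Return a list of validation failures (an empty list means pass)."""
    failures = []
    missing = REQUIRED_LOGIN_FIELDS - payload.keys()
    if missing:
        failures.append(f"missing fields: {sorted(missing)}")
    token = payload.get("token", "")
    if not isinstance(token, str) or not token:
        failures.append("token must be a non-empty string")
    return failures
```

Returning a list of failures rather than a bare boolean makes the defect report more useful, since every violated expectation can be logged at once.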
TEST ACCEPTANCE CRITERIA
TEST DELIVERABLES
S.No. | Deliverable Name | Author | Reviewer
4 | Daily/weekly status report | Test Team / Test Lead | Test Lead / Project Manager
MILESTONE LIST
The milestone list is tentative and may change for the reasons below:
a) Any issues in the System environment readiness
b) Any change in scope/addition in scope
c) Any other dependency that impacts efforts and timelines
2.5.3. User Acceptance Test (UAT)
PURPOSE: this test focuses on validating the business logic. It allows the end users to complete one final review of the system prior to deployment.
TESTERS: UAT is performed by the end users.
METHOD: since the business users are best placed to provide input on business needs and on how the system meets them, they may perform some validation not contained in the scripts. The Test team writes the UAT test cases based on input from the end users and the Business Analysts.
TIMING: after all other levels of testing (Exploratory and Functional) are done. Only after this test is completed can the product be released to production.
TEST DELIVERABLES
S.No. | Deliverable Name | Author | Reviewer
3.1. Entry and Exit Criteria
• The entry criteria are the conditions that should be met before test execution can start; at the end of each cycle only the migration of the code and fixes needs to be assessed.
• The exit criteria are the conditions that need to be met in order to proceed with the implementation.
• Entry and exit criteria are flexible benchmarks. If they are not met, the test team will assess the risk, identify mitigation actions and provide a recommendation. All of this is input to the Project Manager for a final go/no-go decision.
• Entry criterion to start the execution phase of the test: the activities listed in the Test Planning section of the schedule are 100% complete.
• Entry criterion to start each cycle: the activities listed in the Test Execution section of the schedule are 100% complete for that cycle.
Exit Criteria | Test Team | Technical Team | Notes
100% of test scripts executed | | |
95% pass rate of test scripts | | |
No open Critical or High severity defects | | |
95% of Medium severity defects closed | | |
All remaining defects either cancelled or documented as Change Requests for a future release | | |
All expected and actual results captured and documented with the test script | | |
All test metrics collected, based on reports from HP ALM | | |
All defects logged in HP ALM | | |
Test Closure Memo completed and signed off | | |
Test environment cleanup completed and a new backup of the environment taken | | |
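The quantitative exit criteria above can be evaluated mechanically from the HP ALM counts. A minimal sketch of such a check is shown below; the function name and parameters are illustrative, not part of any tool.

```python
# Sketch of an exit-criteria gate mirroring the criteria above: 100% of
# scripts executed, >=95% pass rate, no open Critical/High defects, and
# >=95% of Medium-severity defects closed. Counts would come from HP ALM.
def exit_criteria_met(total_scripts, executed, passed,
                      open_critical, open_high,
                      medium_total, medium_closed):
    checks = {
        "100% scripts executed": executed == total_scripts,
        "95% pass rate": executed > 0 and passed / executed >= 0.95,
        "no open critical/high": open_critical == 0 and open_high == 0,
        "95% medium closed": medium_total == 0
                             or medium_closed / medium_total >= 0.95,
    }
    return all(checks.values()), checks
```

Returning the per-criterion breakdown alongside the overall verdict supports the flexible-benchmark rule: when the gate fails, the team can see exactly which criterion to assess for the go/no-go recommendation.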
3.2. Test Cycles
• There will be two cycles of functional testing; each cycle will execute all the scripts.
• The objective of the first cycle is to identify any blocking or critical defects and most of the high-severity defects. Some workarounds are expected in order to get through all the scripts.
• The objective of the second cycle is to identify the remaining high- and medium-severity defects, remove the workarounds from the first cycle, correct gaps in the scripts and obtain performance results.
• UAT will consist of one cycle.
3.3. Validation and Defect Management
• Testers are expected to execute all the scripts in each of the cycles described above. However, testers may also do additional testing if they identify a possible gap in the scripts. This is especially relevant in the second cycle, when the Business Analysts join the execution of the test, since they have a deeper knowledge of the business processes. If a gap is identified, the scripts and the traceability matrix will be updated and a defect logged against the scripts.
• Defects will be tracked through HP ALM only. The technical team will gather information from HP ALM on a daily basis, request additional details from the Defect Coordinator, and work on fixes.
• Responsibilities are divided as follows:
o The tester opens defects, links them to the corresponding script, assigns an initial severity and status, retests and closes the defect.
o The Defect Manager reviews the severity of defects, facilitates the fix and its implementation with the technical team, communicates to testers when the test can continue or should be halted, requests retests, and updates the status as the defect progresses through the cycle.
o The technical team reviews HP ALM daily, asks for details if necessary, fixes the defect, communicates to the Defect Manager that the fix is done, and implements the solution at the Defect Manager's request.
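The defect lifecycle described above can be sketched as a small status-transition model. The status names below are an assumption, a simplified version of a typical HP ALM-style workflow, not the exact ALM configuration:

```python
# Simplified defect-status workflow: who may move a defect where is governed
# by the responsibilities above (tester opens/retests/closes, Defect Manager
# coordinates, technical team fixes). Status names are illustrative.
ALLOWED_TRANSITIONS = {
    "New":      {"Open", "Rejected"},   # Defect Manager triages
    "Open":     {"Fixed"},              # technical team implements fix
    "Fixed":    {"Retest"},             # Defect Manager requests retest
    "Retest":   {"Closed", "Reopened"}, # tester verifies the fix
    "Reopened": {"Fixed"},              # fix failed; back to technical team
}

def is_valid_transition(current: str, new: str) -> bool:
    """Return True if a defect may move from `current` to `new`."""
    return new in ALLOWED_TRANSITIONS.get(current, set())
```

Making the allowed transitions explicit keeps everyone's handoffs unambiguous: a defect can only be closed after a retest, never straight from New or Fixed.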
Defects found during testing will be categorized in the bug-reporting tool HP ALM according to the following severities:
Severity | Impact
1 (Critical) | The bug is critical enough to crash the system, cause file corruption or cause potential data loss; it causes an abnormal return to the operating system (a crash or system-failure message appears); or it causes the application to hang and requires rebooting the system.
2 (High) | It causes a loss of vital program functionality with no acceptable workaround.
3 (Medium) | The bug degrades the quality of the system, but there is an intelligent workaround for achieving the desired functionality, for example through another screen; or the bug prevents other areas of the product from being tested, although those areas can be tested independently.
4 (Low) | There is an insufficient or unclear error message with minimal impact on product use.
5 (Cosmetic) | There is an insufficient or unclear error message that has no impact on product use.
3.4. Test Metrics
Test metrics to measure the progress and level of success of the test will be developed and shared with the Project Manager for approval. Some of these metrics are:
Report | Description | Frequency
Test preparation & execution status | To report % complete, % pass, % fail |
Defects | Severity-wise status: open, closed, any other status | Weekly / daily (optional)
Daily execution status | To report pass, fail and total defects; highlights showstopper/critical defects | Daily
Project weekly status report | Project-driven reporting (as requested by the PM) | Weekly, if the project team needs a weekly update apart from the daily one and a template is available with the project team.
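The percentage figures in the execution-status report above are simple ratios over the HP ALM counts. A minimal sketch, with an illustrative function name:

```python
# Compute the execution-status metrics listed above from daily counts
# (which would come from HP ALM reports).
def execution_metrics(total, executed, passed, failed):
    def pct(n, d):
        # Guard against division by zero on day one, before any execution.
        return round(100.0 * n / d, 1) if d else 0.0
    return {
        "% complete": pct(executed, total),   # scripts executed vs planned
        "% pass": pct(passed, executed),      # pass rate of executed scripts
        "% fail": pct(failed, executed),      # fail rate of executed scripts
    }
```

Note that % pass and % fail are taken against *executed* scripts, while % complete is against the planned total; mixing the denominators is a common source of misleading status reports.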
3.5. Defect Tracking & Reporting
The following flowchart depicts the defect-tracking process:
4.1. Test Management Tool
HP Application Lifecycle Management (ALM) is the tool used for test management. All testing artifacts, such as test cases and test results, are maintained in HP ALM.
• A project-specific folder structure will be created in HP ALM to manage the status of this project.
• Each resource in the Testing team will be given read/write access to add and modify test cases in HP ALM.
• During the Test Design phase, all test cases are written directly into HP ALM. Any change to a test case is updated directly in HP ALM.
• Each tester accesses their assigned test cases directly and updates the status of each executed step in HP ALM.
• Any defect encountered will be raised in HP ALM and linked to the particular test case/test step.
• During defect-fix testing, defects are reassigned to the tester to verify the fix. The tester verifies the fix and updates the status directly in HP ALM.
• Various reports can be generated from HP ALM to give the status of test execution: for example, test cases executed, passed and failed, the number of open defects, severity-wise defects, etc.
4.2. Test Design Process
• The tester will understand each requirement and prepare a corresponding test case to ensure all requirements are covered.
• Each test case will be mapped to use cases and requirements as part of the traceability matrix.
• Each test case will be reviewed by the Business Analyst; review defects are captured and shared with the Test team. The testers will rework the review defects and finally obtain approval and sign-off.
• During the preparation phase, the tester will use the prototype, use cases and functional specification to write step-by-step test cases.
• Testers will maintain a clarification tracker sheet, which will be shared periodically with the Requirements team, and the test cases updated accordingly. Clarifications may lead to Change Requests, be deemed out of scope, or detail implicit requirements.
• Sign-off for the test cases will be communicated by email by the Business Analysts.
• Any subsequent changes to the test cases will be updated directly in HP ALM.
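The traceability matrix described above lends itself to an automatic coverage check: every requirement must map to at least one test case. A minimal sketch, with illustrative requirement and test-case IDs:

```python
# Hypothetical traceability matrix: requirement ID -> mapped test case IDs.
# The IDs are illustrative, covering the four screens in scope.
TRACEABILITY = {
    "REQ-LOGIN-01":    ["TC-001", "TC-002"],
    "REQ-REGISTER-01": ["TC-010"],
    "REQ-LOSTPWD-01":  ["TC-020"],
    "REQ-FEED-01":     [],  # not yet covered by any test case
}

def uncovered_requirements(matrix: dict) -> list:
    """Return the requirement IDs with no mapped test case, sorted."""
    return sorted(req for req, cases in matrix.items() if not cases)
```

Running this check before the Business Analyst review gives an early warning of coverage gaps, instead of discovering them during execution.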
4.3. Test Execution Process
• Once all test cases are approved and the test environment is ready, the tester will start with an exploratory test of the application to ensure it is stable enough for testing.
• Each tester is assigned test cases directly in HP ALM.
• Testers must ensure they have the necessary access to the testing environment and to HP ALM for updating test status and raising defects; any issues will be escalated to the Test Lead and, in turn, to the Project Manager.
• Any showstopper found during exploratory testing will be escalated to the respective development team for a fix.
• Each tester performs step-by-step execution and updates the execution status, entering a Pass or Fail status for each step directly in HP ALM.
• Testers will prepare a run chart with day-wise execution details.
• On any failure, a defect will be raised in HP ALM as per the severity guidelines, detailing the steps to reproduce along with screenshots where appropriate.
• Daily test execution status as well as defect status will be reported to all stakeholders.
• The Testing team will participate in defect-triage meetings to ensure all test cases end up executed with either a pass or fail status.
• Defects found outside the documented test steps must also be captured in HP ALM and mapped to the test case, or to the specific step where the issue was encountered, after confirming with the Test Lead.
• This process is repeated until all test cases are executed fully with a Pass/Fail status.
• During subsequent cycles, any defect fixes applied will be retested and the results updated in HP ALM during the cycle.
As per process, the final sign-off and project-completion process will be followed.
4.4. Test Risks and Mitigation Factors
Risk | Prob. | Impact | Mitigation Plan
SCHEDULE: the testing schedule is tight. If the start of testing is delayed by design tasks, the test cannot be extended beyond the scheduled UAT start date. | High | High | The testing team can control the preparation tasks (in advance) and communicate early with the involved parties. Some buffer has been added to the schedule for contingencies, although not as much as best practice advises.
RESOURCES: not enough resources, or resources on-boarded too late (on-boarding takes around 15 days). | Medium | High | Holidays and vacation have been estimated and built into the schedule; deviations from the estimate could result in testing delays.
DEFECTS: defects are found at a late stage of the cycle, or in a late cycle; defects discovered late are most likely due to unclear specifications and are time-consuming to resolve. | | |
SCOPE: scope not completely defined. | Medium | Medium | Scope is well defined, but changes in the functionality are not yet finalized and keep changing.
Natural disasters | Low | Medium | Teams and responsibilities have been spread across two geographic areas. In the event of a catastrophe in one area, there will be resources in the other area to continue the testing activities, although at a slower pace.
Non-availability of an independent test environment and access to it | Medium | High | Non-availability of the environment impacts the schedule and delays the start of test execution.
Delayed testing due to new issues: during testing there is a good chance that some new defects will be identified and become issues that take time to resolve. Defects raised because of unclear document specifications can also need time to resolve; if these issues become showstoppers, they will greatly impact the overall project schedule. | Medium | High | The defect-management and issue-management procedures are in place to provide a resolution immediately when new defects are discovered.
5.1. Communications Plan and Team Roster
5.2. Role Expectations
The following list defines in general terms the expectations of the roles directly involved in the management, planning or execution of the test for the project.
S.No. | Roles | Name | Contact Info
5.2.1. Project Management
• Project Manager: reviews the content of the Test Plan, Test Strategy and Test Estimates, and signs off on them.
5.2.2. Test Planning (Test Lead)
• Ensure the entrance criteria are used as input before execution starts.
• Develop the test plan and the guidelines to create test conditions, test cases, expected results and execution scripts.
• Provide guidelines on how to manage defects.
• Attend status meetings in person or via the conference-call line.
• Communicate to the test team any changes that need to be made to the test deliverables or the application, and when they will be completed.
• Provide on-site or remote support.
• Provide functional (Business Analyst) and technical team support to test team personnel (if needed).
5.2.3. Test Team
• Develop test conditions, test cases, expected results and execution scripts.
• Perform execution and validation.
• Identify, document and prioritize defects according to the guidance provided by the Test Lead.
• Retest after software modifications have been made, according to the schedule.
• Prepare testing metrics and provide regular status updates.
5.2.4. Test Lead
• Acknowledge the completion of a section within a cycle.
• Give the OK to start the next level of testing.
• Facilitate defect communications between the testing team and the technical/development team.
5.2.5. Development Team
• Review testing deliverables (test plan, cases, scripts, expected results, etc.) and provide timely feedback.
• Assist in the validation of results (if requested).
• Support the development and testing processes used on the project.
• Certify that the correct components have been delivered to the test environment at the points specified in the testing schedule.
• Keep the project team and leadership informed of potential software-delivery slips based on the current schedule.
• Define processes/tools to facilitate the initial and ongoing migration of components.
• Conduct first-line investigation into execution discrepancies and assist test executors in raising accurate defects.
• Implement defect fixes according to the schedule.
Testing will be completed in a Windows environment using Internet Explorer, Firefox and Google Chrome.