@adamf321
Last active August 4, 2016 05:49

Moxie QA Lead Test Project

Hi there, thanks for your interest in working with Moxie! We have devised the following test to give you the opportunity to show us what you can do.

Overview

Take the following mobile design: https://invis.io/AX6SXHN2F

Devise a test plan and workflow for this website. Take into account the following:

  • A fully responsive site will be built, working from high-res desktops down to small mobile devices.
  • Only consider the login, register, and lost-password functions, and the feed screen.
  • The site will be built as an SPA using Angular. It will use a WordPress backend which will expose data via the WP-API.
  • Development is done using an Agile process with 1-week sprints.
  • Your test plan should test for pixel perfect implementation as well as functional tests.
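Since the brief asks for a fully responsive site tested from high-res desktops down to small mobile devices, it helps to make the coverage matrix explicit rather than implicit. A minimal sketch of generating that matrix programmatically is below; the viewport breakpoints and browser list are illustrative placeholders, not the project's actual support matrix.

```python
from itertools import product

# Hypothetical viewport breakpoints and browser targets -- substitute the
# project's actual support matrix.
VIEWPORTS = {
    "desktop-hd": (1920, 1080),
    "laptop": (1366, 768),
    "tablet": (768, 1024),
    "mobile": (375, 667),
}
BROWSERS = ["Chrome", "Firefox", "Safari"]
SCREENS = ["login", "register", "lost-password", "feed"]

def coverage_matrix():
    """Enumerate every (screen, browser, viewport) combination to test."""
    return [
        {"screen": s, "browser": b, "viewport": v, "size": VIEWPORTS[v]}
        for s, b, v in product(SCREENS, BROWSERS, VIEWPORTS)
    ]

print(len(coverage_matrix()))  # 4 screens x 3 browsers x 4 viewports = 48
```

Each generated combination becomes one row in the pixel-perfect/functional test plan, which keeps the responsive coverage auditable as breakpoints are added or dropped.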

Deliverables:

  • A test plan showing all the relevant cases. Use whichever format you think best.
  • An explanation of the whole testing lifecycle and how testing fits into the project workflow.

Bonus Points

  • Explain how automated testing could be used and the benefits.
@mzilich

mzilich commented Aug 3, 2016

NEWSapp - A Sample Test Strategy and Plan Outline
(see attached MindMap) for Moxie, by Michael Ilich, July 2016
Strategy
• Manual Browser Testing methodologies will be used for all feature testing, including functional and user experience

  • Includes use of Debuggers, Cross-Browser testing tools, and other Greybox tools like MySQL, Charles, etc.
  • Desktop Testing (macOS/Windows, Chrome/Firefox/Safari/IE)
  • Mobile Testing (iOS/Android, Safari/Chrome/Firefox)
  • "COP FLUNG GUN" Techniques (Communication, Orientation, Platform, Function, Location, User Scenarios, Network, Gesture, Guidelines, Updates, Notifications;
    see http://moolya.com/uncategorized/test-mobile-applications-with-cop-who-flung-gun/)
  • Performance and Load Testing will focus on both the Client-Side and Server-Side (API), to account for the use of JavaScript and Ajax on the client side
  • Non-manual testing (aside from login/logout, registration, and basic navigation) will be prioritized for automation first, as it is the most feasible and thus offers the greatest time-saving rewards
  • Manual testing scripts will be automated using a tool like Selenium WebDriver, or any other tool(s) specific to the WordPress API (WP-API) that may be available. Once a feature achieves stability over multiple iterations and the Product Team confirms there are no functionality changes ahead on the Product Roadmap, more manual test cases may be scripted for time gains in future Sprints.
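When a manual script graduates to automation with a tool like Selenium WebDriver, a common structure is the page-object pattern: selectors and user actions live in one class, and test cases read as user flows. The sketch below shows that shape with a `FakeDriver` stand-in so it runs without a browser; the element IDs, URLs, and credentials are hypothetical, and in a real run the page object would wrap an actual `selenium.webdriver` instance instead.

```python
# A scripted "manual" login test restated as an automatable page-object test.
# FakeDriver stands in for a real Selenium WebDriver (e.g. webdriver.Chrome());
# all element IDs, URLs, and credentials below are hypothetical.

class FakeDriver:
    """Minimal WebDriver stand-in, backed by one known test account."""
    def __init__(self):
        self.fields = {}
        self.current_url = "https://example.test/login"

    def type_into(self, element_id, text):
        self.fields[element_id] = text

    def click(self, element_id):
        if element_id == "login-submit":
            ok = (self.fields.get("email") == "qa@example.test"
                  and self.fields.get("password") == "s3cret")
            self.current_url = ("https://example.test/feed" if ok
                                else "https://example.test/login?error=1")

class LoginPage:
    """Page object: one place to keep selectors and user actions."""
    def __init__(self, driver):
        self.driver = driver

    def login(self, email, password):
        self.driver.type_into("email", email)
        self.driver.type_into("password", password)
        self.driver.click("login-submit")
        return self.driver.current_url

# Positive and negative cases from the manual script:
assert LoginPage(FakeDriver()).login("qa@example.test", "s3cret").endswith("/feed")
assert "error" in LoginPage(FakeDriver()).login("qa@example.test", "wrong")
```

Keeping selectors in the page object is what makes these scripts cheap to maintain across Sprints: when the UI changes, only the page object is updated, not every test case.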

Test Plan
NEWSapp

⁃ Introduction
⁃ Personalized, Real-Time News Service Application
⁃     Audience
⁃         Registered Users
⁃             Email Signup
⁃         Returning Users
⁃             Login Credentials
⁃ Fully Responsive Site
⁃     Technical Support
⁃         Operating Systems
⁃             Desktop
⁃                 MAC OS
⁃                 Windows
⁃             Mobile
⁃                 iOS
⁃                 Android
⁃         Browsers
⁃             Chrome
⁃             Firefox
⁃             Safari
⁃         Basic Email Support 
⁃             Registration Validation
⁃             Password Retrieval
⁃     Technical Specs
⁃         Front-End
⁃             SPA (AngularJS)
⁃         Back End
⁃             WordPress (WP-API)
⁃ Testing Scope
⁃ Product
⁃     A/B Testing
⁃ Technical
⁃     Unit Tests
⁃     Load Tests
⁃     Performance Tests
⁃     Popular Browser Tests (e.g. Safari, Chrome, Firefox, IE)
⁃         Desktop
⁃         Mobile
⁃     Cross-Browser Tests (i.e. strings of sessions in different browsers)
⁃         Desktop
⁃         Mobile
⁃     Session Management Tests
⁃         Device Management test cases
⁃             Desktop
⁃             Mobile
⁃         Connectivity test cases
⁃             Offline testing
⁃             Reconnecting to network testing
⁃         Concurrent Session Tests
⁃             Desktop Browsers
⁃             Mobile Browsers
⁃     Automated Tests
⁃         All tests that can be successfully scripted and executed reliably 
⁃             Unit test cases
⁃             Load test cases
⁃             Performance test cases
⁃             Cross-Browser test cases
⁃             Session Management test cases
⁃             Functional (Positive/Negative)
⁃                 Login/Logout
⁃                 Site Navigation
⁃         Automation tools/methodologies specific to the WordPress API (WP-API) are preferable
⁃     Back End Testing (White Box)
⁃         API (Client/Server) 
⁃ Basic User Flow
⁃     Functional 
⁃         Register
⁃             Positive/Negative test cases
⁃             Boundary test cases
⁃             Email Validation test cases
⁃         User Session Management
⁃             Login/Logout
⁃         Password Recovery
⁃         Newsfeed
⁃             Newsfeed Navigation
⁃                 News Items
⁃                 Search
⁃                 Settings
⁃                 Site Preferences
⁃                     Bookmarks
⁃                     "Loves"
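The Register cases above (positive/negative, boundary, email validation) translate naturally into executable checks. The rules in this sketch (a basic email pattern and an 8–64 character password) are assumptions for illustration only; the real limits and pattern come from the product spec.

```python
import re

# Hypothetical registration rules for illustration: password 8-64 chars,
# email must match a basic pattern. Real rules come from the product spec.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
PW_MIN, PW_MAX = 8, 64

def validate_registration(email, password):
    """Return a list of validation errors (empty list means valid)."""
    errors = []
    if not EMAIL_RE.match(email):
        errors.append("invalid email")
    if not (PW_MIN <= len(password) <= PW_MAX):
        errors.append("password length out of bounds")
    return errors

# Boundary cases: exactly at, just below, and just above the limits.
assert validate_registration("a@b.co", "x" * PW_MIN) == []
assert validate_registration("a@b.co", "x" * (PW_MIN - 1)) != []
assert validate_registration("a@b.co", "x" * PW_MAX) == []
assert validate_registration("a@b.co", "x" * (PW_MAX + 1)) != []
# Negative email case:
assert validate_registration("not-an-email", "x" * 10) == ["invalid email"]
```

Writing the cases in this "singular-result" style (one assertion per case) matches the test-plan format above and makes each case directly scriptable for automation later.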
⁃ Exceptions
⁃     What won't be tested
⁃         Content validation tests 
⁃             News Items
⁃             Settings
⁃             Site Preferences
⁃                 Bookmarks
⁃                 "Loves"
⁃             Search Results
⁃         Database tests beyond validation of storage and updates of user credentials and site preferences
⁃         Integration tests with any supporting platforms
⁃         Any Product Versions embedded in Native Applications (i.e. full "builds" that require end-to-end regression testing not amenable to 1 week Sprints)
⁃ Project Teams: Domains and [Tools]
⁃ Developers Domain
⁃     [Confluence, Jira, Chat/Email Apps, and countless others!]
⁃     Unit Testing
⁃     Load Testing
⁃     Performance Testing
⁃     Session Management (Technical/White Box)
⁃         API (Client/Server) 
⁃     Environment Management
⁃     Version Management
⁃ QA Domain
⁃     [MindMap tools, Confluence, Chat/Email apps, TestRail, TestRun, Jira, Pivotal Tracker, Chrome Developer Tools, Firebug, Charles, Xcode, JUnit, Jenkins, Selenium etc.]
⁃     Test Strategy
⁃         Test Plan
⁃             Test Cases
⁃                 Functional
⁃                 User Session Management (Black/Grey Box)
⁃                 Email Validation
⁃                 Device Management (Black Box)
⁃                 A/B Tests
⁃         Defect/Bug Management
⁃             Scripted Test case execution 
⁃             Exploratory test case execution
⁃             Defect/Bug Fix Validation
⁃             Regression/Smoke Testing
⁃         Test Completion Criteria
⁃             Work with Product Team to identify "core" test cases to validate Exit Criteria
⁃             "Test Runs" (sets of test cases) organized according to priority (Exit Criteria)
⁃                 Which/How many test cases have passed/failed for the most current version in the Sprint?
⁃                     Which processed bugs/defects have been designated for fix in future Sprints?
⁃             Work with Developers to compile testing results for the Backend/API
⁃ Product Domain
⁃     [Confluence, Jira, Calendar apps, Chat/Email apps, etc.]
⁃     Product Specifications (formal, e.g. in a document, or informal, e.g. via Q&A)
⁃         User Flow Expectations
⁃             Registration
⁃             Email Validation
⁃             Login/Logout
⁃             User Session 
⁃             Password Retrieval
⁃         UI Design Coordination
⁃             Desktop Dimensions
⁃             Mobile Dimensions
⁃         Cataloguing of all UI Functionality
⁃             Desktop mouse-clicks and navigation
⁃             Mobile finger-taps and gestures
⁃         Definition of Product Support Levels
⁃             What will/won't be guaranteed Product support
⁃             Basic Technical Requirements
⁃             Technical Support Levels
⁃         Prioritization of Product Needs
⁃             "Must-have" criteria
⁃             "Nice to have" criteria
⁃             Definition of acceptable standards of product quality for various stages of release (Exit Criteria)
⁃                 Defect/Bug Analytics review prior to release
⁃                     Discerning which defects/bugs can be "tolerated" in this iteration, with their fix priority postponed for a future Sprint
⁃                 Review of Test Case Results
⁃                     Which/How many test cases are core and must pass for validation of the release
⁃         Marketing Campaigns
⁃             A/B Testing Projects
⁃     Iteration Planning, Design and Management
⁃         Sprint Planning, Design and Management
⁃ QA Deliverables [format]
⁃ Product Specification Review Notes [Confluence]
⁃     QA Expectations, Assumptions and Questions for Clarification
⁃ Test Strategy [Mindmap and Outline]
⁃ Test Plan [Mindmap and Outline]
⁃     Master Suite of Test Cases ["Singular-result" format for product spec validation and future script automation]
⁃         Sub-suite of Smoke test cases for basic validation of functionality and user flow
⁃ Testing Results [Test Rail/Test Run Reports, Jira tickets, Powerpoint, etc]
⁃     Iteration-Specific
⁃         "Test Runs" of Master test case suite
⁃         "Test Runs" of Smoke test case suites
⁃         Defect/Bug Reports
⁃         Back-End/API Testing Results
⁃         Post-Iteration Notes
⁃     Sprint-Specific
⁃         "Test Runs" of Master test case suite
⁃         "Test Runs" of Smoke test case suites
⁃         Defect/Bug Reports
⁃         Back-End/API Testing Results
⁃         Post-Sprint Notes
⁃ Team Standards and Project Ownership
⁃ Developers
⁃     Organize/Maintain Code Branches
⁃     Devise Code Check-In/Release Schedules
⁃         Arrange special deployment schedules and/or configurations for features/stories requiring successive and sequential Sprints to complete
⁃     Run Unit tests on code ready for check-in
⁃     Build and Maintain Test Environments
⁃         Perform Data Migration when needed for User Testing
⁃         Perform Load Testing 
⁃         Conduct Performance Testing
⁃         Work with White Box QA staff (if available) to automate tasks
⁃         Configuration/Settings Management (if needed)
⁃     Work with Development and QA teams to Manage/Complete Iterations/Sprints
⁃         Defect/Bug Triage Fixes
⁃         Code Release Requirements
⁃         Post-Release Requirements
⁃         Review/Critique Testing Deliverables
⁃         Review all Defect/Bug Reports immediately
⁃         Submit Code Fixes for Defects/Bugs according to Schedule and/or Product's Priority scale and/or Severity
⁃         Update Project Team of any potential Delays to Schedule
⁃ QA 
⁃     Prepare/Maintain Testing Resources
⁃         Testing tools configured properly for testing environment(s)
⁃         User test accounts available, if needed
⁃         Other Assets established, if needed
⁃         Work with Developers to Prepare/Maintain Testing Environment(s)
⁃             Deploy new versions
⁃             Migrate/Configure data 
⁃             Configure parameters for environment stability
⁃     Devise/Maintain Test Strategy, Plan and Cases
⁃         Drafts are begun as soon as Product Specifications and/or Test Builds are Updated/Deployed
⁃         Identify "Core" Test Cases to fulfill Exit Criteria set by Product Team
⁃     Begin Testing as soon as Test Environment is deemed fit and stable
⁃     Prioritize defects/bugs found according to Severity
⁃         P1/Showstopper
⁃             Bug/Defect causes App to "crash" or "hang"; data and/or content is permanently corrupted or lost; central user flow is blocked/disrupted, without simple workaround options
⁃         P2/Major
⁃             Bug/Defect causes temporary corruption/loss of data and/or content; partially blocks a user flow, but simple workaround options are available
⁃         P3/Average
⁃             Bug/Defect causes an error or unexpected result with any primary functionality; temporarily disrupts a user flow, without blocking it in any way; distorts the display of data and/or content
⁃         P4/Minor
⁃             Bug/Defect is a cosmetically unappealing or inaccurate display of design; unexpected, but acceptable results for all non-essential functionality
⁃     Keep Developer and Product Teams abreast of testing results per current version, with high severity issues prioritized 
⁃     Verify Fixes to Defects/Bugs ASAP
⁃         Perform Smoke tests to ensure no further regressions in surrounding areas
⁃     Work with Developers to help with/automate White Box Testing
⁃     Work with Development and Product Teams to Manage/Complete Iterations/Sprints
⁃         Report/Escalate any Regressions in New Builds
⁃             Discuss Code Reversion Strategies to meet Sprint goals
⁃                 Adjust Test Plan accordingly for new set of "core" test cases
⁃         Update Project Team of any potential Delays to Schedule
⁃         Defect/Bug Triage Testing and Fix Verification
⁃     Exit Criteria Validation Requirements
⁃         Confirm passage of all core test cases and defect/bug fix verification
⁃         Regression/Smoke Testing (on Staging Environment, if available)
⁃         QA Team "Signs off" on final validation of Exit Criteria
⁃     Post-Release Requirements
⁃         Smoke Testing on Production Environment
⁃         Sprint/Iteration Analytics Reporting
⁃             Code Change Overview
⁃             Stories Completed
⁃             Test Cases Passed/Failed
⁃             Defects/Bugs Found/Fixed
⁃             Performance and Load testing review
⁃             Automation results/gains (if applicable)
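The P1–P4 severity definitions under QA ownership above lend themselves to a small triage helper that maps observed symptoms onto the scale. The decision order below is one illustrative reading of those definitions, not a formal spec.

```python
from enum import IntEnum

class Severity(IntEnum):
    P1_SHOWSTOPPER = 1  # crash/hang, permanent data loss, blocked core flow
    P2_MAJOR = 2        # temporary loss, partial block with simple workaround
    P3_AVERAGE = 3      # wrong result in primary functionality, flow disrupted
    P4_MINOR = 4        # cosmetic issue, acceptable unexpected behavior

def triage(blocks_core_flow, data_loss_permanent, has_workaround, cosmetic_only):
    """Map symptoms onto the P1-P4 scale described in the plan.
    Illustrative decision order, not a formal specification."""
    if cosmetic_only:
        return Severity.P4_MINOR
    if data_loss_permanent or (blocks_core_flow and not has_workaround):
        return Severity.P1_SHOWSTOPPER
    if blocks_core_flow and has_workaround:
        return Severity.P2_MAJOR
    return Severity.P3_AVERAGE

assert triage(True, False, False, False) is Severity.P1_SHOWSTOPPER
assert triage(True, False, True, False) is Severity.P2_MAJOR
assert triage(False, False, False, False) is Severity.P3_AVERAGE
assert triage(False, False, False, True) is Severity.P4_MINOR
```

Encoding the scale this way keeps triage consistent across testers and makes the severity field in bug reports (e.g. Jira) mechanically checkable.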
⁃ Product
⁃     Devise/Maintain Product Specifications
⁃     Review/Critique Testing Deliverables
⁃     Set Schedules for Iterations and Sprints
⁃     Assign Prioritization rating scale (e.g. P1-P4) to "must haves" and "nice to haves"
⁃     Work with Development and QA teams to Manage/Complete Iterations/Sprints
⁃         Update Project Team of any potential Delays to Schedule
⁃         Work with other Project Managers to coordinate "Project Verticals" and their respective place on the Product Roadmap
⁃     Schedule/Facilitate Iteration/Sprint Planning meetings
⁃         Iteration Planning "Kickoffs"
⁃         Daily Stand-ups (if needed)
⁃         Post-Release Review (if needed)
⁃ Schedules
⁃ Iteration Planning
⁃     Product Roadmap and/or Story/Bug/Task Backlog are parsed into "sets" or Iterations to deliver new features, upgrades, and bug fixes
⁃         Product Team works with Development and QA Teams to write necessary stories/tasks, preliminary development and testing plans
⁃         Iterations are composed of X number of 1-week Sprints, as devised/agreed upon by representatives of the Development (including IT), Product and QA teams 
⁃             Sprint Planning
⁃                 Development tasks are parsed as such so that they can be integrated into a stable version (i.e. pass unit tests), and can be tested and verified within 5 days
⁃                     Any Development work to be released in specific sequence must be organized for testing and release in the desired sequential order respective to Sprint duration
⁃                 Story Backlog is parsed according to Product's Roadmap, with selections estimated just above the capacity for a one week Sprint, in case tasks are finished early and more throughput is possible
⁃                     Singular Stories and their associated tasks Not Linked to other work will be prioritized
⁃                         Unfinished stories/tasks will be moved to future Sprints
⁃                     Interdependent Stories and tasks will be parsed as part of sets, spread out over multiple Sprints
⁃                         "Feature" Sprints composed of only interdependent stories/tasks that don't meet Exit Criteria will be postponed to the next week's Release date
⁃                 Pre-existing defects affecting relevant functionality identified and accounted for
⁃                 Possible gains from Automation (if applicable)
⁃ Sprint Execution
⁃     QA Team begins first cycle of testing, executing all core test cases 
⁃         Failed core test cases are reported as defects/bugs with critical or major severity
⁃             Any regressions to existing functionality are reported as defects/bugs with critical or major severity
⁃         Developer fixes to defects/bugs checked-in and deployed are verified immediately
⁃             Core test cases are re-run to pre-validate Exit Criteria
⁃                 Automation of tasks applied appropriately (when applicable)
⁃     QA Team begins second cycle of testing, executing all supplemental tests cases
⁃         Any regressions to existing functionality are reported as defects/bugs with critical or major severity
⁃         Failed supplemental test cases are reported as defects/bugs with average or minor severity
⁃         Developer fixes to defects/bugs checked-in and deployed are verified immediately
⁃             All test cases are re-run to validate Exit Criteria
⁃                 Automation of tasks applied appropriately (when applicable)
⁃     QA Team runs final Smoke Tests for build validation
⁃         Any defects/bugs found will be communicated with Product and Development Teams for immediate consideration with regards to the Quality of the Release
⁃             Defects/bugs deemed to be regressions or failed core test cases may determine that the Release date must be moved to the next Sprint
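The Sprint Execution gate described above — every core test case must pass before Exit Criteria are validated — can be sketched as a small summarizer over a test run. The result format and field names below are illustrative, not a TestRail/TestRun export format.

```python
# Sketch of the exit-criteria check described above: a "test run" is a list
# of (case_id, is_core, passed) results; the release gate requires every
# core case to pass. Field names are illustrative.

def exit_criteria_met(results):
    """True only if all core test cases in the run passed."""
    return all(passed for _, is_core, passed in results if is_core)

def summarize(results):
    """Roll a test run up into the stats reported to Product/Development."""
    passed = sum(1 for *_, p in results if p)
    return {"total": len(results), "passed": passed,
            "failed": len(results) - passed,
            "release_ok": exit_criteria_met(results)}

run = [
    ("login-positive", True, True),
    ("login-negative", True, True),
    ("feed-bookmark", False, False),  # supplemental failure -> P3/P4 bug
]
print(summarize(run))  # release_ok is True: all core cases passed
```

A failed supplemental case lowers the pass rate but does not block the release; a single failed core case flips `release_ok` to `False` and moves the release to the next Sprint, as described above.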
⁃ Product Release
⁃     After QA Validation of the Sprint, the changes are Released to the Production Environment
⁃         QA Team will perform Smoke Tests on Production Environment to validate the Release
⁃             Automation of tasks applied appropriately (when applicable)
⁃ Post-Sprint Analysis (Optional)
⁃     QA Team prepares Sprint Release Notes 
⁃         Test Strategy Success
⁃         Test Plan Review
⁃             Core Test Case Stats
⁃             Supplemental Test Case Stats
⁃             Defect/Bug Notes
⁃             Automation Stats (if available)
⁃         Outstanding Issues and Implications for Future Sprints
⁃     All Teams reconvene to discuss Sprint Release Notes 
⁃         Team Velocity is calculated
⁃         Lessons Learned/Suggestions for Future Sprints are recorded
⁃         Product Team presents Status Report on Team Velocity
⁃ Post-Iteration Analysis (Optional)
⁃     QA Team prepares Iteration Release Notes 
⁃         Test Strategy Success
⁃         Test Plan Review
⁃             Core Test Case Stats
⁃             Supplemental Test Case Stats
⁃             Defect/Bug Notes
⁃             Automation Stats (if available)
⁃         Outstanding Issues and Implications for Future Iterations
⁃     All Teams reconvene to discuss Iteration Release Notes 
⁃         Team Velocity is calculated
⁃         Lessons Learned/Suggestions for Future Iterations are recorded
⁃         Product Team presents Status Report on Product Roadmap
⁃ Risks
⁃ Tight Testing Schedule
⁃     Easily Disrupted by Delays
⁃         Design/Product task delays
⁃         Developer task delays
⁃         Operations task delays
⁃             Delays in stabilizing Test Environment
⁃             Delays in needed Data Migration/Setup
⁃     Subject to Changes to Product Requirements
⁃     Extended by Automation of Test Scripts 
⁃         Early Time added can result in Time gained in future Sprints/Iterations
⁃     Option: Inserting "buffer" time for delays and adjustments into Sprints can help in meeting release goals
⁃ Resources
⁃     Each Team has necessary members available to fulfill scope for Sprint
⁃     Each Team Member has tools necessary to complete tasks and functions
⁃ Defects
⁃     Pre-existing defects must be identified and accounted for in Sprint planning
⁃     Critical and Major Defects must be identified/resolved immediately
⁃     All Regressions to existing functionality must be identified/resolved in timely manner
⁃ Unforeseen Circumstances
⁃     "Natural Disasters"
⁃ Human Error
⁃     Miscalculations
⁃     Misreporting
⁃     Misinterpretation
⁃     Plain old mistakes :)

@mzilich

mzilich commented Aug 4, 2016

newsapp sample test mindmap
