Automation Testing

Automation testing is given high importance in Scrum teams. Testers devote time to creating, executing, monitoring, and maintaining automated tests and their results. As changes can occur at any time in Scrum projects, testers need to accommodate testing of the changed features as well as the regression testing involved. Automation testing helps manage the test effort associated with these changes. Automated tests at all levels facilitate achieving continuous integration, and they run much faster than manual tests at no additional effort.
Manual testing can then focus more on exploratory testing, product vulnerability, and defect prediction.

Automation of Testing Activities

Automation of testing activities reduces the burden of repeated work and results in cost savings. The following activities can be automated −
  • Test Data Generation
  • Test Data Loading
  • Build Deployment into Test Environment
  • Test Environment Management
  • Data Output Comparison

Regression Testing

In a sprint, testers test the code that is new or modified in that sprint. However, they also need to ensure that the code developed and tested in earlier sprints still works along with the new code. Hence, regression testing is given importance in Scrum. Automated regression tests are run as part of continuous integration.

Configuration Management

A Configuration Management System that uses automated build and test frameworks is used in Scrum projects. It allows static analysis and unit tests to be run repeatedly as new code is checked into the Configuration Management System, and it manages continuous integration of the new code with the rest of the system. Automated Regression Tests are run during Continuous Integration.
Manual Test Cases, Automated Tests, Test Data, Test Plans, Test Strategy and other Testing Artifacts need to be version controlled, with the relevant access permissions ensured. This can be accomplished by maintaining the Testing Artifacts in the Configuration Management System.

Agile Testing Practices

Testers in a Scrum Team can follow the following Agile Practices −
  • Pairing − Two Team Members sit together and work collaboratively. The two people can be two Testers or one Tester and one Developer.
  • Incremental Test Design − Test Cases are developed incrementally as the Sprints progress and User Stories are added.

Agile Metrics

During software development, collection and analysis of metrics help in improving the process and thereby achieving better productivity, quality deliverables and customer satisfaction. In Scrum-based development, this is possible, and testers have to pay attention to the metrics that are relevant to them.
Several metrics are suggested for Scrum development. The significant metrics are −
  • Ratio of Successful Sprints − (Number of successful Sprints / Total number of Sprints) * 100. A successful Sprint is one in which the Team could meet its commitment.
  • Velocity − A Team's Velocity is the number of Story Points the Team earned during a Sprint. Story Points are the measure of the User Stories arrived at during estimation.
  • Focus Factor − (Velocity / Team's Work Capacity) * 100. Focus Factor is the percentage of the Team's effort that results in finished stories. (A short calculation sketch follows this list.)
  • Estimation Accuracy − (Estimated effort / Actual effort) * 100. Estimation Accuracy is the Team's ability to estimate effort accurately.
  • Sprint Burndown − Work (in Story Points or in hours) that is remaining vs. work that should ideally be remaining (as per the estimation).
    • If the remaining work is more, it means that the Team has taken up more work than it can do.
    • If it is less, it means the Team did not estimate accurately.
  • Defect Count − Number of defects found in a Sprint, tracked against the backlog.
  • Severity of Defects − Defects can be categorized as minor, major and critical as per their severity. Testers can define the categorization.
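As an illustration of how the percentage metrics work out, here is a minimal sketch in Java; the Sprint figures are invented for the example.

public class SprintMetrics {
    public static void main(String[] args) {
        // Invented example figures for a single Sprint.
        double velocity = 24;            // Story Points earned in the Sprint
        double workCapacity = 30;        // Team's Work Capacity in Story Points
        double estimatedEffort = 160;    // estimated effort in hours
        double actualEffort = 180;       // actual effort in hours

        double focusFactor = (velocity / workCapacity) * 100;               // percentage of effort yielding finished stories
        double estimationAccuracy = (estimatedEffort / actualEffort) * 100; // how close the estimate was

        System.out.printf("Focus Factor: %.1f%%%n", focusFactor);               // 80.0%
        System.out.printf("Estimation Accuracy: %.1f%%%n", estimationAccuracy); // 88.9%
    }
}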

Sprint Retrospectives

In Sprint Retrospectives, all the team members participate. They share −
  • The things that went well
  • Metrics
  • The scope for improvements
  • Action items to apply

Agile Testing - Methods

In Agile Testing, the commonly used testing methods come from traditional practice and are aligned to the principle − Test Early. The Test Cases are written before the code is written. The emphasis is on defect prevention, detection, and removal by running the right test types at the right time and at the right level.
In this chapter, you will get an understanding of the methods −
  • Test Driven Development (TDD)
  • Acceptance Test Driven Development (ATDD)
  • Behavior Driven Development (BDD)

Test Driven Development

In the Test Driven Development (TDD) method, the code is developed based on the test-first approach, directed by automated test cases. A test case is written first to fail; code is then developed to make the test pass. The method is repeated, and refactoring is done as the code develops.
TDD can be understood with the help of the following steps −
  • Step 1 − Write a Test case to reflect the expected behavior of the functionality of the code that needs to be written.
  • Step 2 − Run the test. The test fails as the code is still not developed.
  • Step 3 − Develop code based on the test case.
  • Step 4 − Run the test again. This time, the test has to pass as the functionality is coded. Repeat Step (3) and Step (4) till the test passes.
  • Step 5 − Refactor the code.
  • Step 6 − Run the test again to ensure it passes.
Repeat Step 1 – Step 6, adding test cases for each new piece of functionality. The added tests and the earlier tests are run every time to ensure the code is running as expected. To make this process fast, the tests are automated.
The tests can be at unit, integration or system level. Constant communication between testers and developers needs to be ensured.
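A minimal sketch of this cycle, assuming JUnit; the PriceCalculator class and its discountedPrice method are hypothetical names used only for illustration.

// Steps 1-2: write the test first; it fails because PriceCalculator does not exist yet.
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PriceCalculatorTest {
    @Test
    public void appliesTenPercentDiscountAboveHundred() {
        PriceCalculator calculator = new PriceCalculator();
        // Expected behavior is captured before the code is written.
        assertEquals(90.0, calculator.discountedPrice(100.0), 0.001);
    }
}

// Steps 3-4: write just enough code to make the test pass, then re-run the test.
class PriceCalculator {
    double discountedPrice(double amount) {
        // Step 5 (refactor) would clean this up while keeping the test green.
        return amount >= 100.0 ? amount * 0.9 : amount;
    }
}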

Acceptance Test Driven Development

In the Acceptance Test Driven Development (ATDD) method, the code is developed based on the test-first approach directed by Acceptance Test Cases. The focus is on the acceptance criteria and the Acceptance Test Cases written by the testers during User Story creation in collaboration with the customer, end users and relevant stakeholders. The steps are listed below, followed by a small illustrative sketch.
  • Step 1 − Write Acceptance Test Cases along with user stories in collaboration with the customer and users.
  • Step 2 − Define the associated acceptance criteria.
  • Step 3 − Develop code based on the acceptance tests and acceptance criteria.
  • Step 4 − Run the acceptance tests to ensure that the code is running as expected.
  • Step 5 − Automate the acceptance tests. Repeat Step 3 – Step 5 until all the user stories in the iteration are implemented.
  • Step 6 − Automate the regression tests.
  • Step 7 − Run the automated Regression Tests to ensure Continuous Regression.
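A minimal sketch of Step 1 – Step 4 for a hypothetical user story ("As a shopper, I see my order total including shipping"), assuming JUnit; the OrderService class and the shipping rule are invented for illustration.

// Acceptance criterion agreed with the customer during User Story creation:
// orders under 50.00 add a flat 5.00 shipping fee; orders of 50.00 or more ship free.
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class OrderTotalAcceptanceTest {
    @Test
    public void ordersUnderFiftyIncludeFlatShippingFee() {
        assertEquals(45.00, new OrderService().totalWithShipping(40.00), 0.001);
    }

    @Test
    public void ordersOfFiftyOrMoreShipFree() {
        assertEquals(60.00, new OrderService().totalWithShipping(60.00), 0.001);
    }
}

// Code developed to satisfy the acceptance tests and acceptance criteria.
class OrderService {
    double totalWithShipping(double orderAmount) {
        return orderAmount < 50.00 ? orderAmount + 5.00 : orderAmount;
    }
}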

Behavior Driven Development (BDD)

Behavior Driven Development (BDD) is similar to Test Driven Development (TDD); the focus is on testing the code to ensure the expected behavior of the system.
In BDD, a language such as English is used so that it makes sense to the users, testers and developers. It ensures −
  • Continuous communication among the users, testers and developers.
  • Transparency on what is being developed and tested.

Agile Testing - Techniques

Testing Techniques from traditional testing can also be used in Agile testing. In addition to these, Agile specific testing techniques and terminologies are used in the Agile projects.

Test Basis

In Agile projects, the product backlog replaces the requirements specification documents. The contents of the product backlog are normally user stories. Non-functional requirements are also taken care of in the user stories. Thus, the test basis in Agile projects is the user story.
To ensure quality testing, the following can also be considered additionally as test basis −
  • Experience from previous iterations of the same project or past projects.
  • Existing functions, architecture, design, code, and quality characteristics of the system.
  • Defect data from the current and past projects.
  • Customer feedback.
  • User documentation.

Definition of Done

The Definition of Done (DoD) is the criteria that is used in Agile projects to ensure completion of an activity in the Sprint backlog. DoD can vary from one Scrum team to another, but it should be consistent within one team.
DoD is a checklist of necessary activities that ensure implementation of the functions and features in a user story, along with the non-functional requirements that are part of the user story. A user story reaches the Done stage after all the items in the DoD checklist are accomplished. The DoD is shared across the team.
A typical DoD for a user story can contain −
  • Detailed Testable Acceptance Criteria
  • Criteria to ensure consistency of the User Story with the others in the Iteration
  • Specific Criteria related to the Product
  • Functional Behavior Aspects
  • Non-functional characteristics
  • Interfaces
  • Test Data Requirements
  • Test Coverage
  • Refactoring
  • Review and Approval Requirements
In addition to the DoD for User Stories, DoD is also required −
  • at every Level of Testing
  • for each Feature
  • for each Iteration
  • for Release

Test Information

A tester needs to have the following Test information −
  • User Stories that need to be tested
  • Associated Acceptance Criteria
  • System Interfaces
  • Environment where the System is expected to Work
  • Tools availability
  • Test Coverage
  • DoD
In Agile projects, as testing is not a sequential activity and testers are supposed to work in a collaborative mode, it is the tester’s responsibility to −
  • Obtain necessary test information on an ongoing basis.
  • Identify the information gaps that affect testing.
  • Resolve the gaps collaboratively with other team members.
  • Decide when a test level is reached.
  • Ensure that appropriate tests are executed at relevant times.

Functional and Non-Functional Test Design

In Agile projects, the traditional testing techniques can be used, but the focus is on early testing. Test cases need to be in place before the implementation starts.
For Functional test design, the testers and developers can use the traditional Black Box test design techniques such as −
  • Equivalence Partitioning
  • Boundary Value Analysis
  • Decision Tables
  • State Transition
  • Classification Tree
For non-functional test design, as the non-functional requirements are also a part of each user story, the same black-box test design techniques can be used to design the relevant test cases.
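For example, Boundary Value Analysis for a field that accepts ages 18 to 60 exercises the values just below, on, and just above each boundary. A minimal sketch, assuming JUnit; isEligibleAge is a hypothetical function under test.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class AgeBoundaryTest {
    // Hypothetical function under test: valid ages are 18 to 60 inclusive.
    static boolean isEligibleAge(int age) {
        return age >= 18 && age <= 60;
    }

    @Test
    public void boundaryValuesAroundLowerAndUpperLimits() {
        int[] ages         = {17,    18,   19,   59,   60,   61};
        boolean[] expected = {false, true, true, true, true, false};
        for (int i = 0; i < ages.length; i++) {
            assertEquals("age " + ages[i], expected[i], isEligibleAge(ages[i]));
        }
    }
}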

Exploratory Testing

In Agile projects, time is often the limiting factor for Test Analysis and Test Design. In such cases, Exploratory Testing techniques can be combined with the traditional testing techniques.
Exploratory Testing (ET) is defined as simultaneous learning, test design and test execution. In Exploratory Testing, the tester actively controls the design of the tests as they are performed and uses the information gained while testing to design new and better tests.
Exploratory Testing comes in handy for accommodating changes in Agile projects.

Risk-Based Testing

Risk-based testing is testing based on the risk of failure; it mitigates the risks using appropriate test design techniques.
A Product quality risk can be defined as a potential problem with product quality. Product quality risks include −
  • Functional risks
  • Non-functional performance risks
  • Non-functional usability risks
Risk analysis is to be done to evaluate the probability (likelihood) and impact of each risk. Then, the risks are prioritized −
  • High Risks require Extensive Testing
  • Low Risks require only Cursory Testing
Tests are designed using appropriate Test Techniques based on the Risk Level and Risk Characteristic of each Risk. Tests are then executed to mitigate the Risks.
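A common way to prioritize is to score each risk as likelihood times impact and test the highest-scoring risks most thoroughly. A minimal sketch in Java; the risk items and scores are invented for illustration.

import java.util.*;

public class RiskPrioritization {
    // Likelihood and impact are each scored 1 (low) to 5 (high).
    record Risk(String name, int likelihood, int impact) {
        int score() { return likelihood * impact; }
    }

    public static void main(String[] args) {
        List<Risk> risks = new ArrayList<>(List.of(
            new Risk("Payment calculation wrong", 3, 5),
            new Risk("Slow search on large catalog", 4, 3),
            new Risk("Minor layout glitch on help page", 2, 1)));

        // Highest score first: these get extensive testing, the tail gets cursory testing.
        risks.sort(Comparator.comparingInt(Risk::score).reversed());
        risks.forEach(r -> System.out.println(r.score() + "  " + r.name()));
    }
}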

Fit Tests

Fit Tests are automated Acceptance Tests. The Tools Fit and FitNesse can be used for automating acceptance tests.
Fit uses JUnit but extends the testing functionality. HTML tables are used to express the test cases. A fixture is a Java class behind the HTML table. The fixture takes the contents of the HTML table and runs the test cases against the project being tested.
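The classic example of this pattern is a ColumnFixture: the HTML table has input columns and a computed column, and the fixture class exposes matching fields and a method. A minimal sketch of such a fixture; the column names are illustrative.

// Fixture behind an HTML table with the columns: numerator | denominator | quotient()
// Fit fills numerator and denominator from each row and compares quotient() with the expected value.
import fit.ColumnFixture;

public class Division extends ColumnFixture {
    public double numerator;
    public double denominator;

    public double quotient() {
        return numerator / denominator;
    }
}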

Agile Testing - Workproducts

The Test Plan is prepared at the time of Release Planning and is revised at every Sprint Planning. The Test Plan acts as a guide to the testing process in order to achieve complete test coverage.
Typical Contents of a Test Plan are −
  • Test Strategy
  • Test Environment
  • Test Coverage
  • Scope of Testing
  • Test Effort and Schedule
  • Testing Tools
In Agile Projects, all the Team Members are accountable for the quality of the product. Hence, everyone participates in test planning as well.
A tester's responsibility is to provide the necessary direction and to mentor the rest of the team with their testing expertise.

User Stories

User Stories are not testing work products as such. However, in Agile projects, testers participate in User Story creation. Testers write User Stories that bring value to the customer and cover the different possible behaviors of the system.
Testers also ensure that all the User Stories are testable and that the Acceptance Criteria are defined.

Manual and Automated Tests

During the first run of Testing, Manual Tests are used. They include −
  • Unit Tests
  • Integration Tests
  • Functional Tests
  • Non-Functional Tests
  • Acceptance Tests
The Tests are then automated for subsequent runs.
In Test Driven Development, Unit Tests are written first to fail, Code is developed and tested to ensure the Tests pass.
In Acceptance Test Driven Development, Acceptance Tests are written first to fail, Code is developed and tested to ensure the Tests pass.
In other Development methods, the Testers collaborate with the rest of the Team to ensure Test Coverage.
In all the types of methods, Continuous integration takes place, which includes continuous integration testing.
The team can decide when and what tests are to be automated. Even if automation of the tests requires effort and time, the resulting automated tests significantly reduce the repetitive testing effort and time during the iterations of the Agile Project. This in turn facilitates the team to pay more attention to the other required activities, such as new User Stories, Changes, etc.
In Scrum, the iterations are time-boxed. Hence, if testing of a User Story cannot be completed in a particular Sprint, the tester can report in the daily standup meeting that the User Story cannot reach the Done status within that Sprint and therefore needs to be carried over to the next Sprint.

Test Results

As most of the Testing in Agile Projects is automated, the Tools generate the necessary Test Results Logs. Testers review the Test Results Logs. The test results need to be maintained for each sprint / release.
A Test Summary can also be prepared that contains −
  • Testing Scope (What was tested and what was not tested)
  • Defect Analysis along with Root Cause Analysis if possible
  • Regression Testing Status after Defect Fixes
  • Issues and the corresponding Resolution
  • Pending Issues, if any
  • Any modifications required in Test Strategy
  • Test Metrics

Test Metrics Reports

In Agile Projects, the Test Metrics include the following for each Sprint −
  • Test Effort
  • Test Estimation Accuracy
  • Test Coverage
  • Automated Test Coverage
  • No. of Defects
  • Defect Rate (No. of Defects per User Story Point)
  • Defect Severity
  • Time to Fix a Defect in the same Sprint (It costs 24x as much to fix a bug that escapes the current sprint)
  • No. of Defects fixed in the same Sprint
  • Completion of Acceptance Testing by Customer within the Sprint

Sprint Review and Retrospective Reports

Testers also contribute to the Sprint Review and Retrospective Reports. The typical contents are −
  • Test Metrics
  • Test Result Logs review results
  • What went right and what can be improved from Testing Point of View
  • Best Practices
  • Lessons Learned
  • Issues
  • Customer Feedback

Agile Testing - Kanban

Agile Testing activities can be managed effectively using Kanban concepts. The following ensure that testing is completed in time within an iteration / sprint, and thus help focus on the delivery of a quality product.
  • User Stories that are testable and effectively sized result in development and testing within the specified time limits.
  • A WIP (Work-In-Progress) limit allows the team to focus on a limited number of user stories at a time.
  • A Kanban board that represents the workflow visually helps to track the testing activities and bottlenecks, if any.
  • The Kanban team collaboration concept lets bottlenecks be resolved as they are identified, without wait time.
  • Preparation of Test Cases upfront, maintaining the test suite as the development progresses and obtaining Customer Feedback helps in eliminating Defects within the iteration / sprint.
  • Definition of Done (DoD) is said to be Done-Done in the sense that a Story reaches a completion state only after the testing is also complete.

Testing Activities in Product Development

In Product development, the releases can be tracked with feature Kanban board. Features for a particular release are assigned to the Feature Kanban board that tracks the feature development status visually.
The Features in a release are broken into stories and developed within the release using an agile approach.
The following Agile Testing activities ensure quality delivery in every release and at the end of all releases as well −
  • Testers participate in User Story Creation and thus ensure −
    • All the possible Behaviors of the System are captured by means of User Stories and the Non-functional Requirements that are part of the User Stories.
    • User Stories are Testable.
    • The size of the User Stories allows Development and Testing to be completed (Done-Done) within the Iteration.
  • Visual Task Kanban Board −
    • Depicts the status and progress of the Tasks
    • Bottlenecks are identified immediately as they occur
    • Facilitates measuring the cycle time, which can then be optimized
  • Team Collaboration helps in −
    • Accountability of the entire Team for Quality product
    • Resolution of bottlenecks as and when they occur, saving on wait time
    • Contribution of every expertise in all the activities
  • Continuous Integration that focuses on Continuous Integration Testing
  • Automation of Tests to save on Testing Effort and Time
  • Defect Prevention with Test Cases written prior to Development, and by mentoring the Developers on what is anticipated for the different behaviors of the System
  • WIP Limit to focus on a limited number of User Stories at a time
  • Continuous Testing as the Development progresses, to ensure Defect Fixes within the Iteration −
    • Ensure Test Coverage
    • Keep the Open Defects Count Low

Story Exploration

Story Exploration is the communication within the Agile team to explore and understand a Story when the Product Owner passes it on for acceptance for development.
The Product Owner comes up with the story based on the functionality expected from the system. The developers explore each story further before they mark it ready for acceptance. Testers also participate in the communication from a testing perspective to make the story as testable as possible.
Finalization of the Story is based on constant and continuous communication among the Product Owner, Developers and Testers.

Estimation

Estimation happens in Release Planning and each Iteration Planning.
In Release Planning, the testers provide −
  • Information on what testing activities are required
  • Effort Estimation for the same
In Iteration planning, the testers contribute to deciding on what and how many stories can be included in an iteration. The decision depends on the Test Effort and Test Schedule Estimation. The Story Estimation reflects the test estimation as well.
In Kanban, Done-Done is accomplished only when a story is developed and tested and marked as complete without defects.
Hence, Test Estimation plays a major Role in story estimation.

Story Planning

Story Planning begins after a Story has been estimated and assigned to current Iteration.
Story Planning includes the following test tasks −
  • Prepare Test Data
  • Extend Acceptance Tests
  • Execute Manual Tests
  • Conduct Exploratory Testing sessions
  • Automate Continuous Integration Tests
In addition to these Testing Tasks, other tasks also may be required, such as −
  • Performance Testing
  • Regression Testing
  • Updates of related Continuous Integration Tests

Story Progression

Story Progression uncovers additional tests that are required, as a result of continuous communication between the developers and testers. In situations where the developers need more clarity on the implementation, testers perform exploratory testing.
Continuous Testing is performed during Story Progression and includes Continuous Integration Testing. The entire team participates in the testing activities.

Story Acceptance

Story Acceptance occurs when the story reaches the Done-Done state, i.e., the story is developed, tested and signaled as complete.
Story testing is said to be completed when all the tests relevant to the story pass or the required level of test automation is met.

Agile Testing - Tools

In Agile Projects, Testers are responsible for the following daily tasks −
  • Support the developers in coding, with clarifications on the expected behavior of the system.
  • Help developers in creating effective and efficient unit tests.
  • Develop automation scripts.
  • Integrate automation testing tools / scripts with continuous integration for regression testing.
For an effective and fast implementation of these tasks, a Continuous Integration (CI) system that supports continuous integration of the code and test components is used in most Agile projects.
The testers and the developers in agile projects can benefit from various tools to manage testing sessions and to create and submit Defect reports. In addition to specialized tools for agile testing, agile teams can also benefit from test automation and test management tools.
Note − Record-and-playback, test-last, heavyweight test automation solutions are not Agile, as −
  • The test-last workflow encouraged by such tools does not work for Agile teams.
  • The unmaintainable scripts created with such tools become an impediment to change.
  • Such specialized tools create a need for test automation specialists and thus foster silos.
The Tools that are widely used are −
  • Hudson − CI Framework
  • Selenium − Functional Testing; integrated with Hudson
  • CruiseControl − CI Framework
  • JUnit − Java Unit Testing
  • NUnit − .NET Unit Testing
  • Cobertura / JavaCodeCoverage / JFeature / JCover − Java Test Coverage
  • Jester − Java Mutation Testing / Automated Error Seeding
  • Gretel − Java Test Coverage Monitoring Tool
  • TestCocoon − C/C++ or C#; reduces the number of Tests by finding redundant Tests and finds Dead Code
  • JAZZ − Java; Branch, Node, and Defuse Coverage; implements a GUI, Test Planners, Dynamic Instrumentation, and a Test Analyzer
  • Ant − Java Automation Build
  • NAnt − .NET Automation Build
  • Bonfire − Agile Testing add-on for JIRA
Agile Test Automation Tools

Effective Agile test automation tools support −
  • Early test automation using a test-first approach.
  • Writing test automation code using real languages and domain-specific languages.
  • Focusing on the expected behavior of the system.
  • Separating the essence of the Test from the implementation details, thus making it Technology independent.
  • Fostering Collaboration.
Automated Unit Tests (using JUnit or NUnit) support the test-first approach for coding. These are white-box tests that ensure the design is sound and that there are no defects. Such tests are built by developers with support from testers, and can be independent of the functionality that is required. This can result in delivering a product that does not meet customer requirements and hence has no business value.
This concern is addressed by automating Acceptance Tests that are written with collaboration of customer, other stakeholders, testers and developers. The automated Acceptance Tests are written by the customers or product owners/business analysts reflecting the expected behavior of the product. The developers’ involvement ensures the production of code as per the requirements. However, if the testing is focused only on acceptance, the resulting code may remain non-extensible.
Thus, Automated Unit Tests and Automated Acceptance Tests are complementary and both are needed in Agile development.
Agile Tools and Frameworks that support Automated Acceptance Testing are −
  • Fit
  • Fitnesse
  • Concordion
  • Ruby
  • Cucumber

Fit

Ward Cunningham developed the tool Fit that can be used for Acceptance Test Automation. Fit allows −
  • Customers or Product Owners to give examples of product behavior using Microsoft Word and Microsoft Excel
  • Programmers to easily turn those examples into automated tests.
Fit 1.1 supports both Java and .NET.

FitNesse

FitNesse is a wiki, which is a style of web server that allows any visitor to make any edits, including changing existing pages and creating new pages. A simple markup language lets you easily create headings, make text bold, underline, and italic, create bulleted lists, and do other kinds of simple formatting.
In FitNesse, Acceptance Test Automation is as follows −
  • Express tests as tables of input data and expected output data.
  • Use FitNesse to put the test table on the page that you can edit.
    • Alternatively, put the test table in Microsoft Excel, copy to clipboard and then use the Spreadsheet to FitNesse command to have FitNesse format your table properly
  • Run the test
  • You get the test results by color coding of the cells in the test table
    • green cells represent that the expected values are obtained
    • red cells represent that a different value than what you expected is obtained
    • yellow cells represent that an exception was thrown

Cucumber

Cucumber is a tool based on the Behavior Driven Development (BDD) framework; a short step-definition sketch follows the feature list below. The key features are −
  • Is used to write acceptance tests for web applications.
  • Allows automation of functional validation in an easily readable and understandable format, such as plain English.
  • Was originally implemented in Ruby and later extended to the Java framework. Both support JUnit.
  • Supports other languages such as Perl, PHP, Python, .NET, etc.
  • Can be used along with Selenium, Watir, Capybara, etc.
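As a minimal sketch, a plain-English Gherkin scenario is bound to Java step definitions through annotations (Cucumber-JVM style); the calculator scenario and class names are invented for illustration.

// The feature file (e.g. addition.feature) would contain plain-English steps such as:
//   Scenario: Add two numbers
//     Given I have entered 4 and 5
//     When I add them
//     Then the result should be 9
import io.cucumber.java.en.Given;
import io.cucumber.java.en.When;
import io.cucumber.java.en.Then;
import static org.junit.Assert.assertEquals;

public class AdditionSteps {
    private int first, second, result;

    @Given("I have entered {int} and {int}")
    public void i_have_entered(int a, int b) {
        first = a;
        second = b;
    }

    @When("I add them")
    public void i_add_them() {
        result = first + second;
    }

    @Then("the result should be {int}")
    public void the_result_should_be(int expected) {
        assertEquals(expected, result);
    }
}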