Different types of Software Testing

Introduction

Testing is the process of executing a program with the aim of finding errors. To perform well, software should contain as few defects as possible. Successful testing uncovers defects so they can be fixed, though it can never prove their complete absence. Different types of software testing are used depending on the application under test: web-based, mobile-based, or API-based.

Principles of Testing

  • All tests should trace back to customer requirements.
  • Testing should be performed by an independent third party to keep it objective.
  • Exhaustive testing is not possible; instead, the optimal amount of testing is decided based on a risk assessment of the application.
  • All tests should be planned before they are implemented.
  • Testing follows the Pareto rule (80/20 rule), which states that roughly 80% of errors come from 20% of program components.
  • Start testing with small parts and extend it to larger parts.


Black Box Testing

Black Box testing examines an application without peering into its internal structures or workings.

Grey Box Testing

Grey Box testing is done when a tester partially knows the internal structure of an application, such as the algorithms used.

White Box Testing

White Box testing tests the internal structures or workings of an application.
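As a minimal sketch of the white-box idea: the test author knows the internal branches of the code and writes one case per branch. The `apply_discount` function below is hypothetical, invented for this illustration.

```python
def apply_discount(total: float) -> float:
    """Apply 10% off orders of 100 or more, otherwise no discount."""
    if total >= 100:           # branch 1: discount applies
        return total * 0.9
    return total               # branch 2: no discount

def test_discount_branch_taken():
    assert apply_discount(200) == 180.0   # exercises the >= 100 branch

def test_discount_branch_skipped():
    assert apply_discount(50) == 50       # exercises the else branch

test_discount_branch_taken()
test_discount_branch_skipped()
print("all branches covered")
```

Because the tester can see the two branches, two test cases are enough for full branch coverage; a black-box tester, seeing only the specification, could not make that coverage claim.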

Functional

Testing of a software system or component based on its specifications: what the system/component does, i.e. how compliant it is with specified functional requirements. Functions are fed input and the output is examined. The internal program structure is rarely considered. 

Non-Functional

Testing of a software application or system for its non-functional requirements: the way a system operates, rather than its specific behaviors.

Unit testing is a level of software testing where individual units/components of the software are tested in isolation (usually performed by developers).
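For instance, a unit test written with Python's built-in `unittest` framework exercises a single function in isolation. The `slugify` function here is a hypothetical unit invented for this sketch.

```python
import unittest

def slugify(title: str) -> str:
    """Turn a title into a URL slug, e.g. 'Hello World' -> 'hello-world'."""
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    def test_lowercases_and_joins(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_extra_whitespace(self):
        self.assertEqual(slugify("  a   b "), "a-b")

# Run the test case explicitly so the sketch works in any environment.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(SlugifyTest)
unittest.TextTestRunner(verbosity=2).run(suite)
```

Each test targets one behavior of one unit, with no database, network, or other component involved; that isolation is what distinguishes a unit test from an integration test.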

Integration testing is a level of software testing where individual units are combined and tested as a group (usually performed by developers, sometimes by QA engineers who specialize in a certain technical area).

System testing is the process that evaluates how the various components of an application interact together as a whole. System testing verifies that an application performs tasks as designed. (Examples: usability, load, and recovery testing.)

Acceptance testing is a level of software testing where a system is tested for acceptability. The purpose of this test is to evaluate the system’s compliance with the business’s requirements and assess whether it is acceptable for delivery.

Subtypes of Acceptance testing:

  1. Alpha testing
  2. Beta testing
  • Alpha testing is performed to identify all possible issues/bugs before the product is released to end-users/the public. It is usually done by designated testers/internal employees, near the end of the development phase and before beta testing.
  • Beta testing is done by releasing a beta version of the software to a limited number of end-users to obtain feedback on the product quality. Beta testing reduces product failure risks and provides increased product quality through customer validation.

End-to-end testing checks whether an application flow – from start to finish – is behaving as expected (over the full user journey).

End-to-end testing is performed to identify system dependencies and to ensure that data integrity is maintained between various system components and systems.

Smoke testing is performed after receiving a software build to ascertain that the critical functionalities of the program are working fine. It is executed before any detailed functional or regression tests are executed on the software build.

Sanity testing is performed after receiving a software build with changes in code or functionality, to ascertain that the bugs have been fixed and that no further issues have been introduced by these changes.

Regression testing is performed to confirm that a recent program or code change has not adversely affected existing features, i.e. that new code changes have not caused side effects for the existing functionalities. It ensures that the old code still works once the new code changes are done.

Performance testing determines the speed, responsiveness, and stability of a computer, network, or device. It checks the performance of a system's components by passing different parameters under different load scenarios.

Load testing simulates the actual user load on any application or website. It checks how the application behaves during normal and high loads and is applied when a development project nears completion.

Stress testing determines the stability and robustness of a system. It is a non-functional testing technique that pushes the system beyond its normal operational capacity, often to the breaking point, to observe how it fails and recovers.
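As a toy sketch of the load-testing idea (real load tests target a deployed system with dedicated tools such as JMeter or Locust), the snippet below fires concurrent calls at a stand-in function and records the worst-case latency. `handle_request` and the user/request counts are assumptions invented for illustration.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i: int) -> float:
    """Stand-in for the system under test; returns its own latency."""
    start = time.perf_counter()
    sum(range(10_000))                    # simulated work
    return time.perf_counter() - start

def run_load(users: int = 20, requests: int = 100) -> float:
    """Simulate `users` concurrent users issuing `requests` total calls."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        latencies = list(pool.map(handle_request, range(requests)))
    return max(latencies)                 # worst-case latency under load

print(f"worst latency under load: {run_load():.6f}s")
```

Raising `users` and `requests` far beyond expected traffic turns the same harness from a load test (normal and peak conditions) into a stress test (beyond capacity).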

Positive testing, which uses the valid data as input, checks whether an application behaves as expected with positive inputs.

Negative testing ensures that your application can gracefully handle invalid input or unexpected user behavior.
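The two approaches can be sketched side by side around a hypothetical `parse_age` input validator (invented for this example):

```python
def parse_age(value: str) -> int:
    """Parse an age string, rejecting non-numeric and negative input."""
    age = int(value)                      # raises ValueError on bad input
    if age < 0:
        raise ValueError("age cannot be negative")
    return age

# Positive test: valid input produces the expected result.
assert parse_age("30") == 30

# Negative tests: invalid input is rejected gracefully (clear error, no crash).
for bad in ["abc", "", "-5"]:
    try:
        parse_age(bad)
    except ValueError:
        pass                              # expected: bad input is rejected
    else:
        raise AssertionError(f"{bad!r} should have been rejected")

print("positive and negative checks passed")
```

A graceful rejection (a specific, catchable error) is the expected outcome of a negative test; an unhandled crash would be a defect.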

Boundary testing

With this technique, test cases are designed to include values at the boundary. If the input data used is within the boundary value limits, then it is said to be positive testing. If the input data picked is outside the boundary value limits, then it is negative testing.

For example, testing a user's age field, where only users aged 18 to 150 are allowed to register.

We check:

  • 18, 45, 150 - positive checks inside the range.
  • 17, 151 - negative checks outside the range.
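The checks above can be sketched as assertions against a hypothetical `can_register` validator for the 18–150 rule:

```python
def can_register(age: int) -> bool:
    """Registration is allowed for ages 18 to 150 inclusive."""
    return 18 <= age <= 150

# Positive checks: values on and inside the boundaries.
for age in (18, 45, 150):
    assert can_register(age), f"{age} should be accepted"

# Negative checks: values just outside the boundaries.
for age in (17, 151):
    assert not can_register(age), f"{age} should be rejected"

print("boundary checks passed")
```

Concentrating test values at 17/18 and 150/151 targets off-by-one mistakes (e.g. writing `<` instead of `<=`), which is exactly where boundary defects tend to occur.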











Minakshi Kumar
4+ years of experience in the software testing process, committed to adding value to the end product through detailed quality assurance testing. Worked on writing Test Cases, Test Planning, Test Environment Setup, Test Data Setup, Defect Management, Test Logs, Test Results, and the Test Traceability Matrix. Able to test backend applications by writing SQL scripts. Extensive experience in developing Test Traceability Matrices and Gap Analysis. Well-versed in analyzing results, bug tracking & reporting, and detailed status reporting.
