Software Testing – What, Why, When to start and stop?
Software Testing is a process of validating and verifying that a software product meets the business and technical requirements that guided its design and development.
The overall objective is not to find every software bug that exists, but to uncover situations that could negatively impact the customer, usability and/or maintainability.
It involves the execution of a software component or system component to evaluate one or more properties of interest. In general, these properties indicate the extent to which the system under test:
- achieves the general result its stakeholders desire,
- responds correctly to all kinds of inputs,
- performs its functions within an acceptable time,
- is easy to use.
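These properties can be exercised directly in code. The sketch below is illustrative only – `slugify` is a hypothetical function standing in for any component under test – and shows checks for correct responses to different kinds of inputs and for acceptable execution time:

```python
import time

# Hypothetical function under test -- a stand-in for any system component.
def slugify(title: str) -> str:
    """Turn a title into a URL-friendly slug."""
    return "-".join(title.lower().split())

# Property: responds correctly to all kinds of inputs.
assert slugify("Software Testing") == "software-testing"
assert slugify("  extra   spaces  ") == "extra-spaces"
assert slugify("") == ""  # edge case: empty input

# Property: performs its function within an acceptable time.
start = time.perf_counter()
for _ in range(10_000):
    slugify("Software Testing - What, Why, When to start and stop?")
elapsed = time.perf_counter() - start
assert elapsed < 1.0, f"too slow: {elapsed:.3f}s"

print("all property checks passed")
```

Note that "achieves the result stakeholders desire" and "is easy to use" usually cannot be asserted this mechanically; they need reviews, exploratory testing or usability sessions.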
Software testing can provide objective, independent information about the quality of software and risk of its failure to users and other stakeholders.
Software testing can be conducted as soon as a working prototype is available.
Software Testing is an investigation to know about the quality of the system under test.
The overall approach to software development often determines when and how testing is conducted.
For example, in a phased process, most testing occurs after system requirements have been defined and then implemented in testable programs. In contrast, under an Agile approach, requirements, programming, and testing are often done concurrently.
As the number of possible tests for even simple software components is practically infinite, all software testing uses some strategy to select tests that are feasible and relevant for the available time and resources.
As a result, software testing typically attempts to execute an application with the intent of finding software bugs that could have a negative impact on the product and the business.
Why is software tested?
Software developed by developers has to be tested before being passed on to the customer, and there are many reasons for this. Some of the main reasons are:
- To err is human – Testing is necessary because we all make mistakes. We need to check everything we produce because things can always go wrong – humans make mistakes all the time! Since we should assume our work contains mistakes, we all need to check our own work. However, some mistakes come from bad assumptions and blind spots, so when we check our own work we may repeat the very mistakes we made when we did it, and fail to notice the flaws. Ideally, we should get someone else to check our work – another person is more likely to spot them.
- Lack of experience – The chance of making mistakes is compounded when we lack experience, don’t have the right information, misunderstand, or are careless, tired or under time pressure. All these factors affect our ability to make sensible decisions – our brains either don’t have the information or cannot process it quickly enough.
- Dealing with complex technical or business problems – We are more likely to make errors when dealing with perplexing technical or business problems, complex business processes, code or infrastructure, changing technologies, or many system interactions. Our brains can only deal with a reasonable amount of complexity or change; when asked to handle more, and in less time, they may not process the information correctly.
- Determine that products satisfy specified requirements – Some of the testing we do is focused on checking products against the specification for the product; for example, we review the design to see if it meets the requirements, and then we might execute the code to check that it meets the design. If the product meets its specification, we can provide that information to help stakeholders judge the quality of the product and decide whether it is ready for use.
- Detect defects – We most often think of software testing as a means of detecting faults or defects that in operational use will cause failures. Finding the defects helps us understand the risks associated with putting the software into operational use, and fixing the defects improves the quality of the products. However, identifying defects has another benefit. With root cause analysis, they also help us improve the development processes and make fewer mistakes in future work.
- It is essential because it ensures the product’s reliability for the customer and their satisfaction with it.
- To ensure the quality of the product – Delivering a quality product to customers helps in gaining their confidence. Testing is necessary in order to deliver a high-quality product or software application that requires a lower maintenance cost and hence gives more accurate, consistent and reliable results.
- Testing is required to ensure the effective performance of the software application or product.
- To minimize cost – It is important to ensure that the application does not result in failures, because fixing them in the later stages of development, or after release, can be very expensive.
- To stay in business – If the product quality is bad, it can destroy the company’s reputation; with so much competition these days and so many options available to customers, bad quality has the potential to bring down a company.
When to perform Testing?
Testing has to be performed for every new product delivered, for every new functionality implemented in an existing product, and for every modified functionality.
In short, you perform testing whenever there is a change in the system, no matter how small it is.
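One common way to act on this principle is to keep a regression suite that is rerun after every change. The sketch below is a rough illustration – `apply_discount` is a hypothetical function, not from any real system – standing in for existing functionality that a small change might break:

```python
# Hypothetical existing functionality that a small change might break.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Regression checks: rerun these after every change, however small.
assert apply_discount(100.0, 10) == 90.0   # existing behaviour unchanged
assert apply_discount(19.99, 0) == 19.99   # boundary: no discount
try:
    apply_discount(50.0, 150)
except ValueError:
    pass                                   # invalid input is still rejected
else:
    raise AssertionError("expected ValueError for percent > 100")

print("regression suite passed")
```

Because the suite is cheap to rerun, even a one-line change can be checked against all the behaviour the team has already agreed is correct.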
What type of testing to perform, and at what phase to perform it, depends on various factors such as the time available for testing, the knowledge of the tester, the risks involved, the type of application, the type of development model, etc.
Please refer to the separate posts on types of testing, the software development life cycle and software development methodologies to better understand the question and think it over.
When to stop testing?
You would want to stop testing after you have executed all the possible combinations and found all bugs. Wouldn’t you? But it can only be done by Alice as she lives in Wonderland. In our world, exhaustive testing is a myth.
When to stop testing is one of the most difficult decisions a tester has to make.
Many modern software applications are so complex and run in such an interdependent environment, that complete testing can never be done.
Common factors in deciding when to stop could be:
- Completion criteria can be derived from the test plan and test strategy documents; also, re-check your test coverage.
- Completion criteria should be based on risks.
- Test cases are completed with a certain percentage passed, and the target test coverage is achieved.
- There are no known critical bugs.
- Coverage of code, functionality, or requirements reaches a specified point.
- The defect rate falls below a certain level; for example, testers are no longer finding priority 1, 2, or 3 bugs.
- As testing is a never-ending process, we can never assume that 100% testing has been done; we can only minimize the risk of shipping the product to the client, with an agreed percentage of testing done. The risk can be measured by risk analysis.
- The build and installation are completely automated, and all tests (unit, GUI, integration, system) are executed automatically.
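Several of these criteria can be combined into a simple, automatable check. The sketch below is a rough illustration only – the field names and thresholds are assumptions that each team would agree on for itself as part of its exit criteria:

```python
# Illustrative exit-criteria check. The thresholds (95% pass rate,
# 90% requirement coverage) are assumptions, not universal values.
def ready_to_stop(results: dict) -> bool:
    pass_rate = results["passed"] / results["executed"]
    return (
        results["executed"] == results["planned"]    # all planned cases run
        and pass_rate >= 0.95                        # agreed pass percentage
        and results["requirement_coverage"] >= 0.90  # coverage target reached
        and results["open_critical_bugs"] == 0       # no known critical bugs
    )

status = {
    "planned": 200, "executed": 200, "passed": 194,
    "requirement_coverage": 0.93, "open_critical_bugs": 0,
}
print(ready_to_stop(status))  # 194/200 = 0.97 pass rate -> True
```

A check like this only mechanizes the measurable criteria; the risk-based judgment and the team’s confidence level discussed below still sit on top of it.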
It all boils down to the confidence level. Do you feel confident that the system is tested enough?
Obviously, “confidence level” is highly subjective, since you can never feel completely certain – but you can feel certain enough, and that is what we are looking for. For that, you need to create a list of common factors like those mentioned above, commonly known as a definition of done, and it should be something your whole team agrees upon.
Here, we have tried to give you a bird’s eye view of software testing. We continuously want to improve, and we appreciate your comments and suggestions in order to do so.