Test procedures for quality assurance in software development
Seek and ye shall find
This saying has a special meaning in software development: a substantial part of quality assurance consists of searching for program errors. And in practically every program, errors can be found. These must be documented, evaluated and corrected. After all, the fewer errors the final product contains, the higher the quality of the program and the more satisfied the customer.
Quality assurance begins with programming
The programmer of the software is the first instance of quality assurance. His or her first task is to implement a program function exactly as specified in the concept. After implementation, the programmer must call the function at least once and check whether it delivers the desired result.
A popular method for this is writing so-called unit tests. These are program extensions created for the sole purpose of testing. A unit test first sets up the conditions for the test and then calls a specific program function. The result delivered by that function is then checked: if it matches the expected result, the test passes; otherwise an error is assumed.
Unit tests are usually written for more or less elementary functions at the business logic level or below. They are therefore less suitable for testing the user interface, but they are an excellent way for programmers to test the functions they have implemented themselves. In .NET, NUnit is often used to run unit tests.
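As an illustration, a minimal NUnit test might look like the following sketch. The InvoiceCalculator class and its methods are hypothetical and are included only so that the example is complete; the point is the pattern of setting up conditions, calling the function and checking the result.

```csharp
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

// Hypothetical class under test, included only to make the example complete.
public class InvoiceCalculator
{
    private readonly List<decimal> _prices = new();
    public void AddItem(decimal price) => _prices.Add(price);
    public decimal Total() => _prices.Sum();
}

[TestFixture]
public class InvoiceCalculatorTests
{
    [Test]
    public void Total_SumsAllItemPrices()
    {
        // Create the conditions for the test.
        var calculator = new InvoiceCalculator();
        calculator.AddItem(10.0m);
        calculator.AddItem(2.5m);

        // Call the program function and compare the delivered result
        // with the expected result.
        Assert.That(calculator.Total(), Is.EqualTo(12.5m));
    }
}
```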
After completion: Who tests?
A high-quality program is characterized by the fact that it provides the agreed scope of functionality and contains few errors. The first step of quality assurance is therefore to check whether the program offers the features listed in the implementation concept.
In the further course of quality assurance, the stability of the program must be checked. True to the motto “Seek and ye shall find”, the program is first put through its paces by the manufacturer’s staff. Any errors found are documented in a ticket system and corrected before the program is delivered.
Once all errors have been corrected, the program is rolled out on a so-called staging system. This should be set up in exactly the same way as the subsequent production system. On the staging system, the customer’s employees test the software under conditions that are as close to reality as possible. As soon as no more errors are found, the circle of testers is expanded. As long as only the customer’s own employees are involved, this is called an internal field test. As soon as external partners also take part, the external field test has begun.
This cycle of finding and fixing errors is an iterative process that should lead to an increasingly stable program. Once no new errors are found and none remain to be fixed, the program can go live.
Test plan and test protocol
A test plan is a document containing several test cases. Each test case describes a sequence of actions to be performed with the program under test: for each step, it specifies where the user should click, what data to enter or which other actions to perform. An important part of every test case is the statement of the expected result.
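To make this structure concrete, the following sketch models a test plan in code. The type and property names are assumptions chosen for this illustration; in practice the test plan is often an ordinary document or spreadsheet.

```csharp
using System.Collections.Generic;

// Illustrative data model for a test plan: several test cases,
// each with a sequence of steps and a named expected result.
public record TestStep(string Action, string ExpectedResult);
public record TestCase(string Id, string Title, IReadOnlyList<TestStep> Steps);
public record TestPlan(string ProgramVersion, IReadOnlyList<TestCase> TestCases);
```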
The test protocol is created by executing the test plan. For each expected result, the tester notes the result actually observed. Example: the value 5 is expected, but the program displays the value 276. The question then arises whether the test case or the program contains the error. This must be clarified in detail, possibly in consultation with the customer.
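A test protocol entry thus pairs each expected result with the result actually observed. The following sketch uses hypothetical names and the 5-versus-276 example from above.

```csharp
using System;

// Illustrative protocol entry: expected versus observed result.
public record ProtocolEntry(string TestCaseId, string Expected, string Observed)
{
    public bool Passed => Expected == Observed;
}

public static class TestProtocolExample
{
    public static void Main()
    {
        // The value 5 is expected, but the program displays 276.
        var entry = new ProtocolEntry("TC-017", Expected: "5", Observed: "276");
        Console.WriteLine(entry.Passed
            ? "Result matches the expectation."
            : "Deviation found - clarify whether the test case or the program is wrong.");
    }
}
```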
With a well-written test plan, a high proportion of the program functions can be checked. It is particularly suitable for detecting side effects on other parts of the program after a new function has been completed. The disadvantage of a test plan is at the same time an advantage: adapting it to a new program version takes a certain amount of work, but this work forces the expected result to be determined for every test case. The test plan therefore also contains a fairly precise description of what the program is supposed to do.
Automated user interface testing
There are dedicated tools for automated testing of programs via the user interface. The Microsoft Visual Studio development environment includes features for testing the GUIs of web and Windows desktop applications. Selenium is a well-known test framework developed specifically for testing web applications on different platforms. GUI tests simulate mouse clicks and user input; they are therefore particularly sensitive to program changes and require considerable effort to adapt to new program versions.
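As a sketch of such a GUI test, the following example drives a hypothetical login page using Selenium’s .NET bindings; the URL, the element IDs and the expected page title are assumptions chosen for illustration.

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

public static class LoginGuiTest
{
    public static void Main()
    {
        // Start a browser controlled by Selenium (requires the chromedriver binary).
        using IWebDriver driver = new ChromeDriver();

        // Simulate the input and mouse clicks a human tester would perform.
        driver.Navigate().GoToUrl("https://staging.example.com/login");
        driver.FindElement(By.Id("username")).SendKeys("tester");
        driver.FindElement(By.Id("password")).SendKeys("secret");
        driver.FindElement(By.Id("login")).Click();

        // Compare the observed result with the expected result.
        Console.WriteLine(driver.Title.Contains("Dashboard")
            ? "Test passed."
            : "Unexpected page title: " + driver.Title);
    }
}
```

Because such a test depends on concrete element IDs and the page structure, even small changes to the user interface break it, which is exactly the adaptation effort described above.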
Effort for troubleshooting and error correction
The effort for finding and fixing errors grows exponentially with the progress of the project. At the beginning, the programmer only needs to start the software or a unit test from the development environment. In most cases, a few mouse clicks are enough to call the program function to be tested. If an error occurs, it can be fixed directly.
In later project phases, more people are involved in testing. The internal tests with test plan and test protocol involve the company’s own employees, while the field test also involves external testers. This increases the effort considerably. It starts with the fact that the program can no longer simply be started from the development environment: installation files have to be created and rolled out on the staging system.
The testers must be informed that a new test version exists and which program functions it contains, and they receive documentation of the functions to be tested. Any errors found must be documented, evaluated and forwarded to the programmer, and then the cycle starts all over again. These time-consuming iterations only end when no more errors are found that would prevent operation.