RPP – Program Test Strategy (3525558)
A test plan is to be composed for the system depicted below. The flowchart (which is not a UML activity diagram) describes a system that takes three variables, reads them, sequentially compares each of them and then prints the highest value.
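Read as code, the flow might be sketched as follows. This is a minimal Java sketch: the class and method names, the use of `int`, and the literal inputs are assumptions, since the diagram names none of these.

```java
// Minimal sketch of the flowchart: declare, read, compare, print.
// Names (MaxOfThree, maxOfThree) and the int datatype are illustrative assumptions.
public class MaxOfThree {

    // Sequential comparison mirroring the two decision diamonds:
    // 'is a > b?' first, then 'is a > c?' (true branch) or 'is b > c?' (false branch).
    static int maxOfThree(int a, int b, int c) {
        if (a > b) {
            return (a > c) ? a : c;
        } else {
            return (b > c) ? b : c;
        }
    }

    public static void main(String[] args) {
        // 'Reading' is stubbed with literals here; a real program
        // would read the three values from an input source.
        System.out.println(maxOfThree(10, 12, 15)); // prints 15
    }
}
```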
Due to the ‘greater than’ operators in use in the above diagram, the functions involve the manipulation of numbers. String variables would only be usable if the operations compared lengths instead of numeric values.
A check (an ‘if’ evaluation) occurs wherever a ‘decision diamond’ exists. Ovals depict the program’s start and end, while rectangles without slants define operations without choice, which are mandatory in every flow.
Rectangles with slants depict operations that are reliant on decisions and on the mandatory operations. Operations flow in the direction of the displayed arrows, e.g. the ‘is a>c?’ decision follows the ‘is a>b?’ decision when that decision’s result is ‘true’.
Variables are declared and initialised when the program is started, and the program stops when a variable is printed. In code, although unintuitive for this case, the ‘if’ clauses could be complemented by object variables holding Boolean values.
The joining of flow arrows indicates processes that share the same result. Where decisions occur, the multiple outgoing paths are labelled with the conditions that activate them, e.g. a ‘true’ arrow is taken because the preceding decision returned a ‘true’ result.
A test strategy for this program has to check the consistency of each operation. Excluding the ‘decisions’, there are three different operations: ‘declaring’, ‘reading’ and ‘printing’.
Every stage of the flow, not merely each individual step, must be verified for consistency. This includes verifying the declaration process, the reading of the variables, the mechanism for twice comparing two of the variables, the correct printing of the returned result and, finally, the closing of the program’s loop.
All operations after the declaration of the variables’ values are constrained by the decisions whose results they depend on. Therefore, before any other focus, the program’s variables must be determined to be usable in the decisions.
The declaration of variables should only allow for the desired data type: integers. The code for this ‘declaration’ process can therefore be tested in isolation against a variety of values. Once that stage is passed, the remaining operations in the loop can be further assured.
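One way to exercise the declaration stage is to pass a range of raw values through the parsing step and confirm that only the desired datatype is accepted. This is a sketch; the helper name `isValidInteger` is hypothetical.

```java
// Hypothetical helper: accept only values parseable as the desired
// datatype (integers), rejecting everything else at declaration time.
public class DeclarationCheck {

    static boolean isValidInteger(String raw) {
        try {
            Integer.parseInt(raw.trim());
            return true;
        } catch (NumberFormatException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Candidates mix valid integers, a decimal, text, and a value
        // one above Integer.MAX_VALUE (which fails to parse as int).
        String[] candidates = {"10", "-3", "2.5", "abc", "2147483648"};
        for (String c : candidates) {
            System.out.println(c + " -> " + isValidInteger(c));
        }
    }
}
```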
In the correctly functioning program, the winner of the first comparison, (a) or (b), must be carried over to the next ‘if’ clause to be compared correctly with (c). This would be tested by isolating the code function to ensure that the intended results are consistently returned given valid variable values.
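Isolating each comparison in its own function makes the carry-over directly testable. This is a sketch; the function names `firstWinner` and `secondWinner` are assumptions.

```java
// Sketch of isolating the two 'greater than' decisions so the
// carried-over winner of the first comparison can be checked
// before it reaches the second comparison.
public class CarryOverTest {

    // First decision: which of a and b is carried forward?
    static int firstWinner(int a, int b) {
        return (a > b) ? a : b;
    }

    // Second decision: compare the carried value with c.
    static int secondWinner(int winner, int c) {
        return (winner > c) ? winner : c;
    }

    public static void main(String[] args) {
        int carried = firstWinner(10, 12);       // expect 12
        int result  = secondWinner(carried, 15); // expect 15
        System.out.println(carried + " then " + result);
    }
}
```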
After the ‘greater than’ operations are shown to be reliably successful, the consistency of the display mechanism must be validated. Testing the printing of values could involve different output methods (e.g. ‘print’, ‘println’, ‘printf’, ‘puts’), lengths or destinations.
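One hedged way to validate the display mechanism is to redirect standard output and assert on exactly what was written. This is a sketch; `capturePrintln` is an assumed helper name.

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

// Sketch: verify the print stage by redirecting standard output
// and checking exactly what was written.
public class PrintCheck {

    // Run the print operation under a captured stream and return what
    // it wrote, so the display mechanism itself can be asserted on.
    static String capturePrintln(int value) {
        PrintStream original = System.out;
        ByteArrayOutputStream captured = new ByteArrayOutputStream();
        System.setOut(new PrintStream(captured));
        try {
            System.out.println(value); // the operation under test
        } finally {
            System.setOut(original);   // always restore the real stream
        }
        return captured.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println("captured: " + capturePrintln(15));
    }
}
```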
Testing may involve the identification of inapplicable numeric datatypes. The maximum value of an integer datatype must be determined and treated as a limitation. To provide a fair level of functionality (high numbers with decent performance) within a theoretical Java framework, the 64-bit floating-point ‘double’ datatype is desirable over ‘int’, as it allows for the input of decimals and a far larger range; the 64-bit integer alternative, ‘long’, tops out at 2^63 − 1.
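The limits discussed above can be confirmed directly from Java's standard library constants; a small sketch:

```java
// Sketch of the datatype limits discussed above: int overflows past
// 2^31 - 1, long stops at 2^63 - 1, while double trades exact integer
// precision for a far larger range and the ability to hold decimals.
public class DatatypeLimits {
    public static void main(String[] args) {
        System.out.println(Integer.MAX_VALUE);     // 2147483647
        System.out.println(Integer.MAX_VALUE + 1); // overflow wraps to -2147483648
        System.out.println(Long.MAX_VALUE);        // 9223372036854775807 = 2^63 - 1
        System.out.println(Double.MAX_VALUE);      // roughly 1.8E308
        System.out.println(2.5 > 2.4);             // doubles permit decimal comparison: true
    }
}
```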
Additionally, the calling of the program can be tested from different call sources, and its automated closing must be verified. Initial numbers usable for the test, with the system using a sufficiently capable datatype, are:
Boundary Value conditions
The term ‘boundary value conditions’ is borrowed from mathematics, where boundary conditions constrain the solutions of differential equations: a boundary value problem pairs an equation with the values its solution must take at the edges of its domain. In software testing, by analogy, BVCs describe any practical or digital constraints on a programming function’s computation given specific variables, and they are the foundation of typical automated ‘unit testing’.
Boundary value analysis involves testing a system and its components when the given values are at the extremities of an equivalence class, of a documented datatype’s capability or of some other limiting range. Maximum and minimum boundary values (sets) are determined by the operational process’ limits and lie at the ends of the given range. Datatypes are therefore not the only possible source of boundaries; business protocols might provide them instead.
Both valid and invalid sets of boundary values should be tested. ‘Invalid’ describes ‘upper’ and ‘lower’ values that sit one increment outside the ‘valid’ range. ‘Valid’ boundary values are defined by the operator used in the function; in this case, that operator is ‘>’. If it were ‘>=’, the boundary values would shift down by one, because the boundary value itself would then pass the check.
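As an illustration, assuming a boundary of 12 under the ‘>’ operator, the probes one increment either side of the exact boundary can be checked directly (a sketch; the value 12 and the helper name `decision` are assumptions):

```java
// Sketch: for a boundary at 12 under the '>' operator, the probes sit
// at the boundary and one increment either side of it.
public class BoundaryProbes {

    static boolean decision(int a, int b) {
        return a > b; // the operator under test; with '>=' the boundary itself would pass
    }

    public static void main(String[] args) {
        int boundary = 12;
        for (int probe : new int[] {boundary - 1, boundary, boundary + 1}) {
            System.out.println(probe + " > " + boundary + " = " + decision(probe, boundary));
        }
    }
}
```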
Conventions for this type of testing vary, with some plans adding +1 and −1 boundary value sets on top of testing the exact boundary values. Despite the conceptual similarity, BVC analysis can apply to criteria of both length and count.
Some testable boundary values for amounts are:
Each decision (‘if’) effectively creates a boundary condition in the computation; in ‘a > b’, for example, the boundary is (b).
So for a = 10, b = 12, c = 15:
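With those starting values, the decisions place boundaries at b = 12 and c = 15. A minimal sketch probing one step either side of each boundary (the exact probe values are illustrative assumptions):

```java
// Sketch: with a = 10, b = 12, c = 15, the decision 'a > b' has its
// boundary at 12 and the follow-up comparison against c has its
// boundary at 15; probes sit at each boundary and one step either side.
public class InitialValueProbes {
    public static void main(String[] args) {
        int b = 12, c = 15;
        for (int a : new int[] {11, 12, 13}) {
            System.out.println("a=" + a + " -> a>b is " + (a > b));
        }
        for (int x : new int[] {14, 15, 16}) {
            System.out.println("x=" + x + " -> x>c is " + (x > c));
        }
    }
}
```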
Testing the declaration, and the subsequent computation, of these boundary values when input to each of the variables in turn would be an integral part of this program’s testing.
Example Test Schedule Table
An expandable test schedule for operational consistency of variable values:
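Such a schedule could also be expressed directly as a table-driven test. This is a sketch assuming a `maxOfThree` function; plain pass/fail checks stand in for a unit-testing framework, and the rows are illustrative.

```java
// Sketch of an expandable, table-driven test schedule: each row is
// {a, b, c, expected}. Plain checks stand in for a test framework.
public class TestSchedule {

    static int maxOfThree(int a, int b, int c) {
        return Math.max(a, Math.max(b, c));
    }

    public static void main(String[] args) {
        int[][] rows = {
            {10, 12, 15, 15},  // initial values
            {15, 12, 10, 15},  // highest value first
            {12, 15, 10, 15},  // highest value in the middle
            {-1,  0,  1,  1},  // negatives and zero
        };
        int passed = 0;
        for (int[] r : rows) {
            boolean ok = maxOfThree(r[0], r[1], r[2]) == r[3];
            System.out.println((ok ? "PASS" : "FAIL")
                + " max(" + r[0] + "," + r[1] + "," + r[2] + ") expected " + r[3]);
            if (ok) passed++;
        }
        System.out.println(passed + "/" + rows.length + " passed");
    }
}
```

New rows can be appended to the table without touching the test logic, which is what makes the schedule expandable.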
It is practically mandatory to test a product before distributing it, and the plan for that testing should be designed using industry-standard conventions.
Boundary value constraints provide the main basis for foundational program testing, ensuring that the data inputs are functioning as desired.
This test plan could have ‘equivalence partitioning’ applied to it to shorten the list of test cases while still providing the same requirement-based conclusions.
Nevertheless, the described approach would ensure that the program operates as desired, or would identify the elements that need fixing.