By Theresa Lynn Sanderfer Brown


Abstract.

There is a myth that if we were really good at programming, there would be no bugs to catch. This is not true. In this paper, I will discuss fundamentals of software testing that dispel some of these myths. I will then discuss some specific software testing techniques.

What are the principles of software testing?

Once source code has been generated, software must be tested to uncover and correct as many errors as possible before it is delivered to the customer. The goal is to design a series of test cases with a high likelihood of finding errors.

There are techniques that provide systematic guidance for designing tests that exercise the internal logic and interfaces of every software component. They also exercise the program’s input and output domains to uncover errors in program function, behavior and performance. The program must be executed several times to find and remove as many errors as possible before it reaches the customer.

In order to find the highest possible number of errors, software tests must be conducted systematically, not randomly. The following principles should guide every software test program:

  1. A test case must include: • A description of the input data to the program. • A precise description of the correct output of the program for that set of input data. (A minimal sketch of such a test case follows this list of principles.)
  2. Don’t allow programmers to test their own programs.
  3. Don’t allow programming organizations to test their own programs.
  4. Thoroughly inspect the results of each test.
  5. Write test cases for invalid and unexpected input conditions as well as for valid and expected ones.

Examining a program to see if it does not do what it is supposed to do is only half the battle; the other half is seeing whether the program does what it is not supposed to do.

  6. Avoid throwaway test cases unless the program is truly a throwaway program.
  7. Don’t assume no errors will be found when you plan a testing effort.
  8. If you have already found a number of errors in a section of a program, there are probably more errors there.
  9. Remember that testing is an extremely creative and intellectually challenging task.
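
Here is that minimal sketch of principle 1; the example values are hypothetical. A test case records the input data together with the exact output expected for it:

    // A test case pairs a description of the input data with the precise
    // output expected for that input (example values are hypothetical).
    public class TestCaseExample {
        static final class TestCase {
            final String input;           // description of the input data
            final String expectedOutput;  // precise correct output for that input

            TestCase(String input, String expectedOutput) {
                this.input = input;
                this.expectedOutput = expectedOutput;
            }
        }

        public static void main(String[] args) {
            TestCase tc = new TestCase("triangle with sides 3, 4, 5",
                                       "scalene right triangle");
            System.out.println(tc.input + " -> " + tc.expectedOutput);
        }
    }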

For conventional applications, software is tested from two different perspectives: “white box” (for internal program logic) and “black box” (for software requirements). When you begin testing, you should try hard to break the software. Design disciplined test cases and review them thoroughly. You can also evaluate test coverage and track error detection activities.

In his book “Software Engineering: A Practitioner’s Approach,” Roger S. Pressman lists the following attributes of a good software test:

  1. “A good test has a high probability of finding errors.” To achieve this goal, the tester must understand the software and develop a mental picture of how the software might fail. To do this, chart the flow of the function and step through each box, considering the potential outcomes. Use process failure mode and effects analysis for a given function (also called a value stream). At each major step, consider: • The components and subcomponents. • The component’s functionality. • What could go wrong with the component. • What it would take to prevent the negative action. This same principle is very applicable to testing software.
  2. “A good test is not redundant.” Testing time and resources are limited. There is no point in conducting a test with the same purpose over and over.
  3. “A good test should be ‘best of breed.’” In a group of tests with similar intents, time and resource limitations may allow you to execute only a subset of these tests. In such cases, you should use the test with the highest likelihood of uncovering a whole class of errors.
  4. “A good test should be neither too simple nor too complex.” Although it is sometimes possible to combine a series of tests into one test case, the possible side effects may actually hide errors. In general, each test should be executed separately but thoroughly.

White Box Testing

As previously mentioned, one type of testing is white box testing. This allows you to examine the internal structure of the program. White box testing derives test data by examining the program’s logic. The goal at this point is exhaustive path testing, which causes every statement in the program to execute at least once and exercises every path of control flow. If your test cases together exercise all possible paths of flow through the program, it is possible you have completely tested the program.

Though you can test every path in a program, the program might still be loaded with errors. There are three explanations for this:

  1. An exhaustive path test does not guarantee that a program matches its specification. For example, if you were asked to write an ascending-order sorting routine but mistakenly produced a descending-order sorting routine, exhaustive path testing would be of little value. The program could pass every path test and still have one bug: it would be the wrong program, as it did not meet the specification.
  2. A program may be incorrect because of missing paths. Exhaustive path testing, of course, would not detect the absence of necessary paths.
  3. An exhaustive path test might not uncover data-sensitivity errors. Suppose you have to compare two numbers for convergence, seeing if the difference between the two numbers is less than some predetermined value. Leaving off the absolute value sign in this comparison produces the wrong result whenever the difference is negative, yet you may not detect that error just by executing every path through the program (see the sketch below).
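
The data-sensitivity error in item 3 can be made concrete with a small sketch; the epsilon and input values are hypothetical. Both versions below execute the same single path, so a path-covering test can pass while the bug remains:

    // Sketch of the data-sensitivity error: the buggy version omits the
    // absolute value, so it only misbehaves when a - b is negative.
    public class ConvergenceCheck {
        static final double EPSILON = 0.001;

        static boolean convergedBuggy(double a, double b) {
            return (a - b) < EPSILON;            // bug: absolute value omitted
        }

        static boolean convergedCorrect(double a, double b) {
            return Math.abs(a - b) < EPSILON;    // correct comparison
        }

        public static void main(String[] args) {
            // A path-covering test with a >= b cannot tell the versions apart...
            System.out.println(convergedBuggy(1.5, 1.0));    // false (correct)
            System.out.println(convergedCorrect(1.5, 1.0));  // false (correct)
            // ...but data with a < b exposes the missing absolute value.
            System.out.println(convergedBuggy(1.0, 2.0));    // true  (wrong!)
            System.out.println(convergedCorrect(1.0, 2.0));  // false (correct)
        }
    }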

Black Box Testing

In black box testing, you focus on finding circumstances in which the program does not behave according to its specifications. Test data is derived solely from the specifications, not from the program’s internal structure. If you wanted to use this approach to find all errors in the program, the criterion would once again be exhaustive testing: you would have to test not only every valid input, but every possible input.

Equivalence Partitioning

Equivalence partitioning is a black-box testing method that divides the input domain of a program into classes of data from which test cases can be derived. An ideal test case single-handedly uncovers multiple errors. Equivalence partitioning strives to define a test case that uncovers classes of errors, thereby reducing the total number of test cases that must be developed. Test case design for equivalence partitioning is based on an evaluation of equivalence classes for an input condition. An equivalence class represents a set of valid or invalid states for input conditions.
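
As a sketch, suppose a program accepts a month number that is valid from 1 to 12 (the field and range are hypothetical). Three equivalence classes cover the entire input domain: one valid class (1 through 12) and two invalid classes (below 1 and above 12). One representative test per class stands in for every other value in that class:

    // Equivalence partitioning sketch for a hypothetical month field (1..12):
    // one representative value is drawn from each equivalence class.
    public class MonthPartitions {
        static boolean isValidMonth(int m) {
            return m >= 1 && m <= 12;
        }

        public static void main(String[] args) {
            System.out.println(isValidMonth(6));    // valid class 1..12: expect true
            System.out.println(isValidMonth(0));    // invalid class m < 1: expect false
            System.out.println(isValidMonth(13));   // invalid class m > 12: expect false
        }
    }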

Boundary Value

In boundary value analysis, you select test cases at the “edges” of the class. Boundary conditions are directly on, above and beneath the edges of input equivalence classes and output equivalence classes. Boundary-value analysis differs from equivalence partitioning in two ways:

  1. Rather than selecting any element in an equivalence class as representative, boundary-value analysis requires that one or more elements be selected so that each edge of the equivalence class is the subject of a test.
  2. Boundary value test cases are derived by considering the result space (output equivalence classes), rather than just focusing attention on the input conditions (input space).
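
Continuing the hypothetical month field from the sketch above, boundary value analysis replaces the arbitrary class representatives with the elements at the edges of each class:

    // Boundary value sketch for the same hypothetical month field (1..12):
    // test directly on each edge and immediately outside it.
    public class MonthBoundaries {
        static boolean isValidMonth(int m) {
            return m >= 1 && m <= 12;
        }

        public static void main(String[] args) {
            int[] probes = {0, 1, 12, 13};   // below, on, on, above the edges
            for (int m : probes) {
                System.out.println(m + " -> " + isValidMonth(m));
                // expect: 0 -> false, 1 -> true, 12 -> true, 13 -> false
            }
        }
    }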

Zero Crossing Testing

Zero crossing testing includes the input values of zero, values at a zero crossing, values approaching zero from either direction, and similar values for trigonometric functions. The fundamental principle behind these tests is the zero crossing itself: the point at which the result of the function changes sign, either from positive to negative or from negative to positive. Code that does not anticipate the sign change can produce unexpected results there.
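
As a sketch, take f(x) = sin(x), which changes sign at x = π (the function and probe offsets are hypothetical). Zero-crossing tests probe at the crossing and while approaching it from both directions:

    // Zero-crossing sketch for f(x) = sin(x), which changes sign at x = pi:
    // probe at the crossing and on both sides of it.
    public class ZeroCrossingProbe {
        public static void main(String[] args) {
            double[] probes = {
                0.0,                 // input value of zero
                Math.PI - 1e-9,      // approaching the crossing from below (positive side)
                Math.PI,             // the crossing itself
                Math.PI + 1e-9       // just past the crossing (negative side)
            };
            for (double x : probes) {
                // the printed sign shows where the result flips
                System.out.printf("sin(%.10f) = %+.3e%n", x, Math.sin(x));
            }
        }
    }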

Orthogonal Array Testing

There are many applications in which the input domain is relatively limited. That is, the number of input parameters is small and the values each of the parameters may take are clearly bounded. When these numbers are very small (e.g., three input parameters taking on three discrete values each), it is possible to consider every input permutation and exhaustively test the input domain processing.

However, as the number of input values grows and the number of discrete values for each data item increases, exhaustive testing becomes impractical or impossible. Orthogonal array testing can be applied to problems in which the input domain is relatively small but too large to accommodate exhaustive testing.

For example, consider a system that has three input items, X, Y and Z. Each of these input items has three discrete values associated with it. There are 3^3 = 27 possible test cases. The orthogonal array testing method is particularly useful in finding region faults, an error category associated with faulty logic within a software component, rather than attempting to cover all possible faults.
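
For the three-factor example above, a standard L9 orthogonal array needs only nine of the 27 cases while still covering every pairwise combination of levels for every pair of factors. A sketch, with levels 1 through 3 standing in for the actual values of X, Y and Z:

    // L9 orthogonal array for three factors at three levels each: 9 runs
    // cover all 9 pairwise level combinations for every pair of factors,
    // versus 27 runs for exhaustive testing.
    public class OrthogonalArrayL9 {
        public static void main(String[] args) {
            int[][] l9 = {
                {1, 1, 1}, {1, 2, 2}, {1, 3, 3},
                {2, 1, 2}, {2, 2, 3}, {2, 3, 1},
                {3, 1, 3}, {3, 2, 1}, {3, 3, 2}
            };
            for (int[] run : l9) {
                System.out.printf("run X=%d Y=%d Z=%d%n", run[0], run[1], run[2]);
                // executeTestCase(run[0], run[1], run[2]); // hypothetical harness call
            }
        }
    }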

Fault-Based Testing

Fault-based testing occurs when the tester intentionally injects faults and executes tests to verify that the product or software detects the erroneous condition and stops correctly rather than propagating the bad data.
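
As a sketch (all names are hypothetical), a fault is injected by substituting a deliberately corrupt dependency, and the test verifies that the software rejects the bad value instead of passing it along:

    // Fault-injection sketch (names hypothetical): the sensor dependency is
    // replaced by one that deliberately returns corrupt data, and the test
    // checks that the reader rejects it instead of propagating it.
    public class FaultInjectionTest {
        interface Sensor { double read(); }

        static double safeRead(Sensor sensor) {
            double value = sensor.read();
            if (Double.isNaN(value) || value < 0) {
                throw new IllegalStateException("rejected faulty sensor value: " + value);
            }
            return value;
        }

        public static void main(String[] args) {
            Sensor faulty = () -> Double.NaN;   // the injected fault
            try {
                safeRead(faulty);
                System.out.println("FAIL: faulty value was accepted");
            } catch (IllegalStateException expected) {
                System.out.println("PASS: " + expected.getMessage());
            }
        }
    }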

Testing for Real-Time Systems

The time-dependent, asynchronous nature of many real-time applications adds a new and potentially difficult element to the testing mix: time. The test case designer has to consider conventional test cases, event handling (i.e., interrupt processing), the data timing, and the parallelism of the tasks (processes) that handle the data.

In many situations, test data provided when a real-time system is in one state will result in proper processing, while the same data provided when the system is in a different state may lead to error. The relationship between real-time software and its hardware environment can also cause testing problems. Software tests must consider the impact of hardware faults on software processing. Such faults can be extremely difficult to simulate realistically.

Task Testing: The first step in testing real-time software is testing each task independently. Conventional tests are designed and executed for each task.

Behavioral Testing: Using system models created with automated tools, it is possible to simulate the behavior of a real-time system and examine its behavior as a consequence of external events. These analysis activities can serve as the design basis for test cases conducted when the real-time software has been built.

Intertask Testing: Once errors in individual tasks and system behavior have been isolated, testing shifts to time-related errors. Asynchronous tasks known to communicate with one another are tested with different data rates and processing loads to determine if intertask synchronization errors will occur. Tasks that communicate via a message queue or data store should be tested to uncover errors in the sizing of these data storage areas as well.
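
A sketch of such an intertask sizing test (the capacity, message count and rates are hypothetical): a fast producer and a deliberately slower consumer share a bounded queue, and the test reports whether the chosen capacity absorbs the rate mismatch without dropping messages:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicInteger;

    // Intertask sizing sketch: every message is either consumed or counted
    // as dropped, so the consumer loop terminates once all are accounted for.
    public class QueueSizingTest {
        static final int MESSAGES = 1000;

        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(64); // size under test
            AtomicInteger dropped = new AtomicInteger();
            AtomicInteger consumed = new AtomicInteger();

            Thread producer = new Thread(() -> {
                for (int i = 0; i < MESSAGES; i++) {
                    if (!queue.offer(i)) {          // offer() fails rather than blocking
                        dropped.incrementAndGet();  // overflow: queue sized too small
                    }
                }
            });
            Thread consumer = new Thread(() -> {
                try {
                    while (consumed.get() + dropped.get() < MESSAGES) {
                        Integer msg = queue.poll(10, TimeUnit.MILLISECONDS);
                        if (msg != null) {
                            consumed.incrementAndGet();
                            Thread.sleep(1);        // simulate a slower processing rate
                        }
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
            System.out.println("consumed=" + consumed + ", dropped=" + dropped);
        }
    }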

System Testing: Software and hardware are integrated, and a full range of system tests is conducted in an attempt to uncover errors at the software/hardware interface. System-level tests of real-time systems should address the following questions:

  • Are interrupt priorities properly assigned and handled?
  • Is processing for each interrupt handled correctly?
  • Does the performance of each interrupt-handling procedure conform to requirements?
  • Does a high volume of interrupts arriving at critical times create problems in function or performance?

Extreme Testing

In the 1990s, a new software development methodology called extreme programming (XP) was born. Extreme programming is a lightweight, agile development process commonly used with programming languages such as Java, Visual Basic and C#. The XP model relies heavily on module unit and acceptance testing. In general, you must run unit tests for every incremental code change, no matter how small, to ensure the code base still meets its specifications. In fact, testing is so important in XP that the process requires you to create the unit (module) and acceptance tests first and then create your code base. This form of testing is called, appropriately, “extreme testing.”
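
A minimal sketch of that test-first step, assuming JUnit 4 and a hypothetical Invoice class: the test is written before Invoice exists, and the class is then implemented until the test passes.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Test-first sketch (JUnit 4; Invoice is hypothetical): this test is
    // written first and defines the behavior the code must provide.
    public class InvoiceTest {
        @Test
        public void totalAddsTaxToLineItems() {
            Invoice invoice = new Invoice(0.10);   // 10 percent tax rate
            invoice.addLineItem(100.00);
            invoice.addLineItem(50.00);
            assertEquals(165.00, invoice.total(), 0.001); // (100 + 50) * 1.10
        }
    }

    // The code written afterward, just enough to make the test pass:
    class Invoice {
        private final double taxRate;
        private double subtotal;

        Invoice(double taxRate) { this.taxRate = taxRate; }

        void addLineItem(double amount) { subtotal += amount; }

        double total() { return subtotal * (1 + taxRate); }
    }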

XP focuses on implementing simple designs, communicating between developers and customers, constantly testing your code base, refactoring to accommodate specification changes, and seeking customer feedback. XP tends to work well for small- to medium-size development efforts in environments with frequent specification changes and where near-instant communication is possible. It differs from traditional development processes in the following ways:

  • It avoids the large-scale project syndrome in which the customer and programming team meet to design every detail of the application before coding begins.
  • It avoids coding unneeded functionality. The software team focuses on the task at hand, adding value to a software product and focusing only on the required functionality that helps create quality software in short time frames.
  • It primarily focuses on testing. Traditional software development models suggest you code first and test later. In XP, you must create the unit test first and then create the code to pass the tests. There are 12 principles that are worth mentioning in this paper:
  1. Planning and requirements • Marketing and business development personnel work together to identify the maximum business value of each software feature. • Each major software feature is written as a use case or user story. • Programmers provide time estimates to complete each user story. • The customer chooses the software features based on time estimates and business value.
  2. Small, incremental releases • Strive to add small, tangible, value-added features and release a new code base often.
  3. System metaphors • Your programming team identifies an organizing metaphor to help with naming conventions and program flow.
  4. Simple Designs • Implement the simplest design that allows your code to pass its unit tests. Assume change will come, so don’t spend a lot of time designing; just implement.
  5. Continuous Testing • Write each unit test before writing its code module. A unit is not complete until it passes its unit test. In addition, the program is not complete until it passes all unit tests and all acceptance tests are complete.
  6. Refactoring • Clean up and streamline your code base. Unit tests help ensure that you do not destroy the functionality in the process. You must rerun all unit tests after any refactoring.
  7. Pair Programming • You and another programmer work together, at the same machine, to create your code base. This allows for real-time code review, which dramatically increases bug detection and resolution.
  8. Collective ownership of the code • All code is owned by all programmers. No single programmer is dedicated to a specific code base.
  9. Continuous integration • Every day, integrate all changes back into the code base after they pass the unit tests.
  10. 40-hour work week • No overtime is allowed. If you work with dedication for 40 hours per week, overtime will not be needed. The exception is the week before a major release.
  11. On-site customer • You and your programming team have unlimited access to the customer so you may resolve questions quickly and decisively, which keeps the development process from stalling.
  12. Coding Standards • All code should look the same. Developing a tailored system coding standard helps meet this principle.

In conclusion, it is important to follow the principles of software testing and to try a variety of testing techniques, including white-box and black-box testing. It is also worthwhile to learn more about the modern techniques of extreme programming and extreme testing. I hope you’ve learned more about software testing and that these principles can serve as a good foundation for a healthy test program.

Figures and Tables:

Figure 1: Orthogonal Testing Representation


References and Notes

  1. Myers, G. J. (2004). “The Art of Software Testing.” Word Association, Inc.
  2. Pressman, R. S. (2005). “Software Engineering: A Practitioner’s Approach,” Sixth Edition. McGraw-Hill International Edition.
  3. Wikipedia, The Free Encyclopedia: Software Testing; Zero Crossing.

Theresa Lynn Sanderfer Brown


Lynn Brown has worked with software for more than 20 years and received her master’s degree in software engineering from the University of Alabama in Huntsville, Alabama, a school known for its accomplishments in science and technology. Most recently, Lynn worked as a lead in software safety evaluations for the Army Navy/Transportable (AN/TPY-2) radar. Lynn is a certified quality engineer, holds A+ certification, and has a degree in electrical engineering. She aspires to become a certified software quality engineer. A native of Athens, Alabama, Lynn enjoys dance, art and her work in the ministry.

