One of the fundamental concepts of software testing is that in most situations you cannot exhaustively test all possible inputs to your system under test (SUT) because there are simply too many inputs. For example, suppose you are testing a poker game application with a method that accepts a representation of 7 cards. If duplicate cards are allowed, there are 52 to the 7th power = 1,028,071,702,528 possible test case inputs. Therefore, a significant part of software testing involves techniques for selecting a useful subset of test case inputs from the set of all possible test case inputs.

Equivalence class partitioning divides all test case inputs into subsets that are equivalent in some way. The idea is that you need only test representative inputs from each equivalence class. Determining equivalence classes is much easier said than done, however. A close cousin technique is boundary value analysis. Here you use input values at, just above, and just below the values which define the equivalence classes. Research has shown that these boundary values are relatively more likely to cause errors than non-boundary values.

Pairwise testing is a technique which generates a set of test case inputs that covers every possible pair of values from different input parameters, rather than every possible combination.

Another technique to reduce the number of test case inputs is to simply send random input to the SUT. Although not particularly effective, random input testing is easy to implement, and when it does reveal a bug, the bug is usually a serious crashing or hanging bug. A cousin of random input testing is a technique I call partial antirandom testing. Here you create a set of random test case inputs which are maximally different from each other. The idea is that similar inputs will reveal similar information about the SUT, so maximally different test case inputs will reveal more information than simple random input testing.
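To make the equivalence class and boundary value ideas concrete, here is a minimal Python sketch. The SUT here is a hypothetical age validator (the function name and the valid range 18 through 65 are invented for the example); it has two equivalence classes, valid and invalid ages, so the interesting inputs cluster at the boundaries 18 and 65:

```python
# Hypothetical SUT: accepts ages in the inclusive range 18..65.
# (This validator and its range are invented for illustration.)
def is_valid_age(age):
    return 18 <= age <= 65

# Two equivalence classes: valid ages and invalid ages. Instead of
# testing every possible age, test at and just around the boundary
# values that separate the classes.
boundary_inputs = [17, 18, 19, 64, 65, 66]
expected       = [False, True, True, True, True, False]

results = [is_valid_age(a) for a in boundary_inputs]
assert results == expected
```

A representative value from the middle of each class (say 40 and 100) would typically be added too; the boundary cases are simply the ones most likely to expose off-by-one errors.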
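The all-pairs idea can be sketched with a simple greedy generator. The parameters and their values below (os, browser, user) are hypothetical examples, and real pairwise tools use more sophisticated algorithms; this sketch just repeatedly picks the candidate test case that covers the most not-yet-covered pairs:

```python
from itertools import combinations, product

# Hypothetical test parameters (invented for illustration).
parameters = {
    "os":      ["windows", "linux", "mac"],
    "browser": ["edge", "chrome"],
    "user":    ["guest", "admin"],
}
names = list(parameters)

def pairs_of(test):
    # All (parameter, value) pairs covered by one complete test case.
    return {((names[i], test[i]), (names[j], test[j]))
            for i, j in combinations(range(len(names)), 2)}

# Every pair of values from different parameters must be covered once.
uncovered = set()
for t in product(*parameters.values()):
    uncovered |= pairs_of(t)

tests = []
while uncovered:
    # Greedily pick the candidate covering the most uncovered pairs.
    best = max(product(*parameters.values()),
               key=lambda t: len(pairs_of(t) & uncovered))
    tests.append(best)
    uncovered -= pairs_of(best)

print(len(tests))  # 6 pairwise tests versus 3*2*2 = 12 exhaustive cases
```

The savings grow quickly with more parameters: all-pairs coverage typically needs a tiny fraction of the full cartesian product.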
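One simple way to approximate partial antirandom testing is a best-of-several-candidates selection: generate a pool of random candidates and keep the one farthest, by total Hamming distance, from the tests already chosen. This sketch models the 7-card poker input mentioned above as a list of card indexes; the pool size and the distance measure are illustrative choices, not necessarily the article's exact method:

```python
import random

def hamming(a, b):
    # Number of positions where two test case inputs differ.
    return sum(x != y for x, y in zip(a, b))

def antirandom_tests(num_tests, num_candidates=50, rng=None):
    rng = rng or random.Random(0)  # seeded for reproducibility
    # First test case is purely random: 7 cards, duplicates allowed.
    tests = [[rng.randint(0, 51) for _ in range(7)]]
    while len(tests) < num_tests:
        candidates = [[rng.randint(0, 51) for _ in range(7)]
                      for _ in range(num_candidates)]
        # Keep the candidate most different from all chosen tests.
        best = max(candidates,
                   key=lambda c: sum(hamming(c, t) for t in tests))
        tests.append(best)
    return tests

suite = antirandom_tests(5)
```

Pure random testing is the degenerate case where num_candidates is 1; the larger the candidate pool, the more the suite spreads out across the input space.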
The current issue of MSDN Magazine has an article I wrote about the technique: http://msdn.microsoft.com/en-us/magazine/ee309511.aspx