Just Test the Darn Thing

I am a Contributing Editor for MSDN Magazine and write a monthly column about software testing. As part of my background research I talk to as many software test managers as I can. One common theme in these discussions is a pitfall I'll call "the test architecture error". In simple terms, it is easy for software testers to lose sight of their primary goal: analyze a software system to find bugs so those bugs can be fixed, improving the quality, reliability, and performance of the target system. According to many of the managers I've talked to, some software test engineers spend far too much time designing their test effort and not nearly enough time actually testing the target software system. I know I've been guilty of this in the past. Sometimes you just have to sit down and test the heck out of a system rather than worry about architecting a test framework of some sort.

A closely related scenario occurs with test automation. It is easy to get seduced into spending weeks creating test automation only to have that automation rendered obsolete before it can be used. One reason this can happen is related to how the job performance of software test engineers is sometimes evaluated by inexperienced managers. By creating test automation, a tester has something tangible and often impressive to demonstrate as progress, even if the automation is not all that useful. It is much harder to demonstrate progress with a list of test cases, even though that list may in fact be far more useful than slick test automation.

My point is that test automation and test architecture are useful in many situations but may not always be the best way to test a system, especially an application which will ultimately be used by actual human beings. Basically, I think that if you focus on those test activities that best improve the quality of your target system based on all the complicated factors of your particular development environment (maybe manual testing, designing frameworks, writing automation, or whatever), and carefully explain to your management exactly why you are doing what you are doing, the value of your work will be appreciated.
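
To make that concrete, here is a minimal sketch of what I mean by just sitting down and testing. It uses Python and pytest purely as an illustration (my choice of tooling here, and calculate_discount is a made-up function standing in for whatever you are testing): plain, direct test cases with no framework layer in between.

    import pytest

    # A made-up function standing in for the system under test.
    def calculate_discount(price, customer_years):
        if price < 0:
            raise ValueError("price must be non-negative")
        rate = 0.10 if customer_years >= 5 else 0.05
        return round(price * (1 - rate), 2)

    # "Just test the darn thing": direct assertions, no framework scaffolding.
    def test_new_customer_gets_five_percent():
        assert calculate_discount(100.00, customer_years=1) == 95.00

    def test_loyal_customer_gets_ten_percent():
        assert calculate_discount(100.00, customer_years=5) == 90.00

    def test_negative_price_is_rejected():
        with pytest.raises(ValueError):
            calculate_discount(-1.00, customer_years=1)

Whether a handful of direct tests like these is enough, or whether a real framework is warranted, depends on exactly those environmental factors.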
This entry was posted in Software Test Automation.

5 Responses to Just Test the Darn Thing

  1. Ginna says:

    I see far too often that test automation is downplayed as something as simple as "Here are the manual test cases, go automate them." By doing so, the automation simply automates human keystrokes; it is not really test automation. Most software/systems under test have many decision-making points and some level of intelligence. Test automation needs to mirror the software/system under test. It needs to be as flexible and as intelligent as the software/system under test, if not more so. This requires some upfront time for design and architecture, and some thinking as you go. However, if the test automation is simply there to automate human keystrokes, then the time spent on design, architecture, and coding is wasted, because that truly is "the test architecture error".
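
    To illustrate the contrast, here is a rough Python/pytest sketch (the names are hypothetical, just for the example): the first test merely replays one recorded path, while the second mirrors the decision points of the system under test.

        import pytest

        # Hypothetical piece of the system under test: it has decision points of its own.
        def approve_order(amount, customer_is_verified):
            if not customer_is_verified:
                return "rejected"
            return "approved" if amount <= 1000 else "needs_review"

        # "Automating human keystrokes": one hard-coded happy path, checked once.
        def test_single_recorded_path():
            assert approve_order(100, customer_is_verified=True) == "approved"

        # Automation that mirrors the decision points: every branch of the rule gets exercised.
        @pytest.mark.parametrize("amount, verified, expected", [
            (100, True, "approved"),       # under the limit
            (5000, True, "needs_review"),  # over the limit
            (100, False, "rejected"),      # unverified customer
        ])
        def test_every_decision_branch(amount, verified, expected):
            assert approve_order(amount, customer_is_verified=verified) == expected

    The second style is where the upfront design time goes, and it is the style worth paying for.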

  2. James says:

    I think we are in agreement (please correct me if I am not): we are both suggesting that for quick and dirty test automation (the kind designed to automate simple, repetitive tasks, typically for regression testing purposes), spending a lot of time on design is generally not useful. But for sophisticated test automation (the kind designed to automate complex scenarios, perhaps even with some measure of partial AI), time spent on test architecture is definitely warranted.

  3. Ginna says:

    We most definitely are in agreement. So few understand it, and so few give it the attention and time it requires, which is a shame.

  4. Scott says:

    It sounds like you're advocating ad-hoc testing, which is not a good idea. A designed automated test is repeatable and, in the case of regression tests, much faster than manual testing. For instance, for new functionality in a product my team had, the regression test, which must be performed at every release, took three man-days to do manually; after automating it, it took only 15 minutes. This was a huge time savings. It's better to set up the processes necessary to prevent bugs in the first place. That means defined, designed, repeatable tests. How many times have you seen situations where people decided to "just test the darn thing" and reported defects, only to find that the defects were actually due to incorrect or sloppy tests?

  5. James says:

    Scott, I think we are talking about two different issues. I agree with you that sloppy test design is really, really bad. My point is that the test managers I have talked to have reported that they have come across "test architects" who have spent literally weeks, or in some cases months, designing some test system which was never implemented, and therefore never executed a single test case. On the other hand, I personally know some test architects who are worth their weight in gold because they can create a robust test system which is manageable, flexible, and scalable. All I'm trying to point out is that at the end of the day, software testing is about improving the quality of the system under test.
