Why automation has failed in the past

August 01, 2008


For the past 15-plus years, organizations have turned to automation as a way to improve efficiency among testers and developers. While some test organizations have reaped the value of automation, others have yet to achieve the ROI they were hoping for. Many also have found the work to be much more demanding than they had expected.

Why has automation failed at these organizations? Poor planning, deficient tools, and resistance to change are likely culprits. These three issues can undermine the potential of any automation strategy, resulting in more work with less payback. To help ensure a successful automation strategy now and in the future, test organizations need to take a critical look at their automation criteria, technology, and processes.

Deciding what to automate

Organizations sometimes jump into automation without carefully considering what to automate. For example, many decide to automate the most complicated items, such as a difficult test case, thinking that automation will free them of this headache. But soon they find themselves spending more time automating than testing.

Organizations can achieve greater ROI from automation simply by choosing to automate mundane, repetitious tasks. If an organization must decide between automating one test case that takes eight hours to run or 100 test cases that each take an hour to run, the choice should be clear: automating the 100 test cases delivers far better payback. This example illustrates why following one's first instinct is not always the best approach.
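
To make the tradeoff concrete, the back-of-the-envelope comparison below sketches the payback arithmetic in Python. All of the numbers (automation effort, runs per release) are illustrative assumptions, not figures from any particular organization.

# Illustrative payback comparison; every number here is an assumption.
RUNS_PER_RELEASE = 10            # how often the regression suite is executed

# Option A: automate the one difficult test case (8 manual hours per run)
hard_case_manual_hours = 8
hard_case_automation_cost = 80   # assumed effort to automate a complex case

# Option B: automate 100 mundane test cases (1 manual hour each per run)
simple_cases = 100
simple_case_manual_hours = 1
simple_case_automation_cost = 2  # assumed effort per simple, repetitive case

hours_saved_a = hard_case_manual_hours * RUNS_PER_RELEASE - hard_case_automation_cost
hours_saved_b = (simple_cases * simple_case_manual_hours * RUNS_PER_RELEASE
                 - simple_cases * simple_case_automation_cost)

print(f"Hours saved automating the hard case:    {hours_saved_a}")   # 0
print(f"Hours saved automating the simple cases: {hours_saved_b}")   # 800

Even with generous assumptions about the difficult case, the mundane tests pay back sooner simply because they run so often.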

When deciding what to automate, organizations should consider the following questions:

  • How often do these items get used?
  • How many people use them or rely on them?
  • Are these components part of a test that everyone has to use?
  • Are they difficult to maintain?
  • Which items add (or do not add) value for my customers?

Another reason not to automate the most challenging test cases is that human beings add value. The judgment, intuition, and experience that people bring to the job, particularly with ad hoc testing, are especially valuable when exercising a difficult design. Although automation can be leveraged effectively for complex systems testing - for example, by running complex tasks simultaneously or replicating them - it might not be the best approach for test cases that rank among the top 5 percent in difficulty.

Bottom line: When building an automation strategy, teams must decide what to automate based on what will yield the greatest benefit overall, not simply aim for the most technically ambitious projects. As illustrated in Figure 1, organizations should put aside the desire to do the most difficult task first and focus on the tests that deliver immediate business value.

Figure 1

Evaluating technology against vendor claims

In the past, vendors have asserted that their automation software allows organizations to create robust and maintainable test assets. As many organizations have discovered after making their purchases, the technology behind these products often does not stand up to the claims. Worse yet, without technology to future-proof their test assets, organizations realize only half of automation's potential value.

These solutions often lack abstraction capability. Without it, testers cannot easily abstract away the details that are likely to change. For example, suppose test engineers build a test case that contains an IP address to test hardware and software. Now suppose the hardware moves, or the engineers decide to run the test against a different device; in either case, they will want to abstract the IP address to future-proof the test. With abstraction technology, they can update these items in one place and instantly propagate the changes across all test cases. Without this capability, they must change every test case or build new ones.
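
A minimal sketch of the idea follows, assuming nothing about any particular vendor's tool: test cases reference a logical device name, and the binding to an IP address lives in one place. The testbed structure, device name, and addresses are hypothetical.

# Hypothetical sketch of abstracting environment details out of test cases.
# Without abstraction, every test hard-codes the address and must be edited when it changes:
#     connect("192.168.1.25")    # repeated across hundreds of test cases
#
# With abstraction, tests name the device logically and the binding lives in one place.

TESTBED = {
    "router_under_test": "192.168.1.25",   # update here once when the hardware moves
}

def connect(host):
    """Stand-in for real session setup against a device."""
    print(f"connecting to {host}")

def test_login():
    # The test case refers to the device abstractly; the address is resolved at run time.
    connect(TESTBED["router_under_test"])

def test_reboot():
    connect(TESTBED["router_under_test"])

if __name__ == "__main__":
    test_login()
    test_reboot()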

Organizations therefore should evaluate an automation tool's ability to create robust, maintainable tests. In particular, they should consider how many test cases a tester will have to update to keep hundreds of tests working against the next version of the software. (Usually by a product's third release, 50 percent of test cases are "broken.") With the right automation tool, they should only have to change a few files, not edit every test.

Bottom line: When evaluating automation software, organizations should add maintainability to their list of requirements. Next, they should scrutinize the underlying technologies to determine whether they live up to their promises of maintainability, as illustrated in Figure 2. Whether an organization captures its requirements in a request for proposal or a proof of concept, engineers should explicitly call out abstraction as a requirement, think through real-world scenarios, model them, and validate that the tools can handle them.

Figure 2

Rethinking processes from a team perspective

How organizations build and leverage tests across groups can also diminish the benefits of automation. For example, engineers might take a test case and run it as a personal regression test at their desktops, yet never think to transform it into a robust test case that can later run in a lights-out regression system.

To realize the potential of automation and maximize its efficiencies, testers and developers need to think of themselves as part of an assembly line. If test engineers build test cases that they know will need to be automated, they can save the automation team a step by building in abstraction. Though it might require a little extra effort up front, this action can save the entire organization time throughout the testing process.
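
One way to picture "building in abstraction" up front is sketched below: a test written once that can run interactively at an engineer's desk or unattended in a nightly, lights-out regression system. The environment variable, device address, and function names are hypothetical and not part of any specific framework.

import os
import sys

def load_testbed():
    # The environment binding is abstracted: the regression system points the same
    # test at its own lab hardware via an environment variable; at a desk it falls
    # back to a default address.
    return {"device": os.environ.get("DUT_ADDRESS", "10.0.0.5")}

def run_test(testbed):
    print(f"running regression test against {testbed['device']}")
    return True   # stand-in for real pass/fail logic

if __name__ == "__main__":
    ok = run_test(load_testbed())
    # A non-zero exit code lets an unattended scheduler detect failures
    # with no one watching a desktop.
    sys.exit(0 if ok else 1)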

With the right technology, testers can start to build and run tests in a way that makes it easier for the rest of the team to move those tests through automation, as well as maintain tests for future releases. More importantly, the test organization can build a scalable framework for communication and asset sharing.

Bottom line: Organizations need to start taking an assembly-line approach to testing and development, with individuals focused on creating efficiencies that benefit the entire team.

David Gehringer is the VP of marketing at Fanfare, based in Mountain View, California. He has more than 10 years of experience in the software industry, including his role as VP of marketing at Actional Corporation and various international marketing and product management positions at Mercury Interactive. David earned bachelor's degrees in Mechanical Engineering and Aeronautical Engineering, both from the University of California, Davis.

Fanfare
650-641-5119
[email protected]
www.fanfaresoftware.com

 

David Gehringer (Fanfare)