Do a web search on the term “mobile test automation,” and you’ll find it described as software that drives the user interface and compares actual results against expected results. Such products automate test execution, but that’s just a small part of the test process, and they do it in a way that is slow, brittle, and difficult to adapt to change.
However, you can also use these tools to improve the entire test process for mobile software. Here’s how you can reduce dead time while waiting for a build and take mobile test automation and tooling to the next level. But first you need to understand a few basics.
The potential gains from test tooling
I frequently ask people in the audience during my conference presentations how much time they spend actually doing testing, as opposed to attending meetings, responding to email, creating documentation, setting up servers, waiting for a fix or new build, or documenting bugs. The answer is often in the range of five percent, and it rarely goes higher than 30 percent.
What’s worse, testers spend about half of that limited time examining old features, with the rest spent on testing new ones. The classic click-click-inspect style of test automation only has value for regression testing, which rechecks old features. That means that even if test automation were perfectly accurate and cost nothing to set up, it would save only about two percent of testers’ time in most cases and 15 percent at best.
To improve productivity, you need to do more than just use automated test execution as your silver bullet. You need to do what Fred Brooks suggests in The Mythical Man-Month: find a series of bronze bullets by streamlining the entire test process, and use tools to improve performance where they make sense. Jon Bach, quality evangelist at eBay, uses a simple measure: time spent testing (“T”) versus bug reporting/triage (“B”) and setup (“S”), from which he created the simple slogan “more T, less B.” Within T and B, you need to look at friction: the things that slow down the testing work.
You’ll find friction throughout the entire testing process: from build, to deployment, to driving the application to an interesting place to do the work, to finding and reporting bugs, to helping programmers debug the issue, to finally retesting.
To address this, here are your six bronze bullets.
1. Reduce runtime
Most mobile automation approaches work like this:
- Wait for build
- Plug in phone and install the latest build
- Click ‘run’ button on the laptop
- Wait for results (coffee time!)
- Get test results
- Debug test run for errors to figure out what is broken. Then document.
- Argue with developers on how to reproduce the bugs
When you consider the reality of what happens with most test automation programs, that projected two percent to 15 percent in time saving sounds unrealistic.
For your first win, you need to get rid of all your downtime.
To do that, you need to get installs to happen automatically after every build by making it a post-deployment step of your continuous integration server. That’s easy enough to do for mobile web applications. For native applications, you’ll likely need to deploy to a simulator, either on a virtual machine or in the cloud.
Once you have the tools installed, the next post-build step is to run your automation suite, store the results on a networked drive, and email the key players to let them know that the tooling has run.
At the very least, this means you’ll get daily test results each morning, eliminating the wait. If possible, get the automated checks to run with every feature check-in; this makes failures tie directly to the change that created them, eliminating debugging time.
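As a sketch, those post-build steps might look like the following. The `adb install` command is real for Android devices and emulators; the APK path, suite invocation, and notification step are assumptions to adapt to your own CI server and platform.

```python
# Sketch of a CI post-build hook: install the fresh build on a
# connected device or emulator, run the suite, and notify the team.
import subprocess

def install_command(apk_path: str) -> list[str]:
    # 'adb install -r' reinstalls the app in place, keeping its data.
    return ["adb", "install", "-r", apk_path]

def run_suite_command(suite: str) -> list[str]:
    # Assumed test-runner invocation; swap in your own suite command.
    return ["python", "-m", "pytest", suite, "--junitxml=results.xml"]

def post_build(apk_path: str, suite: str) -> None:
    subprocess.run(install_command(apk_path), check=True)
    result = subprocess.run(run_suite_command(suite))
    status = "PASSED" if result.returncode == 0 else "FAILED"
    print(f"Build check {status}; results stored in results.xml")
    # From here, store results.xml on the shared drive and email the
    # key players (e.g. via smtplib) with the status.
```

Wired in as a post-deployment step, this runs with no human in the loop, so the “wait for results” step disappears from the list above.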
Here are two more bonus features: consider a tool that can create videos of the failures, reducing steps six and seven; and make a button to create the skeleton of a bug report in one click, including the video.
2. Reduce the regression rate
Imagine that every 100 fixes or new features introduce 50 new bugs that need fixing. That means 50 percent more work, right?
Not so. Those 50 fixes create 25 more bugs, and those fixes create 12 or 13 more, and so on. Take this geometric series to its limit, and your 100 changes have ballooned into 200 total changes. But if you get that regression rate down to 25 percent, the total is just 133, a one-third reduction in the amount of work, simply by reducing the bug-injection rate.
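The arithmetic behind those numbers is just the closed form of a geometric series:

```python
# Total work created by a chain of regressions: each batch of fixes
# injects new bugs at `rate`, which in turn need fixing, and so on.
# Closed form of the series n + n*r + n*r^2 + ... = n / (1 - r).
def total_changes(initial_changes: int, rate: float) -> float:
    return initial_changes / (1 - rate)

print(total_changes(100, 0.50))         # 200.0 total changes at a 50% rate
print(round(total_changes(100, 0.25)))  # 133 at a 25% rate
```

The formula also shows why the payoff is nonlinear: every point you shave off the regression rate shrinks the whole tail of follow-on fixes, not just the first batch.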
There are plenty of ways to reduce the regression rate, but unit and service test automation are a great way to start. Unit tests, written by programmers, demonstrate that the software can do what the programmer expects, which can be different than what the customers expect. Service tests typically check external boundaries, such as web services, and act as a contract of inputs and expected results. They’re easy to write, can run quickly, and aren’t as brittle as end-to-end user interface checks.
If you’re having problems with the regression rate end-to-end, you can use user interface tooling to catch the bugs late and force new builds. But unit and service test automation can prevent the new builds, so ask to see the programmer’s unit tests to understand what they covered in their work. If the answer is “What unit tests?” then you’ve got an opportunity for improvement.
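As a minimal illustration of what to look for, a unit test pins down what the programmer expects of one small piece of behavior. The fare-discount rule here is hypothetical; the point is the shape of the check, not the business logic.

```python
# Hypothetical business rule: 10% off after the 10th ride this month.
def discounted_fare(base: float, rides_this_month: int) -> float:
    return base * 0.9 if rides_this_month > 10 else base

# Unit checks pin down the programmer's intent, especially at boundaries:
def test_no_discount_at_ten_rides():
    assert discounted_fare(2.00, 10) == 2.00

def test_discount_after_ten_rides():
    assert abs(discounted_fare(2.00, 11) - 1.80) < 1e-9
```

Checks like these run in milliseconds on every commit, which is exactly how they prevent a regression from ever reaching a build.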
3. Reduce test friction with tools
Let’s say that you, the tester, have found a bug with automation that needs to be documented. You have to first reproduce it on the device, then go to the laptop, log into the issue tracker, create a new issue (or update a story), and type in what you remember. A few years ago, I was in this position. When I wanted a picture of what went wrong, I took a screen capture, emailed it to myself, logged into email, saved the file to my desktop, and then attached it to the issue software.
Today there are tools that run on the device to create bugs. You might even be able to take a USB-connected phone and, with a single keypress combination, create a bug, attach a screenshot, and open it on your computer. That might give back only five minutes of time savings, but it adds up if it happens five times a day for each of five testers.
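A small helper along these lines can cut most of that round trip. The `adb exec-out screencap -p` command is a real way to pull a PNG from a USB-connected Android device; the report format and one-key wiring are illustrative.

```python
import subprocess
from datetime import datetime

def screenshot_command() -> list[str]:
    # 'adb exec-out screencap -p' streams a PNG from a connected
    # Android device; its stdout is the image bytes.
    return ["adb", "exec-out", "screencap", "-p"]

def bug_skeleton(title: str, device: str, screenshot: str) -> str:
    # Illustrative one-click bug report skeleton; the tester fills
    # in the repro steps while the details are still fresh.
    return (
        f"Title: {title}\n"
        f"Device: {device}\n"
        f"Found: {datetime.now():%Y-%m-%d %H:%M}\n"
        f"Attachment: {screenshot}\n"
        "Steps to reproduce:\n1. \nExpected:\nActual:\n"
    )

def capture_and_draft(title: str, device: str) -> str:
    png = subprocess.run(screenshot_command(), capture_output=True).stdout
    path = "bug.png"
    with open(path, "wb") as f:
        f.write(png)
    return bug_skeleton(title, device, path)
```

Bind `capture_and_draft` to a hotkey or toolbar button, and the screenshot-by-email shuffle above collapses into one step.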
4. Cover more devices
Sometimes test tools can click on images that a human can’t see. For example, a button might be hidden behind an overlapping element, but the software will still click it by ID, even though a human can’t see the button. At other times the rendering may just look wrong, and that’s something software is extremely bad at finding. This is where videos of an overnight test run come in handy. Play the video, but accelerate the playback to four to 16x speed, and rotate through the videos by device. Before, checking more than two devices was too time-consuming. This method can provide coverage of a dozen devices over a lunch hour.
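One way to produce those accelerated review videos is ffmpeg’s `setpts` filter, which rescales presentation timestamps; the file names and the factor below are illustrative, and this assumes ffmpeg is installed on the build machine.

```python
def speedup_command(src: str, dst: str, factor: float) -> list[str]:
    # setpts=0.25*PTS plays the video at 4x speed; '-an' drops the
    # audio track, which is useless at high speed anyway.
    return ["ffmpeg", "-i", src,
            "-filter:v", f"setpts={1 / factor}*PTS",
            "-an", dst]

# e.g. speedup_command("pixel8_run.mp4", "pixel8_run_4x.mp4", 4)
```

Run this over each device’s overnight recording as a final post-build step, and the morning review queue is ready before anyone sits down.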
5. Consider how deep to automate
Most test automation efforts try to automate everything. In doing so, they end up automating code that never breaks, yet changes often. This creates a high-maintenance effort and makes rerunning slow. Many test automation efforts start out powerfully, only to collapse in upon themselves within two years.
This xkcd web comic points out the impact of naive automation in a way that rings true. Bottom line: if the team has reduced the regression rate, you might want to have just enough GUI checks to see if anything large broke, and then add human testing for emergent risks.
While you’re considering how deep to automate, think about defining those examples up front. Your programmers can run through the scenarios by hand, improving the first-time quality rate, while also making sure that the elements have the IDs in the format you need to create the automation. That’s one aspect of designing for testability.
6. Automate the setup of data
Sometimes the preconditions to a test—creating users, faking history, and so on—can take more work than the test itself. At one recent client site, a tester was spending 90 percent of his “test” time just using the application to set up tests. The end-to-end setup does need to be checked, but not every time. Testing tools that drive the user interface to do this are a prescription for more waiting for test results, more coffee time, and more maintenance effort when something breaks. If you find yourself with a large amount of setup code, consider a database export/import feature that creates a text file by user, account, or group. This speeds test automation, speeds manual setup for retesting, and might even create a feature for support, development, and other business units.
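Here is a sketch of such an export/import feature, using an in-memory SQLite database as a stand-in for the real schema; the table and column names are assumptions.

```python
import json
import sqlite3

def export_user(conn: sqlite3.Connection, user_id: int) -> str:
    # Dump one user's rows as JSON text that testers, support, or
    # other tools can re-import on demand.
    conn.row_factory = sqlite3.Row
    rows = conn.execute(
        "SELECT id, name, balance FROM users WHERE id = ?", (user_id,)
    ).fetchall()
    return json.dumps([dict(r) for r in rows])

def import_users(conn: sqlite3.Connection, payload: str) -> None:
    # Idempotent re-import: running it twice leaves the same state.
    for row in json.loads(payload):
        conn.execute(
            "INSERT OR REPLACE INTO users (id, name, balance) VALUES (?, ?, ?)",
            (row["id"], row["name"], row["balance"]),
        )

# Round trip against throwaway databases:
schema = "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, balance REAL)"
conn = sqlite3.connect(":memory:")
conn.execute(schema)
conn.execute("INSERT INTO users VALUES (1, 'test-user', 25.0)")
snapshot = export_user(conn, 1)

fresh = sqlite3.connect(":memory:")
fresh.execute(schema)
import_users(fresh, snapshot)
```

A text snapshot like this replaces minutes of clicking through the UI with a one-line import, and the same files double as reproducible fixtures for support and development.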
Once you recognize the opportunities to automate in your organization, the next question is where to get started. The answer comes from an obscure Italian engineer and economist named Vilfredo Pareto.
Pareto observed that 20 percent of Italian landowners controlled 80 percent of the land. Pareto’s essential idea, that a small group can have a large impact, eventually became bigger than the man himself. Today it’s known as the Pareto Principle. The idea is to collect all of your automation ideas, including how much time you spend on each task per week, and then automate the task that takes the largest percentage of your time. And, of course, recognize that human assessments of how long automation will take to create and maintain are often woefully inadequate.
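In practice, that means tallying where the hours actually go and ranking the candidates. A minimal sketch, with made-up task names and hours from a week’s time log:

```python
# Rank automation candidates by weekly time spent, Pareto-style.
def rank_by_time(tasks: dict[str, float]) -> list[tuple[str, float]]:
    total = sum(tasks.values())
    ranked = sorted(tasks.items(), key=lambda kv: kv[1], reverse=True)
    # Attach each task's share of the total week, largest first.
    return [(name, round(100 * hours / total, 1)) for name, hours in ranked]

weekly_hours = {  # illustrative numbers from a time log
    "waiting for builds": 6.0,
    "test data setup": 5.0,
    "bug reporting": 3.0,
    "regression re-runs": 2.0,
}
print(rank_by_time(weekly_hours))
```

The top one or two entries are your first automation targets; everything below the fold can wait.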
Next, sort the opportunities for improvement by value, and look at what might be possible over the next year. Consider how much faster you could move by investing 10 to 20 percent of your time in creating the tools and infrastructure needed to improve the test process, and the side effects that some of those features might have on the software itself.
Instead of a single button you click to test and be done with it, you’ll come up with a series of things that streamline the process. You’ll be able to take that limited time available for actual testing and cover two to four times the ground. With strong unit and service layer tests, you’ll be able to push out a new release faster and with less risk. There’s a lesson here: small things, put together, can have an incredible impact, and all without the maintenance burden and resistance to change so common in “automate all the clicks” test tooling.