
Why test intelligence is the future of testing

Test intelligence is the practice of applying machine learning, advanced analytics and data to solve the challenges that arise in testing.


In brief
  • Metrics and results data can be used to gain insights through forecasting and predictive modeling.
  • Development teams that make the best use of metrics early in the process can refine that process and reap the benefits much sooner.

In the software development life cycle (SDLC), many activities take place across development, testing and infrastructure, and each produces data and information quickly.

It’s relatively easy to generate metrics and reports, but that’s only half the battle. Equally important to gathering the data is analyzing the metrics to derive strategic insights and to enable stakeholders to make better-informed decisions sooner.

If there are 100 bugs or defects, that's an easy number to count, but development teams and stakeholders want to know what those numbers mean. Are they good or bad? Better or worse? Ahead or behind? If you can understand what the numbers mean, you can decide whether you need to extend the project timeline, hire more people, focus on improvements in one area or even change the process.

This is why we have developed the concept of “test intelligence.” Test intelligence is the practice of taking information made available during testing (or any other part of the SDLC) and using that to discover actionable insights beyond “counts of defects” and “percentage complete.”

The concept of test intelligence can be applied to many aspects of test planning, design and execution. This article focuses on test intelligence around reporting data, but the same advanced analytics capabilities and tools can be applied to other complex problems that clients face today.

Insights beyond the numbers

Test intelligence helps stakeholders understand how to interpret and use the data and enables them to deliver better-quality products faster. Development and testing teams apply a range of test intelligence tools and processes to speed up the testing process, improve coverage and meet demands for more innovation.


Rather than being a pure asset, the large amount of data produced during the testing process can result in important clues being lost in a sea of information.

Oftentimes, clients want us to translate a report full of metrics into the things they care about, such as:

[Infographic: the questions stakeholders want their testing data to answer]

Test intelligence concepts can give us the insights to answer these questions, and the possibilities extend far beyond them. Here are two ways predictive modeling can be used:

1. Future test-execution trends

When will testing be complete? Predictive modeling can take into account the defect open rate and defect closure rate, along with team size, historical velocity, application availability, holidays, changes in scope and many other variables, to produce different possible scenarios and show their impact on the timeline. If a team member leaves, how does that affect the timeline? What if we hire more people? What if we add more tests? What are the best-case and worst-case scenarios?
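To make this concrete, here is a minimal sketch of that kind of scenario model in Python. Every input below (backlog size, rates, team sizes) is an illustrative assumption rather than data from a real project, and the linear burn-down is a deliberate simplification of a full predictive model.

```python
# A minimal scenario model: project when the defect backlog burns down
# under different staffing assumptions. All numbers are illustrative.

def weeks_to_complete(open_defects, open_rate, close_rate_per_tester,
                      testers, max_weeks=104):
    """Return the week the backlog reaches zero, or None if it never does.

    open_defects          -- defects currently open
    open_rate             -- new defects raised per week
    close_rate_per_tester -- defects each tester closes per week
    testers               -- current team size
    """
    backlog = open_defects
    for week in range(1, max_weeks + 1):
        backlog += open_rate                        # new defects found
        backlog -= close_rate_per_tester * testers  # defects resolved
        if backlog <= 0:
            return week
    return None

# Compare scenarios: current staffing vs. losing or adding testers.
for label, team in [("current team", 5), ("one tester leaves", 4),
                    ("hire two more", 7)]:
    eta = weeks_to_complete(open_defects=100, open_rate=12,
                            close_rate_per_tester=3, testers=team)
    print(label, "->", f"~{eta} weeks" if eta else "backlog never burns down")
```

Running the scenarios side by side is what turns a raw defect count into a planning conversation: with these illustrative numbers, the same backlog can mean roughly 34 weeks, 12 weeks or no finish date at all, depending on staffing.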

2. Defect trend information

Data can identify potential hot spots. For example, if one particular component or piece of functionality accounts for 80% of the defects, we can investigate what is going on there and then predict when the closure rate will overtake the open rate.
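As a simple illustration, the hot-spot idea can be expressed in a few lines of Python. The defect records and the 50% flagging threshold below are invented for the example; a real implementation would read from a defect-tracking tool and tune the threshold to the project.

```python
# A minimal hot-spot check: which module owns a disproportionate share
# of the open defects? Records and threshold are illustrative.
from collections import Counter

defects = [
    {"id": 1, "module": "checkout"}, {"id": 2, "module": "checkout"},
    {"id": 3, "module": "search"},   {"id": 4, "module": "checkout"},
    {"id": 5, "module": "checkout"}, {"id": 6, "module": "profile"},
]

counts = Counter(d["module"] for d in defects)
total = sum(counts.values())
for module, n in counts.most_common():
    share = n / total
    flag = "  <-- hot spot" if share >= 0.5 else ""
    print(f"{module}: {n} defects ({share:.0%}){flag}")
```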

This helps stakeholders understand the duration of the testing effort and when it’s going to end based on those dynamic drivers and variables. This concept can be applied to an infinite number of use cases.

Using data to ward off the storm

Just as hurricanes pick up a preexisting weather disturbance and gradually gain strength, so, too, do some testing problems. One week, a metric such as test completion sits at 50%. The next, it's at 60%. Over time, how does that pattern continue? We call this prediction the hurricane model because it gives us a sense of how the storm is brewing and lets us map out a few different tracks (see figure 1).

By comparing the past several weeks with the overall history and looking at the trends from the past four, six or eight weeks, we can start to predict when the proverbial storm will hit. Two to three weeks in, as soon as you have a trend and the context of how well you're doing, you can use historical data to model different scenarios and start monitoring small changes.

After seeing the different paths in the hurricane model, one client recently noted, “Looking at these charts, we can see that if we wait until two weeks before launch, we will have to make huge changes, such as hiring five more people just to finish on time, or change the scope. If we look at the chart on the left and make tiny changes now, that outcome changes a lot, and we can get by with minimal staffing changes.”

When we begin a project, we may not have historical data from that particular client, but we can pull in historical data from similar clients. One of the models is based on a concept called time series linear regression. As long as you have some historical trend information — from previous sprints, historical data in the test management tool, collective knowledge or any number of things — we can use that as a starting point. We keep a rolling update as the sprints and releases move on. As the program continues, the predictive model becomes even more accurate.
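To show the shape of that time series linear regression, here is a minimal sketch in Python. The weekly completion figures are invented, and the two-standard-error cone around the fitted slope is a simple stand-in for the many extra variables a production model would fold in.

```python
# A minimal "hurricane" forecast: fit a linear trend to weekly test
# completion and project a central track plus a cone of plausible tracks.
# All figures are illustrative.
import numpy as np

weeks = np.arange(1, 7)                            # weeks observed so far
completion = np.array([22.0, 30, 37, 41, 50, 56])  # % of tests executed

slope, intercept = np.polyfit(weeks, completion, 1)   # trend line
residuals = completion - (slope * weeks + intercept)
slope_se = residuals.std() / np.sqrt(((weeks - weeks.mean()) ** 2).sum())

def weeks_to_reach(target, rate, start):
    """Solve start + rate * w = target for w."""
    return (target - start) / rate

central = weeks_to_reach(100, slope, intercept)
best = weeks_to_reach(100, slope + 2 * slope_se, intercept)   # faster track
worst = weeks_to_reach(100, slope - 2 * slope_se, intercept)  # slower track

print(f"Central track: 100% around week {central:.1f}")
print(f"Cone: week {best:.1f} (best case) to week {worst:.1f} (worst case)")
```

As each new week of data arrives, refitting the line narrows the cone, which is exactly the rolling update described above.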

[Figure 1: the hurricane model predicting possible tracks for the testing effort]

Other test intelligence use cases

Test intelligence concepts can be applied to many aspects of testing — ranging from intelligent exploratory testing crawlers and predictive defect analytics to using natural language processing (NLP) for defect root cause analysis.

When attempting to get at the root cause of a problem, development teams often categorize reports through drop-down menus, with options such as "coding issue" or "data problem," but the choices themselves can be biased. NLP can be used to study the written first-person comments about what went wrong and to automate categorization without that bias. NLP can also be used to speed up the triage process and to prioritize problems by severity.
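As a rough sketch of how that could work, the snippet below trains a small text classifier on invented defect comments using scikit-learn. The comments, labels and model choice are all illustrative assumptions; a real pipeline would learn from a much larger defect history.

```python
# A minimal NLP categorizer: learn defect categories from free-text
# comments instead of biased drop-down choices. All data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

history = [
    ("null pointer thrown when saving the order", "coding issue"),
    ("customer record missing postal code in the feed", "data problem"),
    ("timeout because the env config points at the old host", "configuration"),
    ("array index out of bounds in price calculation", "coding issue"),
    ("duplicate rows loaded from the nightly extract", "data problem"),
    ("wrong TLS certificate configured on the test server", "configuration"),
]
texts, labels = zip(*history)

# TF-IDF features plus a simple classifier learn categories from the
# first-person comments themselves.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["null pointer in the price calculation"]))
```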

Oftentimes, clients ask for help deciphering the defects and root causes, and what they really want to know is:

  • If we have time to spend, where should we spend it?
  • Are there consistent issues with how we’re doing configurations?
  • Is there one part of the code within one of our business work streams that’s causing an issue?


Test intelligence: where do we go from here?

Test intelligence can seem intimidating, but it’s easy to start with a few of the simple individual pieces that you can mature over time. A good place to begin is to build a data mart. Start by putting all data from any particular project in a standardized format and then apply tools such as predictive modeling and hurricane charts.
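For instance, the first step toward that data mart can be as simple as mapping each tool's export into one shared record shape. Everything in the sketch below (the source field names, the schema, the sample rows) is hypothetical, chosen only to show the normalization pattern.

```python
# A minimal data-mart step: normalize records from two (hypothetical)
# tools into one standard schema so the same models can run on either.
from datetime import date

def from_tool_a(row):
    # Hypothetical "tool A" export uses its own key names.
    return {"project": row["proj"], "week": row["wk_ending"],
            "tests_run": row["executed"], "tests_passed": row["passed"],
            "defects_open": row["open_bugs"]}

def from_tool_b(row):
    # Hypothetical "tool B" export: different keys, same information.
    return {"project": row["project_name"], "week": row["report_week"],
            "tests_run": row["total_runs"], "tests_passed": row["ok"],
            "defects_open": row["defects"]}

mart = [
    from_tool_a({"proj": "alpha", "wk_ending": date(2023, 3, 3),
                 "executed": 120, "passed": 96, "open_bugs": 14}),
    from_tool_b({"project_name": "alpha", "report_week": date(2023, 3, 10),
                 "total_runs": 140, "ok": 119, "defects": 11}),
]
for record in mart:
    print(record)
```

Once every project's data shares one schema, the predictive models and hurricane charts described earlier can be pointed at any of them without rework.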

Predictive modeling, trend information and hot-spot information are reusable concepts, but how they are applied can vary: depending on the project, different predictions may be more meaningful, so the same tools and techniques may be put to work in different ways.

Instead of being lost in data, you can apply test intelligence concepts to identify trends and respond to them quickly. Earlier access to data and earlier corrections can reduce the overall number of tests and overall testing time, leading to quicker turnaround and saving hundreds of hours of manual effort.



Summary

Test intelligence allows teams to speed up product deployment while assuring quality. The basic concepts of test intelligence can be applied to projects in many different ways, depending on your needs.
