With digital adoption accelerating faster than ever before, rapid time to market and continuous delivery have become prerequisites for competitive differentiation. While CI/CD pipeline-based software development has become the norm, QE's role in the CI/CD-based development process is equally important. Continuous integration increases the frequency of software builds, and with it the need to run all tests, translating into an exponential increase in time and resource consumption.
Ensuring a reliable release depends largely on the ability to test early and often, addressing defects as soon as they are committed to the pipeline. While there is a steadfast focus on continuous testing before any new code is merged into the existing codebase, the effort spent identifying the right set of tests to run deserves more attention. An intelligent approach is to prioritize test cases based on what changed in the latest application build while skipping tests that have already run against validated portions of the application under test.
This article outlines some of the ways to accomplish this objective by applying Artificial Intelligence (AI) principles.
Intelligent prioritization for continuous integration and continuous delivery with QE
This involves identifying the tests that map to the changes in a new code build. The changes are evaluated to create new test cases with a high chance of failure, since the modified code has not been tested before. By deprioritizing test cases with meager failure rates, having already run widely in earlier build stages, and prioritizing newer test cases derived from the build changes, the time and effort involved in quality assurance is reduced. Using model-based testing techniques to create the required tests and then applying ML-based prioritization to them makes continuous testing more efficient, as sketched below.
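To make the idea concrete, the sketch below shows one simple way such prioritization could work: tests are scored by how often they failed historically when the same files changed, and brand-new tests rank first. All names and data structures here are hypothetical, invented purely for illustration.

```python
# Minimal sketch of change-based test prioritization (illustrative only).
# Assumes a hypothetical history of which tests failed after changes to
# which source files; the function and field names are invented.
from collections import defaultdict

def build_failure_map(history):
    """history: iterable of (changed_files, failed_tests) pairs from
    past builds. Returns test -> {file: co-failure count}."""
    fail_counts = defaultdict(lambda: defaultdict(int))
    for changed_files, failed_tests in history:
        for test in failed_tests:
            for f in changed_files:
                fail_counts[test][f] += 1
    return fail_counts

def prioritize(all_tests, changed_files, fail_counts, new_tests):
    """Score each test by how often it failed when these files changed;
    brand-new tests (no history yet) are ranked first."""
    def score(test):
        if test in new_tests:          # untested code paths: highest risk
            return float("inf")
        return sum(fail_counts[test].get(f, 0) for f in changed_files)
    return sorted(all_tests, key=score, reverse=True)

# Usage: run the top of the ordered list first, defer near-zero scores.
history = [({"checkout.py"}, {"test_checkout_total"}),
           ({"cart.py", "checkout.py"}, {"test_cart_add", "test_checkout_total"})]
fail_counts = build_failure_map(history)
ordered = prioritize(["test_cart_add", "test_checkout_total", "test_login"],
                     {"checkout.py"}, fail_counts, new_tests={"test_login"})
print(ordered)  # new test first, then tests that co-failed with checkout.py
```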
Read more: Intelligent test automation in a DevOps world
Predictive test selection
Predictive test selection is a relatively new approach that uses ML models to choose which test cases to run based on an analysis of code changes. Historical code changes and the corresponding test-run analytics serve as input to the ML model, which learns the relationship between code change characteristics and test outcomes. The model can then suggest the most apt set of test cases to run for a given code change, leaving out unnecessary tests and saving time and resources. The model is continually updated with the test results from each run. Google has successfully used this approach to reduce its test suite to only the relevant tests.
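As an illustration of the idea (not Google's actual system), the sketch below trains a classifier on hypothetical features of past (code change, test) pairs and selects only tests whose predicted failure probability crosses a threshold. The feature set and training data are invented for clarity.

```python
# Illustrative sketch of predictive test selection. A production system
# would derive richer signals (file paths, authors, dependency distance
# between changed code and each test) from real pipeline history.
from sklearn.ensemble import GradientBoostingClassifier

# Each row: features of one (code change, test) pair.
# [files_changed, lines_changed, past_failure_rate, dependency_distance]
X_train = [
    [3, 120, 0.30, 1],
    [1,  10, 0.01, 5],
    [5, 400, 0.25, 2],
    [2,  35, 0.02, 4],
]
y_train = [1, 0, 1, 0]  # 1 = test failed after the change, 0 = passed

model = GradientBoostingClassifier().fit(X_train, y_train)

def select_tests(candidates, threshold=0.2):
    """Run only tests whose predicted failure probability exceeds the
    threshold; the remaining tests are skipped for this build."""
    selected = []
    for test_name, features in candidates:
        p_fail = model.predict_proba([features])[0][1]
        if p_fail >= threshold:
            selected.append((test_name, p_fail))
    return sorted(selected, key=lambda t: t[1], reverse=True)

print(select_tests([("test_payment", [4, 250, 0.20, 1]),
                    ("test_profile", [1, 5, 0.01, 6])]))
```

In keeping with the feedback loop described above, such a model would be refit periodically as new run results arrive, so its suggestions track the evolving codebase.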
Furthermore, organizations have adopted test data generation tools and ML models to predict the minimum set of test cases needed to achieve optimal coverage. This predictability is critical because it lets developers ascertain the level of coverage for each new code build before it is committed to the larger codebase.
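One well-known way to approximate this minimum-set problem is greedy set cover over per-test coverage data. The sketch below assumes hypothetical coverage records mapping each test to the code blocks it exercises; the identifiers are invented.

```python
# Greedy set cover: repeatedly pick the test that covers the most
# still-uncovered code blocks. Not guaranteed optimal, but a standard
# approximation for minimizing a suite at a target coverage level.
def minimal_test_set(coverage, required):
    """coverage: test name -> set of covered code blocks.
    required: set of blocks that must be covered."""
    uncovered, chosen = set(required), []
    while uncovered:
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        gained = coverage[best] & uncovered
        if not gained:            # remaining blocks unreachable by any test
            break
        chosen.append(best)
        uncovered -= gained
    return chosen, uncovered

coverage = {
    "test_a": {"b1", "b2", "b3"},
    "test_b": {"b3", "b4"},
    "test_c": {"b4", "b5"},
}
chosen, missed = minimal_test_set(coverage, {"b1", "b2", "b3", "b4", "b5"})
print(chosen, missed)   # ['test_a', 'test_c'] set()
```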
Identify and obviate flaky tests
Flaky tests pass and fail intermittently, even in the absence of code changes. Determining what causes these failures is hard and cumbersome, and teams often lose multiple run cycles identifying and remedying such tests. ML can play a crucial role in identifying the patterns that signal flakiness. The cost benefit of such identification is substantial, especially in large test suites, where digging for the root cause of flakiness can cost dearly. By effectively utilizing the feedback and learning loop of ML algorithms, teams can identify and address the underlying causes of flakiness and classify such tests into their most probable categories.
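Even before any ML modeling, a simple heuristic surfaces many flaky candidates: a test that both passed and failed on the same code revision cannot be failing because of the change itself. The sketch below applies that idea to hypothetical run logs; a fuller ML approach would add signals such as timing variance, test-order dependence, and shared-resource usage.

```python
# Flag tests whose outcome "flips" on an unchanged revision.
# Run logs are assumed to be (test, commit, passed) tuples.
from collections import defaultdict

def flaky_candidates(run_history, min_runs=5, flip_threshold=0.1):
    outcomes = defaultdict(lambda: defaultdict(list))
    for test, commit, passed in run_history:
        outcomes[test][commit].append(passed)

    flagged = []
    for test, by_commit in outcomes.items():
        runs = sum(len(r) for r in by_commit.values())
        # A commit "flips" if the same revision saw both pass and fail.
        flips = sum(1 for r in by_commit.values() if len(set(r)) > 1)
        if runs >= min_runs and flips / len(by_commit) >= flip_threshold:
            flagged.append((test, flips / len(by_commit)))
    return sorted(flagged, key=lambda t: t[1], reverse=True)

history = [("test_upload", "c1", True), ("test_upload", "c1", False),
           ("test_upload", "c2", True), ("test_upload", "c2", False),
           ("test_upload", "c3", True),
           ("test_login", "c1", True), ("test_login", "c2", True),
           ("test_login", "c3", True), ("test_login", "c4", True),
           ("test_login", "c5", True)]
print(flaky_candidates(history))   # -> [('test_upload', 0.666...)]
```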
Bringing intelligence into QA automation for continuous integration and delivery
With the rapid evolution of digital systems, traditional QA automation techniques have fallen behind because of their inability to manage massive datasets. Applications concerned with customer experience, IoT, and augmented/virtual reality often encounter exponentially large datasets generated in real time and across a wide range of formats. Test automation systems that can make a quality difference in this landscape must make extensive use of data mining, analysis, and self-learning techniques. Not only must they handle mammoth datasets, they must also transform test lifecycle automation into something adaptive and cognitive.
Digital transformation acts as an accelerator for faster code development with quality assured from the initial stages. Efforts to adopt AI/ML/NLP and similar innovative technologies to transform QA for continuous, quality code releases are already underway. This is validated by the World Quality Report 2021-22, which notes that smart technologies in QA and testing are no longer in the future: they're arriving. Confidence is high, plans are robust, and skills and toolkits are being developed. The sooner organizations adopt these techniques and practices, the faster they can change the contours of their software development release cycles.