AI Competitive Capabilities

The following information is drawn from Gartner’s analysis:

Market Guide for AI-Augmented Software-Testing Tools

13 February 2024 | ID G00783848 | 33 min read

By Joachim Herschmann, Thomas Murphy, Jim Scheibmeir, Frank O'Connor, Deacon D.K Wan

Product Testing Capabilities

ACCELQ
Deployment model: Public cloud, private cloud, on-premises
User roles addressed: SDETs; test automation engineers; manual testers
Key artifacts created:
  • Acceptance criteria
  • Test cases
  • Test scripts for automation (for both open-source and commercial products)
  • Business process models
  • Test data
  • Reports
  • Inbuilt version control and branching without the need for external source code management tools, providing a collaborative workspace in the cloud

Applitools
Deployment model: Public cloud, private cloud, on-premises
User roles addressed: Developers; SDETs; test automation engineers
Key artifacts created:
  • Test cases
  • Test scripts for automation (for both open-source and commercial products)
  • Test data

Appvance
Deployment model: Private cloud, on-premises, hybrid
User roles addressed: SDETs; test automation engineers; quality/test managers
Key artifacts created:
  • Test scripts for automation (for both open-source and commercial products)
  • Test data
  • Reports
  • Coverage maps
  • Trending information and issue management system tickets

aqua cloud
Deployment model: Public cloud, private cloud, on-premises
User roles addressed: Test automation engineers; quality/test managers; business analysts and SMEs; manual testers
Key artifacts created:
  • Acceptance criteria
  • Test cases
  • Test scripts for automation (for both open-source and commercial products)
  • Test data
  • Reports
  • BDD
  • Improvement suggestions for existing artifacts

Avo Automation
Deployment model: Public cloud, private cloud, on-premises
User roles addressed: Developers; SDETs; test automation engineers; quality/test managers; business analysts and SMEs; manual testers
Key artifacts created:
  • Test cases
  • Test scripts for automation (for both open-source and commercial products)
  • Test data
  • Reports

Functionize
Deployment model: Public cloud, private cloud, on-premises
User roles addressed: SDETs; test automation engineers; quality/test managers
Key artifacts created:
  • Test cases
  • Test data
  • Reports

Katalon
Deployment model: Public cloud, private cloud, on-premises
User roles addressed: SDETs; test automation engineers; quality/test managers
Key artifacts created:
  • Acceptance criteria
  • Test cases
  • Test scripts for automation (for both open-source and commercial products)
  • Business process models
  • Test data
  • Reports
  • Screenshots, video recordings, test execution logs and defect observations

Keysight
Deployment model: Public cloud, private cloud, on-premises, hybrid
User roles addressed: SDETs; test automation engineers; quality/test managers
Key artifacts created:
  • Acceptance criteria
  • Test cases
  • Test scripts for automation (for both open-source and commercial products)
  • Business process models
  • Test data
  • Reports

mabl
Deployment model: Public cloud
User roles addressed: Test automation engineers; quality/test managers; manual testers
Key artifacts created:
  • Test cases
  • Test scripts for automation (for both open-source and commercial products)
  • Test data
  • Reports
  • Accessibility reports, UI and API performance reports

OpenText
Deployment model: Public cloud, private cloud, on-premises
User roles addressed: Test automation engineers; quality/test managers; business analysts and SMEs
Key artifacts created:
  • Acceptance criteria
  • Test cases
  • Test scripts for automation (for both open-source and commercial products)
  • Business process models
  • Test data
  • Reports

Parasoft
Deployment model: Private cloud, on-premises, Amazon and Microsoft Azure images
User roles addressed: Developers; test automation engineers; quality/test managers
Key artifacts created:
  • Acceptance criteria
  • Test cases
  • Test scripts for automation (for both open-source and commercial products)
  • Test data
  • Reports
  • Compliance reports for standards-driven development (for example, functional safety and security), specific to the standard required, to support audits

Quinnox
Deployment model: Public cloud, private cloud
User roles addressed: Developers; SDETs; test automation engineers; quality/test managers; business analysts and SMEs; manual testers
Key artifacts created:
  • Test scenarios from Jira user stories
  • Test cases
  • Test scripts for automation (for both open-source and commercial products)
  • Business process models
  • Test data
  • Reports

testRigor
Deployment model: Public cloud, private cloud, on-premises
User roles addressed: Developers; SDETs; test automation engineers; manual testers; product managers
Key artifacts created:
  • Test cases
  • Test scripts for automation (for both open-source and commercial products)
  • Test data
  • Reports

Tricentis
Deployment model: Public cloud, private cloud, on-premises
User roles addressed: Test automation engineers; quality/test managers; manual testers
Key artifacts created:
  • Test cases
  • Test scripts for automation (for both open-source and commercial products)
  • Test data
  • Reports

UiPath
Deployment model: Public cloud, private cloud, on-premises
User roles addressed: Developers; test automation engineers; quality/test managers; business analysts and SMEs
Key artifacts created:
  • Test cases
  • Test scripts for automation (for both open-source and commercial products)
  • Business process models
  • Test data
  • Reports

Overview

Key Findings

  • Software engineering leaders are now prioritizing development productivity to enhance market responsiveness and build software applications more efficiently, while also aiming to maintain high quality. To meet this challenge, they are increasingly turning to AI-augmented testing tools.

  • New vendors continue to enter the market riding the wave of AI hype, while established vendors are extending their offerings organically or through acquisitions. Software engineering leaders find it difficult to navigate this constantly evolving market, where many vendors offer a wide range of testing capabilities that are increasingly powered by AI.

  • Global companies often cannot find a single solution that meets all of their requirements due to the volume and diversity of applications that need testing. The number of countries they operate in and the numerous process and policy requirements add further complexity.

Recommendations

Software engineering leaders responsible for software quality and testing should:

  • Maximize the value of AI-augmented software-testing tools by identifying areas of software testing where AI will be most impactful to the organization. For example, it may prove useful for generating test cases directly from user stories.

  • Use this research to select vendors by evaluating how they can improve efficacy in each area of software testing while also addressing security and legal risks related to the use of AI.

  • Increase implementation success in complex, multinational organizations by allowing for the use of focused solutions that are optimized for solving specific testing problems, such as optimizing existing regression sets.

Strategic Planning Assumption

By 2027, 80% of enterprises will have integrated AI-augmented testing tools into their software engineering toolchain, which is a significant increase from approximately 15% in early 2023.

Market Definition

Gartner defines AI-augmented software testing tools as enablers of continuous, self-optimizing and adaptive automated testing through the use of AI technologies. The capabilities run the gamut of the testing life cycle, including test scenario and test case generation, test automation generation, test suite optimization and prioritization, test analysis and defect prediction, as well as test effort estimation and decision making. These tools help software engineering teams to increase test coverage, test efficacy and robustness. They assist humans in their testing efforts and reduce the need for human intervention in the different phases of testing.

The increased complexity of modern applications and the ongoing heavy dependence on manual testing impact overall developer productivity, product reliability, stability and compliance, as well as the operational efficiency of final products. AI-augmented software testing tools help teams build confidence in the quality of their release candidates and support software engineering leaders and their teams in making informed decisions regarding product release.

Through integration with other elements of the development ecosystem, AI-augmented software testing tools can provide more efficient test coverage, reduce flaky tests and speed the defect remediation process. This helps to improve software engineering team productivity and operational efficiency, accelerate defect remediation and ensure adherence with internal and external software development standards.

AI-augmented software testing tools support multiple use cases, including, but not limited to:

  • Agile product delivery — Operationalize continuous testing of small increments

  • Continuous quality — Support “shift-left” and “shift-right” testing practices ranging from design validation to testing in production

  • Layered testing — Test key layers of an application, such as units, components, APIs, services and the user interface (UI)

  • Cloud-native application testing — Support testing of cloud-native applications across hybrid and multicloud environments

  • Mobile app testing — Build/test/deliver native mobile and mobile web applications

  • Failure prediction — Analyze past failure data to predict and prescribe remediation of latent or lingering risks

  • Regulatory compliance — Support for compliance, auditing, traceability and governance

Must-Have Capabilities

The must-have capabilities for this market include:

  • Natural-language-driven test authoring — Automatically generate a set of test cases based on a natural language description (written or verbal) of a use case or a requirement

  • Interactive AI-powered assistant — Interact with the tool by using natural language to frequently iterate over a set of test steps to refine and improve the fidelity and effectiveness of tests
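As an illustration of the first capability, the sketch below derives test-case skeletons from "when ..., then ..." phrasing in a requirement. Commercial tools use LLMs for this; the rule-based parsing and the field names here are hypothetical and only show the input/output shape.

```python
import re

def generate_test_cases(requirement: str) -> list[dict]:
    """Derive skeleton test cases from a natural-language requirement.

    Illustrative only: real tools use LLMs; this sketch keys off simple
    'when ..., then ...' phrasing to show the expected input/output shape.
    """
    cases = []
    for i, sentence in enumerate(re.split(r"(?<=[.!?])\s+", requirement.strip())):
        m = re.search(r"when (.+?),\s*(?:then\s+)?(.*)", sentence, re.IGNORECASE)
        if m:
            cases.append({
                "id": f"TC-{i + 1}",                       # hypothetical ID scheme
                "action": m.group(1).strip(),              # the stimulus
                "expected": m.group(2).strip().rstrip("."),  # the expected result
            })
    return cases

story = ("When the user enters a wrong password, an error message is shown. "
         "When the user enters valid credentials, then the dashboard loads.")
cases = generate_test_cases(story)
```

Each returned dictionary is a test-case skeleton that a tester (or an interactive assistant) would then refine into concrete steps and data.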

Standard Capabilities

The standard capabilities for this market include:

  • Manual to automated test conversion — Generate automated tests for a range of different automation tools by analyzing manual test case descriptions already captured in office documents, test management tools or other means of documentation, or by observing real user interactions.

  • Automated UI, API and visual testing — Support for automated testing of web, native mobile and desktop applications through the UI, the API and services interfaces. Support for visual testing to highlight crucial changes to an application’s layout and/or content that break the user experience or violate accessibility requirements. This is achieved via automatic recognition of objects, images, text, audio and video in UIs, much as a human would, in order to detect quality issues.

  • Self-healing for test scripts — Automatically detect why a test case failed and recommend a fix (minimum capability) or automatically update the test case to fix the issue.

  • Test orchestration and prioritization — Orchestrate test execution through integrations across development, delivery and execution environments. Additionally, prioritize, optimize and parallelize the execution of tests based on criteria such as the reliability (flakiness) of tests, code affected by changes, and which browsers and end-user devices have been updated (change impact analysis) to optimize DevOps workflows.

  • Defect prediction — Identify gaps in quality and defect targets, minimize redundancy and improve the effectiveness and efficiency of testing processes by detecting patterns in historical quality assurance (QA) data.
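The self-healing capability above can be sketched in a few lines. This is a toy model, not any vendor's implementation: the DOM is a list of attribute dictionaries, and "healing" simply picks the element that still matches the most locator attributes.

```python
def find_element(dom, locator):
    """Return the element whose attributes exactly match the locator, or None."""
    for el in dom:
        if all(el.get(k) == v for k, v in locator.items()):
            return el
    return None

def self_healing_find(dom, locator):
    """Fall back to the closest-matching element when the locator breaks.

    Illustrative sketch: real tools weight many signals (DOM position,
    visual appearance, history); here we just count matching attributes.
    """
    el = find_element(dom, locator)
    if el is not None:
        return el, locator  # locator still valid
    # Score every element by how many locator attributes it still matches.
    scored = [(sum(e.get(k) == v for k, v in locator.items()), e) for e in dom]
    best_score, best = max(scored, key=lambda s: s[0])
    if best_score == 0:
        return None, locator  # nothing plausible: report a real failure
    healed = {k: best[k] for k in locator if k in best}  # updated locator
    return best, healed

# The button's id changed from "submit" to "submit-v2" in a new release.
dom = [{"tag": "button", "id": "submit-v2", "text": "Submit"},
       {"tag": "a", "id": "help", "text": "Help"}]
broken = {"tag": "button", "id": "submit", "text": "Submit"}
element, healed = self_healing_find(dom, broken)
```

The minimum capability described above would stop after scoring and merely recommend `healed` for review; the stronger variant applies it automatically.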

Optional Capabilities

The optional capabilities for this market include:

  • Model management — Support for different ML models for optimized software testing, including “bring-your-own model” (BYOM), out-of-the-box models or select third-party options.

  • Service virtualization — Support for shift-left testing through the creation of virtual orchestrated services (not just simple mocking) instead of production services.

  • Test data generation — Generate large volumes of synthetic test data that retain the structure and statistical properties (like correlations) of production data without a 1:1 relationship to the original data.

  • Dashboard — An extensible and configurable web dashboard that provides teams with visibility into the overall test process, the quality of software components, interdependencies between services and underlying environments, with drill-down options to view individual test results. The dashboard is customizable, enabling information curation by individuals and teams, and is extensible via plugins, webhooks and custom apps.

  • Marketplace — Facilitate the exchange of skills and knowledge, enable the discovery of shared test repositories and provide a curated collection of approved tools and libraries.
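To make the distinction between service virtualization and simple mocking concrete, here is a minimal sketch of a stateful virtual service. All class and method names are invented for illustration; real virtualization tools also model protocols, latency and data.

```python
class VirtualOrderService:
    """Stateful stand-in for a production order service (hypothetical API).

    Unlike a simple mock that replays one canned response, the virtual
    service keeps enough state to honour a realistic call sequence
    during shift-left testing.
    """

    def __init__(self):
        self._orders = {}
        self._next_id = 1

    def create_order(self, item: str) -> dict:
        order = {"id": self._next_id, "item": item, "status": "PENDING"}
        self._orders[self._next_id] = order
        self._next_id += 1
        return order

    def pay(self, order_id: int) -> dict:
        order = self._orders[order_id]
        if order["status"] != "PENDING":
            # Enforce the workflow, as the real service would.
            raise ValueError(f"cannot pay order in state {order['status']}")
        order["status"] = "PAID"
        return order

    def get(self, order_id: int) -> dict:
        return self._orders[order_id]

# A test can exercise the full create-then-pay workflow with no backend.
svc = VirtualOrderService()
order = svc.create_order("widget")
svc.pay(order["id"])
```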

Market Description

AI-augmented software-testing tools assist software engineering teams in creating, maintaining and executing a diverse set of tests. They also enable teams to analyze test results and govern testing activities. Their primary purpose is to automate as much of this process as possible.

While traditional software test automation tools mainly focus on automating test execution through scripts using predefined paths, AI-augmented software-testing tools support additional use cases. Examples include:

  • Generating test scenarios from user stories or real-time user behavior

  • Iterative refinement of test artifacts through interactive assistants

  • Support for optimizing test suites

AI-augmented software-testing solutions are found as stand-alone tools or as part of comprehensive software testing suites. Software engineering teams will benefit from options for integrating with integrated development environments (IDEs), DevOps platforms and AI services such as large language models (LLMs).

AI-augmented software-testing tools provide value through greater efficiency in the creation and maintenance of test assets and by aiding teams in optimizing test efforts, providing them with early feedback about the quality of release candidates.

AI-augmented software-testing tools offer a wide range of capabilities across different portions of the test workflow and typically focus on one of two areas. The first group of solutions primarily focuses on support for testing packaged applications from vendors such as SAP, Salesforce, Oracle, Microsoft and Workday as well as industry- or vertical-specific applications. The second group of solutions addresses the testing needs of software engineering teams that build custom applications, such as web or mobile applications (both internal- and external-facing).

This research focuses on the latter, exploring the dynamics of the market and highlighting vendors that provide innovative tools to help software engineering teams improve their testing efficacy. We group functional capabilities of AI-augmented software-testing tools under four categories (see Figure 1):

  • Reporting and analysis

  • Artifact management and integration

  • Test creation and maintenance

  • Test type support

Figure 1: AI-Augmented Software Testing Tool Capabilities

 

Market Direction

How Will the Market Evolve in the Years Ahead?

The AI-augmented testing tools market is very dynamic, with a constant inflow of entrants that includes both new vendors and vendors in adjacent market spaces. Customers are demanding highly automated, increasingly touchless testing of software, while also needing to address concerns about security and data privacy — for example, regarding test data.

New investments in software development technology that ensure faster software delivery with higher quality have led to faster growth for software testing solutions. The introduction of new AI technologies, particularly generative AI (GenAI), has further fueled the desire to modernize software development and testing in order to take advantage of these innovations (see Predicts 2024: Generative AI Is Reshaping Software Engineering). More than 52% of IT leaders expect their organization to use generative AI to build software, according to respondents of the 2023 Gartner IT Leader Poll on Generative AI for Software Engineering.1 And the World Quality Report 2023-2024, published by Capgemini and Sogeti, found that 77% of organizations today consistently invest in AI and utilize it to optimize quality assurance processes.2

These trends will become more significant as enterprises accelerate their digital business transformation and AI continues to permeate every aspect of IT. Gartner forecasts that, in 2024, spending on test tools will grow by 6.7% to reach $3 billion in constant U.S. dollars. By 2027, the market is expected to reach $3.6 billion in constant U.S. dollars, growing at a compound annual growth rate of 7% between 2022 and 2027 (see Forecast: Application Development Software, Worldwide, 2021-2027).

What Is Driving the Adoption of AI-Augmented Testing Tools?

Additional factors will continue to drive the rapid adoption of AI-augmented software-testing tools in the next two to three years. Such factors include:

  • Cognitive overload as product teams struggle to deal with the increasing complexity of applications. Complex architectures increasingly require an understanding of an array of elements including cloud-native architecture, replacement of technologies, use of microservices, support for multiple frontends and AI-powered services.

  • Increased adoption of agile and DevOps, which results in a faster development and delivery cadence, but also comes with additional responsibilities.

  • An existing and ever-increasing backlog of work to replace manual tests with automated tests that support continuous delivery of software.

  • A shortage of skilled test automation engineers to close this automation backlog.

  • The need to reduce test operation and maintenance costs associated with traditional tools and open-source software (OSS) solutions.

  • The need to improve the user experience of testing tools, so testers can be more productive and avoid mistakes.

  • The need to meet compliance regulations such as GDPR for data privacy and WCAG 2.1 Level AA for accessibility.

AI-augmentation is an important step in the evolution of software testing and can help to reduce business continuity risks when critical applications and services are severely compromised or stop working altogether (see Improve Software Quality by Building Digital Immunity). A digital immune system combines practices and technologies from AI-augmented testing, chaos engineering, observability, autoremediation, site reliability engineering (SRE) and software supply chain security. This is designed to increase the resilience of products, services and systems, and it is more relevant than ever (see Top Strategic Technology Trends for 2023: Digital Immune System).

Market Analysis

Many organizations continue to rely heavily on manual testing and aging testing technology. Those that transition from manual testing and start to invest in test automation tools often initially focus on testing through the UI layer.

Consequently, many vendors initially focused on using AI technologies to make it cheaper and faster to produce and maintain UI-driven functional tests. Examples include using AI for object recognition, visual testing or self-healing. Testing tools with self-healing can detect changes in an object’s properties and automatically update the test with the new attributes to keep the designed test cases functional. Self-healing is quickly becoming a commodity that end users simply expect to be part of standard product capabilities.

However, market conditions have created a need for more intelligent testing that is context-aware, data-driven and increasingly autonomous (see Figure 2).

Figure 2: The Path to Autonomous Testing

 

The release of ChatGPT in November 2022 showed the potential of large language models and triggered a race to enrich products with generative AI and interactive assistants (see Quick Answer: How Can Generative AI be Used to Improve Testing Activities?). Generative AI is revolutionizing the way people interact with software applications. In 2023, test tool vendors started building prototypes powered by generative AI to explore how it could be of use in software testing (see Using Generative AI to Improve Software Testing).

This new ability to create content without learning complex tools and systems is changing the way people think about software and interact with it (see How Generative AI Will Change User Experience). Users can now create code, documentation, test scenarios, test cases and even test automation scripts by giving an AI relatively straightforward directions or pointing to existing artifacts for input. The impact of generative AI and conversational prompt-based interfaces on user experience (UX) and user interfaces will be profound (see Figure 3).

Figure 3: User Experiences Before and After Conversational Prompt-Based Interfaces

 

Several vendors rolled out beta versions or private previews of LLM-powered, AI-infused products and tools to pilot with their most trusted customers. Overall, the ability to support use cases in software testing still varies a great deal. Each vendor typically focuses on AI-augmentation for particular scenarios while balancing product investments in other areas.

Software engineering leaders should evaluate newer, smaller vendors for specific use cases in addition to established larger vendors with broader scope. While their ability to support global enterprises may be limited, smaller vendors may offer focused tools that deliver quick ROI for your use case. Larger vendors face the challenge of continuing to innovate on top of legacy codebases that are typically focused on specific markets and scenarios. However, they typically benefit from their larger installed bases and their partner ecosystems.

Regardless of size, vendors will continue evolving their capabilities through organic development, acquisitions and third-party integrations, while remaining focused on their core value propositions. We expect vendors will focus on innovating beyond basic test automation and test management to deliver features for automated test design, automatic generation of test cases from artifacts such as requirements, and advanced test result analytics. These expanded capabilities will help to make quality engineering and testing accessible to a wider range of user personas, including those that do not have deep testing expertise.

Gartner has identified five areas where AI is most impactful in software testing (see Quick Answer: How Can AI Provide Benefits for Software Testing?). These five areas are examined in some detail below.

Common Use Cases for AI-Augmented Testing Tools

Test Planning and Prioritization

Testing is always a risk-based activity; because testers cannot test everything, even with automation, trade-offs and compromises are usually required. AI can help to minimize risk by optimizing test sets, increasing test coverage, selecting and prioritizing critical tests based on contextual information, and reducing the cognitive load of the testers.

Key features include:

  • Intelligent test selection: Determine the optimal number of tests required to satisfy a desired level of risk or achieve a defined level of test coverage. This involves selecting the relevant regression test scripts for a release based on contextual information such as code changes made, bugs fixed and resource availability.

  • Test set optimization: Remove duplicate test cases by identifying redundancies and similarities in test-case inventories. This allows you to optimize execution sequencing.
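A minimal sketch of the test-set optimization idea: reduce each test case to its set of steps and drop cases whose step overlap (Jaccard similarity) with an already-kept case exceeds a threshold. Production tools use far richer similarity signals; the threshold and data here are illustrative.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two step sets."""
    return len(a & b) / len(a | b)

def dedupe_tests(tests: dict[str, list[str]], threshold: float = 0.8) -> list[str]:
    """Keep one representative from each cluster of near-duplicate test cases."""
    kept: list[str] = []
    for name, steps in tests.items():
        # Keep this test only if it is not too similar to anything kept so far.
        if all(jaccard(set(steps), set(tests[k])) < threshold for k in kept):
            kept.append(name)
    return kept

tests = {
    "login_ok":      ["open login", "enter valid creds", "submit", "see dashboard"],
    "login_ok_copy": ["open login", "enter valid creds", "submit", "see dashboard"],
    "login_bad_pw":  ["open login", "enter bad password", "submit", "see error"],
}
kept = dedupe_tests(tests)
```

Here the verbatim duplicate is dropped while the genuinely different negative-path test survives, shrinking the regression set without losing coverage.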

AI-augmented testing tools can also feed targeted insights into DevOps platforms, value stream management platforms and observability platforms to improve the monitoring of software delivery value streams. For more information, see Market Guide for Value Stream Management Platforms and Magic Quadrant for DevOps Platforms.

Test Creation and Maintenance

Test creation and maintenance is the most fertile ground for AI-augmentation and one of the areas where we see the greatest innovation and competition. Most vendors have focused their efforts on these tasks, and this is where generative AI is having a particularly big impact. For example, generative AI can create a set of initial artifacts, such as high-level test scenarios, and interactive assistants can then help quality engineers to refine them.

Key features include:

  • Test-scenario and test-case generation: Generate test scenarios, abstract tests, or test step descriptions by feeding in requirements, user stories and additional contextual information contained in documents or provided via written or oral instructions.

  • Automatic script generation: Generate scripts for test automation tools and frameworks (such as Selenium) or commercial platforms by analyzing a range of different data sources. One of the most interesting examples is the analysis of existing manual test cases and descriptions captured in Microsoft Excel or Word files. Other examples include analyzing log files, monitoring user activity, or using bots or crawlers in production that learn the applications’ paths and types of data inputs and create accompanying preproduction tests. Unit tests can be created automatically by analyzing code and additional contextual information such as existing documentation.

  • Test self-healing: Automatically update test scripts by identifying changes in the application under test, such as updates to the UI or API, changes to the workflow or changes to the configuration. This reduces the amount of manual test rework labor required.
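The automatic script generation described above can be sketched as a small translator from manual-step phrasing to automation commands. The step patterns and the emitted Selenium-style calls are assumptions for illustration; real tools target specific frameworks and handle far more phrasings.

```python
import re

# Hypothetical mapping from manual-step phrasing to automation commands;
# real tools target Selenium, Playwright or a commercial framework.
PATTERNS = [
    (re.compile(r'click (?:the )?"(.+)"', re.I),
     'driver.find_element(By.LINK_TEXT, "{0}").click()'),
    (re.compile(r'type "(.+)" into (?:the )?"(.+)"', re.I),
     'driver.find_element(By.NAME, "{1}").send_keys("{0}")'),
    (re.compile(r'open (\S+)', re.I), 'driver.get("{0}")'),
]

def steps_to_script(steps: list[str]) -> list[str]:
    """Translate manual test steps into automation-script lines."""
    lines = []
    for step in steps:
        for pattern, template in PATTERNS:
            m = pattern.search(step)
            if m:
                lines.append(template.format(*m.groups()))
                break
        else:
            # Surface untranslatable steps for human review instead of guessing.
            lines.append(f"# TODO: could not translate: {step}")
    return lines

script = steps_to_script([
    "Open https://example.test/login",
    'Type "alice" into the "username" field',
    'Click the "Sign in" button',
])
```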

For test maintenance, AI aids testers by building a better model of the application under test, enabling clearer and more concise testing. If a test fails at runtime, AI-augmented tools can explore alternative ways to identify the faulty component or missing information. They can then fix the broken test with the updated information. This is often referred to as a self-healing capability.

Test Data Generation

Quality engineers and testers are increasingly applying AI to generate synthetic data for development and testing (see Generative AI for Synthetic Data). Synthetic data addresses data privacy issues and enables organizations to use production-like data in lower environments for identified test cases. AI-augmented test data generation is primarily model-based. However, tools can also learn from log files — especially when developing input data for the tests being run.
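A model-based sketch of this idea for two numeric columns: fit the means, standard deviations and correlation of production data, then sample new rows with the same statistical shape but no 1:1 link to any original row. The data and column choice are illustrative; real tools also handle mixed types, constraints and referential integrity.

```python
import math
import random

def fit_gaussian(rows):
    """Estimate means, standard deviations and correlation of 2-column data."""
    n = len(rows)
    mx = sum(r[0] for r in rows) / n
    my = sum(r[1] for r in rows) / n
    sx = math.sqrt(sum((r[0] - mx) ** 2 for r in rows) / n)
    sy = math.sqrt(sum((r[1] - my) ** 2 for r in rows) / n)
    corr = sum((r[0] - mx) * (r[1] - my) for r in rows) / (n * sx * sy)
    return mx, my, sx, sy, corr

def synthesize(rows, count, rng):
    """Sample synthetic 2-column rows with the fitted moments and correlation."""
    mx, my, sx, sy, r = fit_gaussian(rows)
    out = []
    for _ in range(count):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        x = mx + sx * z1
        # Correlated second column via the Cholesky factor of a 2x2 covariance.
        y = my + sy * (r * z1 + math.sqrt(1 - r * r) * z2)
        out.append((x, y))
    return out

rng = random.Random(0)
# Hypothetical production data: customer age vs. monthly spend.
production = [(age, 40 + 1.5 * age + rng.gauss(0, 5)) for age in range(20, 70)]
synthetic = synthesize(production, 2000, rng)
```

The synthetic rows reproduce the fitted correlation closely, so queries and tests that depend on that relationship behave realistically without exposing any real record.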

Visual Testing

An application may technically function while not rendering correctly in all instances. Thus, quality engineers need the ability to rapidly perform accurate visual tests across a wide range of OS versions, browsers and devices — especially for consumer-grade applications. AI can augment visual testing by using a variety of image recognition techniques that replicate a human looking at screens and then comparing them. Unlike traditional testing tools, AI-augmented visual testing does not require testers to define specific assertions for objects on a user interface, such as information displayed in a text field. For a well-designed system, these tools can provide “free” assertions for the entire page or application. Leading visual tools can also aid with testing for compliance with accessibility standards (see Market Guide for Digital Accessibility).
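For contrast with the AI-based approach, the sketch below shows the raw pixel-comparison baseline that visual AI improves on: it reports the fraction of differing pixels between two equally sized grayscale "screenshots" (here just nested lists, an assumption for illustration).

```python
def visual_diff(baseline, candidate, tolerance=0.0):
    """Fraction of pixels that differ between two equally sized screenshots."""
    if len(baseline) != len(candidate) or len(baseline[0]) != len(candidate[0]):
        raise ValueError("screenshots must have identical dimensions")
    total = len(baseline) * len(baseline[0])
    changed = sum(
        abs(b - c) > tolerance
        for row_b, row_c in zip(baseline, candidate)
        for b, c in zip(row_b, row_c)
    )
    return changed / total

# Two 3x4 grayscale 'screenshots'; one element of the page shifted right.
base = [[0, 0, 0, 0], [0, 9, 9, 0], [0, 0, 0, 0]]
cand = [[0, 0, 0, 0], [0, 9, 0, 9], [0, 0, 0, 0]]
diff_ratio = visual_diff(base, cand)
```

Raw pixel diffs like this flag every anti-aliasing or rendering quirk as a failure; visual-AI tools instead compare perceptual structure to suppress such noise.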

Test and Defect Analysis

When testing large applications and codebases, tools may find many issues in a single run. In these cases, a tester must determine what is really a bug, whether the test is “flaky” and how to reproduce the steps of the test. AI can assist in determining flaky tests and flag them for review. Another way to assist quality engineers and testers is by applying AI for static analysis and security testing. Vendors are also beginning to create tools that learn from test runs, and understand code changes and their impact. These tools can aid in assessing which tests should be changed or dropped, and which defects to focus on.
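The flaky-test heuristic can be sketched directly: a test that both passes and fails against the same code revision is flagged as flaky, while a test that fails consistently points to a real defect. The tuple format for run records is an assumption for illustration.

```python
from collections import defaultdict

def find_flaky_tests(runs):
    """Flag tests whose outcome varies across runs of the same code revision.

    `runs` is a list of (test_name, revision, passed) tuples. A genuine
    regression fails consistently for a revision; a flaky test both passes
    and fails with no code change in between.
    """
    outcomes = defaultdict(set)
    for name, revision, passed in runs:
        outcomes[(name, revision)].add(passed)
    return sorted({name for (name, _), seen in outcomes.items() if len(seen) > 1})

runs = [
    ("test_checkout", "abc123", True),
    ("test_checkout", "abc123", False),  # same revision, different outcome
    ("test_login", "abc123", False),
    ("test_login", "abc123", False),     # consistent failure: likely a real bug
]
flaky = find_flaky_tests(runs)
```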

Representative Vendors in AI-Augmented Software Testing

Vendor (HQ): Product names

ACCELQ (Dallas, Texas, U.S.): Automate Web, Automate API, Automate Mobile, ACCELQ Manual, ACCELQ Unified
Applitools (San Mateo, California, U.S.): Applitools Eyes, Applitools Ultrafast Grid, Applitools Native Mobile Grid, Applitools PreFlight
Appvance (Santa Clara, California, U.S.): Appvance IQ (AIQ)
aqua cloud (Cologne, Germany): AI-powered Quality Assurance
Avo Automation (Cincinnati, Ohio, U.S.): Avo Assure
Functionize (San Francisco, California, U.S.): Functionize
Katalon (Atlanta, Georgia, U.S.): Katalon
Keysight (Santa Rosa, California, U.S.): Eggplant Test
mabl (Boston, Massachusetts, U.S.): mabl
OpenText (Waterloo, Ontario, Canada): UFT One, UFT Developer, UFT Digital Lab, ValueEdge Functional Test
Parasoft (Monrovia, California, U.S.): Parasoft Selenic, Parasoft SOAtest, Parasoft JTest
Quinnox (Chicago, Illinois, U.S.): Qyrus
testRigor (San Francisco, California, U.S.): testRigor
Tricentis (Austin, Texas, U.S.): Tricentis Test Automation, Tricentis Testim, Tricentis Tosca, Tricentis LiveCompare, Tricentis Test Management for Jira, Tricentis qTest
UiPath (New York, New York, U.S.): UiPath Test Suite


Vendor Profiles

ACCELQ

ACCELQ offers AI-powered codeless test automation in the cloud, but is also available as a private cloud or on-premises deployment. ACCELQ provides a set of products that can test web, mobile, desktop and enterprise applications and APIs.

ACCELQ supports different roles including SDETs, test automation engineers and manual testers. A key focus is on enabling manual testers through a design-first approach and the use of an AI-powered recorder and natural language editors — allowing them to automate testing.

Examples of AI-augmentation include natural-language-driven, locator-free test authoring, interactive AI-powered assistants for in-sprint automation and support for converting manual tests to automated tests. ACCELQ also supports visual testing and provides self-healing of test scripts. It also provides test orchestration and prioritization, and defect prediction.

Applitools

Applitools offers Visual AI-powered UI test automation available through public cloud and private cloud deployment, and also through an on-premises option. The company provides a set of products that can test web apps, web mobile apps and native mobile apps.

Applitools supports different roles including developers, SDETs and test automation engineers. A key focus is on enabling developers through the use of Visual AI with Applitools Eyes, which can directly link into popular testing frameworks such as Selenium, Cypress, or Puppeteer.

Applitools has been a pioneer in Visual AI and the product set is built around that key feature. Visual validations also include checking for compliance or accessibility as well as verifying content in PDF files. Other examples of AI-augmentation include natural-language-driven test authoring and self-healing of test scripts.

Appvance

Appvance offers AI-powered test automation as public, private or hybrid cloud, or as an on-premises deployment. Their unified test platform, Appvance IQ (AIQ), enables testing of web, native mobile, hybrid apps and enterprise applications.

AIQ supports different roles including SDETs, test automation engineers and test managers. A key focus is on relieving testers from the task of creating automation scripts by using bots that explore possible paths through an application.

Examples of AI-augmentation include natural-language-driven test authoring, support for converting manual tests to automated tests, and the ability to generate tests from real user behavior. AIQ also supports AI-informed, trained exploratory testing and visual testing. It also provides self-healing of test scripts, test orchestration and prioritization, and defect prediction through the use of AI.

aqua cloud

aqua cloud offers AI-powered test management in the cloud, but is also available as a private cloud or as an on-premises deployment. The aqua platform supports the management of different types of artifacts including requirements, test cases and defects.

aqua cloud supports different roles including test automation engineers, test managers, business analysts and manual testers. A key focus is on augmenting testers through an AI assistant that generates test cases from requirements. Users can also interact with a chatbot to request QA suggestions for refining test cases.

Other examples of AI-augmentation include the creation of user stories and requirements based on speech input, interactive AI-powered assistants for refining test cases, and support for converting manual tests to automated tests. aqua cloud also supports test orchestration and prioritization, and defect prediction.

Avo Automation

Avo Automation offers capabilities for UI test automation and test data management in an AI-powered solution available as a public, private or hybrid cloud, or as an on-premises deployment. It enables testing of web, mobile and enterprise applications, as well as APIs.

Avo Automation supports different roles, from developers, SDETs, test automation engineers, test managers and business analysts to manual testers. A key focus is on visualizing end-to-end flows and the entire testing hierarchy to aid testers in designing test cases and to ease test maintenance via impact analysis capabilities.

Examples of AI-augmentation include natural-language-driven test authoring and visual testing. Avo Automation also provides self-healing of test scripts, test orchestration and prioritization, and defect prediction through the use of AI.

Functionize

Functionize offers an AI-powered test automation platform that is available across various deployment models, including cloud-based, private cloud and on-premises. The Functionize platform can test web and mobile web applications and APIs, including their performance.

Functionize supports testing teams of all technical aptitudes, benefiting an array of roles including SDETs, test automation engineers and test managers. A key focus is on augmenting testers through the automatic generation of test cases using a large GPT model. The model’s base training on extensive test datasets is supplemented by continuous learning from actual user behavior to ensure a high degree of accuracy and relevance in the generated test cases.

Functionize’s AI-driven testing capabilities also feature natural-language-driven test authoring, self-healing test cases, AI-powered interactive assistants, and efficient conversion of manual tests into automated scripts.

Katalon

Katalon offers an AI-powered testing platform with deployment models across public cloud, private cloud and on-premises. The Katalon platform can test web, mobile, desktop, and enterprise applications and APIs.

Katalon supports different roles including SDETs, test automation engineers and test managers. A key focus is on augmenting testers through an AI-powered coding companion that can generate context-based code suggestions, and the ability to provide detailed descriptions of existing code.

Other examples of AI-augmentation include natural-language-driven test authoring, support for converting manual tests to automated tests, and the ability to generate tests from real user behavior. Katalon also supports visual testing and provides self-healing of test scripts, test orchestration and prioritization, and defect prediction.
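
Test orchestration and prioritization, a capability cited across these profiles, typically ranks tests using execution history so the most fault-revealing tests run first. A generic sketch of failure-history-based prioritization (the scoring scheme is an invented illustration, not Katalon's or any vendor's algorithm):

```python
# Hedged sketch of history-based test prioritization: rank tests by
# recent failure rate, weighting newer outcomes more heavily.

def prioritize(history):
    """history: test name -> list of recent outcomes (True = failed),
    newest outcome last. Returns names sorted by descending risk."""
    def score(outcomes):
        # exponentially growing weights, so the newest outcome counts most
        return sum(failed * 2 ** i for i, failed in enumerate(outcomes))
    return sorted(history, key=lambda t: score(history[t]), reverse=True)

history = {
    "test_checkout": [False, True, True],   # failing recently
    "test_login":    [True, False, False],  # failed long ago
    "test_search":   [False, False, False], # stable
}
print(prioritize(history))  # → ['test_checkout', 'test_login', 'test_search']
```

Commercial implementations fold in additional signals, such as code-change impact and coverage, but the ordering idea is the same.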

Keysight

Keysight offers Eggplant Test, a model-based, AI-powered testing platform with deployment models across public cloud, private cloud and on-premises. Eggplant Test can test web, mobile, desktop and enterprise applications, as well as APIs and visually complex UIs.

Keysight Eggplant Test supports different roles including SDETs, test automation engineers and test managers. A key focus is on augmenting testers through a model-based testing approach for generating different test scenarios based on real interactions. This allows for easier testing of complex systems, workflows or user journeys across platforms.

Other examples of AI-augmentation include interactive AI-powered assistants and support for converting manual tests to automated tests. Keysight also supports visual testing and provides self-healing of test scripts, test orchestration and prioritization, and defect prediction.

mabl

mabl offers a low-code test automation solution in the public cloud with AI capabilities focused on improving test reliability and reducing the effort needed to maintain tests. The mabl platform provides support for testing web apps, mobile apps and APIs, as well as accessibility and performance testing.

mabl supports different roles including test automation engineers, test managers and manual testers. A key focus is on augmenting testers through AI-powered auto-healing capabilities, reducing the time they spend fixing broken tests.

Other examples of AI-augmentation include support for visual testing and test orchestration and prioritization. mabl also provides an intelligent wait capability that incorporates historical application performance into the timing of actions within tests. A page-coverage feature uses machine learning to cluster similar application URLs and more effectively prioritize tests.

OpenText

OpenText provides a set of solutions as part of the UFT Family of products acquired from Micro Focus in 2023. The UFT products can test web, mobile, desktop, and enterprise applications and APIs. Complementary to its UFT Family solutions, ValueEdge Functional Test was launched in 2023 as an AI-powered, cloud-based solution for functional testing.

OpenText supports different roles including developers, test automation engineers, test managers and business analysts. A key focus is on augmenting testers and optimizing test coverage, using a model-based testing approach to generate different combinations of business processes to test based on current needs.

Other examples of AI-augmentation include natural-language-driven test authoring and visual testing. OpenText also provides self-healing of test scripts, test orchestration and prioritization, and defect prediction.

Parasoft

Parasoft offers AI-powered testing solutions on-premises, through a private cloud, or via Amazon and Microsoft Azure images. The company provides products for testing APIs and web applications, and for static code analysis, unit testing, security testing and service virtualization.

Parasoft supports different roles including developers, test automation engineers and test managers. A key focus is on enabling developers through AI-infused support for unit testing, API test and web UI test generation.

With a long history in API testing, Parasoft uses AI to construct a series of API tests that represent the underlying interface calls made when interacting with an application. Other examples of AI-augmentation include natural-language-driven test authoring and interactive AI-powered assistants for test development. Parasoft also provides self-healing of test scripts, and test orchestration and prioritization.
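
The traffic-to-test technique can be illustrated as mapping recorded request/response pairs to replayable test cases with assertions. A deliberately minimal sketch with hypothetical captured calls (Parasoft's actual analysis, which correlates UI interactions with underlying interface calls, is far richer):

```python
# Sketch of deriving API tests from recorded traffic: turn each captured
# request/response pair into a replayable test asserting the observed status.

recorded = [  # hypothetical captured interactions
    {"method": "GET",  "path": "/api/items",   "status": 200},
    {"method": "POST", "path": "/api/orders",  "status": 201},
    {"method": "GET",  "path": "/api/items/7", "status": 200},
]

def generate_tests(interactions):
    """Emit one test case per recorded call, asserting the observed status."""
    tests = []
    for i, call in enumerate(interactions):
        name = f"test_{call['method'].lower()}_{i}"
        tests.append({
            "name": name,
            "request": {"method": call["method"], "path": call["path"]},
            "expect": {"status": call["status"]},
        })
    return tests

for t in generate_tests(recorded):
    print(t["name"], t["request"]["path"], "->", t["expect"]["status"])
```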

Quinnox

Quinnox offers Qyrus, an AI-powered, codeless, end-to-end automated software testing platform in the cloud. Qyrus has AI capabilities focused on reducing the effort needed to create and maintain tests. It provides support for testing web and mobile apps and APIs.

Qyrus supports different roles including developers, test automation engineers, test managers, business analysts and manual testers. A key focus is on enabling a diverse team through a streamlined user experience for testing end-to-end digital business processes.

Examples of AI-augmentation include natural-language-driven test authoring, interactive AI-powered assistants and support for converting manual tests to automated tests. The Qyrus platform also supports visual testing, and provides self-healing of test scripts and test orchestration and prioritization.

testRigor

testRigor offers AI-powered codeless test automation in the cloud, but is also available as a private cloud or an on-premises deployment. The testRigor platform provides support for testing web, mobile and desktop applications, APIs and databases.

testRigor supports different roles including developers, SDETs, test automation engineers, manual testers and product managers. A key focus is on enabling testers by focusing on the user’s perspective instead of the implementation details of the application under test.

Examples of AI-augmentation include natural-language-driven test authoring, an interactive, generative-AI-powered assistant for translating high-level instructions into specific test steps, and support for converting manual tests to automated tests. testRigor also supports visual testing and provides self-healing of test scripts.

Tricentis

Tricentis offers a range of AI-powered products for testing. These include several products for test automation: Tricentis Tosca provides end-to-end testing of enterprise applications, Tricentis Testim covers customer-facing applications, and Tricentis Test Automation covers SaaS-based web app testing. Tricentis LiveCompare offers risk-AI-based impact analysis and change detection, while Tricentis Test Management for Jira and Tricentis qTest both offer AI-powered test-case generation.

Tricentis supports different roles including test automation engineers, test managers and manual testers. As more organizations move from on-premises to cloud, a key focus is on enabling testers through a modern, AI-powered, SaaS-based testing platform.

Examples of AI-augmentation include natural-language-driven test authoring, creation of tests from requirements or descriptions using few-shot learning, and visual testing. Tricentis also supports self-healing of test scripts, test orchestration and prioritization, and defect prediction.
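
Few-shot creation of tests works by prepending a handful of requirement-to-test examples to the prompt so the model imitates the pattern. A minimal, tool-agnostic sketch of prompt assembly (the example pairs and wording are invented for illustration and say nothing about Tricentis's internals):

```python
# Minimal sketch of few-shot prompting for test generation: a couple of
# requirement -> test examples teach the model the expected output format.

EXAMPLES = [  # hypothetical requirement/test pairs used as few-shot examples
    ("User can log in with valid credentials",
     "1. Open login page\n2. Enter valid credentials\n3. Assert dashboard loads"),
    ("User sees an error for a wrong password",
     "1. Open login page\n2. Enter wrong password\n3. Assert error is shown"),
]

def build_prompt(requirement):
    """Assemble instruction + examples + the new requirement into one prompt."""
    parts = ["Convert each requirement into manual test steps.\n"]
    for req, steps in EXAMPLES:
        parts.append(f"Requirement: {req}\nTest:\n{steps}\n")
    parts.append(f"Requirement: {requirement}\nTest:")
    return "\n".join(parts)

prompt = build_prompt("User can reset a forgotten password")
print(prompt)
```

The prompt would then be sent to a large language model, which completes the final "Test:" section in the style of the examples.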

UiPath

Best known for its business automation platform, UiPath offers an AI-powered product suite consisting of four products: UiPath Test Manager, UiPath Studio, UiPath Orchestrator and UiPath test robots. It supports testing of applications built with any technology including web, mobile and desktop.

UiPath supports different roles including developers, test automation engineers, test managers and business analysts. A key focus is on enabling a diverse team by providing a seamless experience to plan, design, build, run and manage automated testing.

Examples of AI-augmentation include natural-language-driven test authoring, interactive AI-powered assistants for creating tests from requirements, and support for converting manual tests to automated tests. UiPath also supports self-healing of test scripts, test orchestration and prioritization, test insights, defect prediction and model management.

Market Recommendations

Software engineering leaders should:

  • Start evaluating AI-augmented testing tools now to understand the current possibilities and limitations of these products. Don't wait for the perfect solution, as this is a rapidly evolving market and new capabilities will emerge as vendors try to capture market share. Create a path from piloting to implementing AI-augmented testing tools, and build a roadmap to solve the development organization's most pressing testing challenges.

  • Increase the value of AI-augmented testing tools by exploring additional use cases beyond core test automation scenarios, which limit automation primarily to the execution of tests. For example, look for shift-left scenarios such as generating test scenarios from requirements or from user stories and contextual information contained within the codebase (including code and documentation). Another promising use case is improving the quality of requirements through assistant-powered interactions that suggest more detailed and specific language for both functional and nonfunctional aspects (see Quick Answer: 10 Key Nonfunctional Software Quality Characteristics).

  • Pave the way for long-term organizational success by providing shared visibility into quality and testing processes. Engage business, I&O and site reliability engineering (SRE) teams in addition to quality engineers. This maximizes the value you can derive from your AI-augmented testing tool investment. Use tool-adoption metrics, improvement in software quality and user satisfaction survey feedback to assess the usefulness of the tools.