Webinar 6: Automate Smarter, Test Faster: AI-Powered Testing (OpenText Aviator)

About Company/Product

Company: OpenText (formerly Micro Focus)

Primary Solutions Referenced:

  • ValueEdge (software delivery platform integrating DevOps, QA, and security)

  • Aviator (OpenText's brand for generative AI capabilities)

Objective of the Webinar

  • The primary goal of this webinar is to explore how AI (both generative and non-generative) can radically transform software testing and DevOps processes. It covers:

  • The evolution of AI from classical (machine-learning-based) methods to Generative AI.

  • Real-world scenarios illustrating how AI accelerates test creation, maintenance, execution, and performance analysis.

  • A forward-looking perspective on how AI will redefine roles, workflows, and ROI in enterprise software delivery.

Presenters

Presenter: Don Jackson, Field Chief Technologist at OpenText

Expertise: DevOps, Software Testing, AI integrations

Role: Outlined the vision for AI-driven software delivery, including testing, code creation, and risk-based coverage.

Brief Summary of the Webinar

AI and Its Branches

  • Don Jackson introduced AI as a broad field, distinguishing generative AI (e.g., large language models like ChatGPT or Gemini) from non-generative AI (e.g., computer vision, root cause analysis, predictive analytics).

Vision for AI in Testing

  • The webinar laid out a future in which AI-driven solutions generate user stories, test ideas, test automation scripts, and even operational plans automatically. Humans review, refine, and approve.

Current Practical Implementations

  • Using computer vision to enable scriptless test automation.

  • Harnessing LLM-based solutions to generate or refine test cases, code, and documentation.

  • Integrating AI for root cause analysis, risk-based test selection, and predictive analytics (e.g., for performance bottlenecks).

Ethical and Security Considerations

  • Emphasis on data privacy, intellectual property, and preventing hallucinations by employing a structured approach and strong moderation.

Features and Technical Aspects

  • ValueEdge: A holistic software delivery platform that integrates DevOps, QA, and security:

    • Traceability and data-lake approach for artifacts and metrics.

    • Orchestration of various tools, pipelines, and processes.

  • Aviator: OpenText’s brand for Generative AI:

    • Context-aware LLM usage: retrieval-augmented generation (RAG) for prompt engineering.

    • Code creation, test generation, user-story creation, documentation generation.

    • Automated operational scripts for RPA, performance, or security tasks.

  • Computer Vision & Non-Generative AI

    • Image-based object detection to handle dynamic UI changes.

    • Predictive analytics to identify potential performance or security issues before test execution.
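The RAG approach mentioned above can be sketched as a retrieval step that grounds the LLM prompt in project artifacts before asking for test ideas. The word-overlap scoring and prompt layout below are illustrative assumptions, not Aviator's actual implementation (a production system would typically use vector embeddings):

```python
# Minimal retrieval-augmented generation (RAG) sketch: rank project
# artifacts by word overlap with the task, then build a grounded prompt.
# Scoring and prompt format are simplified assumptions for illustration.

def score(query: str, doc: str) -> int:
    """Count words shared between the query and a document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def build_prompt(query: str, artifacts: list[str], top_k: int = 2) -> str:
    """Prepend the top-k most relevant artifacts as context."""
    ranked = sorted(artifacts, key=lambda d: score(query, d), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Context:\n{context}\n\nTask: {query}"

artifacts = [
    "User story US-101: login must lock the account after 3 failed attempts.",
    "Release notes: dark mode added to the settings screen.",
    "Requirement R-7: account lock must reset after 15 minutes.",
]
prompt = build_prompt("Generate test cases for login account lock behavior", artifacts)
# Only the two lock-related artifacts end up in the prompt context.
```

Grounding the prompt this way is what keeps generated tests tied to real requirements rather than the model's general knowledge.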

Job Specifications in the Future

Job Evolution:

  • Testers become prompt engineers or AI supervisors, focusing on test design, ethical oversight, and creative problem-solving.

  • Developers rely on AI for routine coding tasks, dedicating more time to architecture and innovation.

  • Operations staff handle AI-driven “intelligent operations” rather than manual monitoring.

Comparison with a Similar Product: GoTestPro

GoTestPro is similarly positioned as an AI-based testing platform. Possible differentiators or ways GoTestPro could compete:

Niche vs. Broad Platform:

  • If GoTestPro focuses primarily on test automation, ValueEdge covers a full DevOps scope (portfolio management, code, test, security, ops).

  • GoTestPro might differentiate by offering simpler, more specialized solutions with fewer dependencies.

Generative AI Depth:

  • Evaluate how GoTestPro’s generative approach handles edge cases, code generation, and computer vision.

  • Compare coverage and flexibility of test creation with ValueEdge’s approach.

Integration Ecosystem:

  • GoTestPro can emphasize easy integration with existing pipelines and other specialized tools.

  • ValueEdge offers a single, unified ecosystem spanning the entire DevOps lifecycle.

Enterprise Scalability:

  • If GoTestPro offers strong scaling or domain-specific solutions (e.g., BFSI, healthcare), that could be a competitive advantage.

Security and Ethical AI:

  • Compare how each vendor addresses data privacy, hallucinations, and audit trails for regulated environments.

Additional Important Points

  • Addressing Hallucinations: Proper prompt engineering and an internal logic layer are key to preventing AI from producing incorrect or fabricated results.

  • Large-Scale Transformations: AI is predicted to impact entire organizations, from ideation to deployment, including compliance, security, and operational intelligence.

  • Focus on Real-World Execution: Multiple examples were given demonstrating how automated test generation and maintenance help teams handle changing UIs, dynamic test data, and integrated DevOps workflows.

Automate Smarter, Test Faster: AI-Powered Testing - Demo Screenshots

opentext-1-10-20250314-104505.png
opentext-2-10-20250314-104601.png
opentext-4-10-20250314-104917.png
opentext-5-10-20250314-105055.png
opentext-7-10-20250314-110300.png
opentext-9-10-20250314-110701.png
opentext-10-10-20250314-111246.png

 

Question and Answer:

1. How can AI accelerate test automation and improve efficiency?

  • Answer: AI automates test case generation, script creation (e.g., Playwright), and maintenance, reducing manual effort by up to 55%. It also identifies gaps in test coverage and optimizes test execution, enabling faster delivery cycles (e.g., reducing 6-12 months to 6-12 days).

2. What are the risks of using generative AI in test automation?

  • Answer: Top risks include:

    1. Inaccuracy/hallucinations (AI generating incorrect test scripts).

    2. Cybersecurity vulnerabilities (e.g., insecure code).

    3. Intellectual property leakage (data privacy concerns).

    4. Regulatory non-compliance (e.g., in highly regulated industries).

3. How does AI handle dynamic UI elements (e.g., data grids, charts)?

  • Answer: AI uses computer vision to identify elements visually (e.g., icons, text) rather than relying on DOM properties, making tests resilient to UI changes. For dynamic components (e.g., sorting data grids), AI can parameterize locators or use visual cues.
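The DOM-to-visual fallback described here can be sketched as a two-tier lookup. The dictionary-based UI model and label matching below are hypothetical simplifications; a real computer-vision engine matches rendered pixels, not attributes:

```python
# Self-healing locator sketch: try the recorded DOM selector first,
# then fall back to a visual cue (the label the user sees) when the
# DOM property has changed after a UI revamp.

def find_element(ui, selector, visual_label):
    """Return (element, how-found) via DOM id or healed label match."""
    for el in ui:
        if el.get("id") == selector:
            return el, "dom"
    for el in ui:  # fallback: match what the user *sees*
        if el.get("label") == visual_label:
            return el, "healed"
    return None, "not-found"

# After a UI revamp the button id changed, but its visible label did not.
ui_after_change = [
    {"id": "btnSubmit_v2", "label": "Submit Order"},
    {"id": "btnCancel", "label": "Cancel"},
]
el, how = find_element(ui_after_change, "btnSubmit", "Submit Order")
# how == "healed": the test keeps running despite the locator change
```

The same idea is why visually anchored tests survive DOM refactors that break traditional locator-based scripts.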

4. Can AI replace manual testers or automation engineers?

  • Answer: No. AI augments roles by handling repetitive tasks (e.g., test generation, maintenance), while humans focus on strategic oversight, creativity, and reviewing AI outputs for accuracy.

5. How accurate are AI-generated test scripts?

  • Answer: ~80% overlap with human-created tests in beta studies. AI may miss 10% of edge cases but also identifies 10% of scenarios humans overlook. Human review is critical for validation.

6. What inputs does AI need to generate tests?

  • Answer: Feature specifications, requirements (PDFs, JIRA tickets), and contextual documents (e.g., user manuals). Rich inputs improve accuracy.

7. How does AI ensure test data coverage?

  • Answer: AI analyzes data definitions and application screens to generate varied test data patterns, covering edge cases and input scenarios automatically.
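Boundary-oriented data generation of this kind can be sketched as deriving edge values from a field definition. The min/max-length schema below is an assumed input format for illustration:

```python
# Test-data pattern sketch: derive boundary and edge-case string values
# from a field's length constraints, as an AI data generator might for
# a form input.

def boundary_values(min_len: int, max_len: int) -> list[str]:
    """Empty string plus just-below/at each length boundary and one above."""
    lengths = {0, min_len - 1, min_len, max_len, max_len + 1}
    return ["x" * n for n in sorted(n for n in lengths if n >= 0)]

# Hypothetical password field: 12-64 characters allowed
cases = boundary_values(12, 64)
lens = [len(c) for c in cases]
# lens == [0, 11, 12, 64, 65]: two invalid-short, min, max, invalid-long
```

Generating values at and just past each boundary is the classic boundary-value-analysis pattern the answer alludes to.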

8. Can AI update tests for UI changes dynamically?

  • Answer: Yes. AI regenerates scripts for major changes (e.g., UI revamps) and uses self-healing (computer vision) for minor locator updates.

9. What AI models power these tools (e.g., ChatGPT, custom LLMs)?

  • Answer: Tools may use GPT-4, Gemini, or custom LLMs. Enterprises can integrate internal models for data privacy.

10. How does AI address security and privacy concerns?

  • Answer: By using stateless LLM interactions, on-prem deployments, and strict data separation. AI also avoids generating insecure code (e.g., SQL injection vulnerabilities).
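The insecure-code concern (e.g., SQL injection) usually comes down to string-built queries; generated code should bind parameters instead. A minimal sketch with Python's standard sqlite3 module, using a hypothetical `users` table:

```python
# Parameterized queries vs. string concatenation: the "?" placeholder
# lets the driver treat user input as data, never as SQL, so an
# injection payload simply fails to match anything.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

malicious = "alice' OR '1'='1"  # classic injection attempt

# Safe: the whole payload is compared as a literal name, so no row matches.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()
# rows == []  -> the injection attempt retrieved nothing
```

Had the query been built with string concatenation, the same payload would have matched every row; this is exactly the class of generated code an AI guardrail (or a human reviewer) should reject.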

11. Does AI support API or mobile testing?

  • Answer:

    • API: Possible (e.g., via Playwright), but not a primary focus yet.

    • Mobile: Planned integration with no-code mobile testing platforms.

12. What skills should QA engineers learn for the AI era?

  • Answer:

    • Prompt engineering to refine AI outputs.

    • Understanding hallucinations and LLM limitations.

    • Model-based testing and computer vision concepts.

    • Strategic oversight (reviewing AI-generated tests).

13. How does AI handle ambiguous requirements?

  • Answer: Future updates will flag missing/ambiguous requirements by cross-referencing generated tests with input docs (e.g., Figma mockups).

14. What’s the future of AI in test automation?

  • Answer: Autonomous testing, AI-driven test maintenance, and integration with DevOps for end-to-end quality gates. Human roles will shift to oversight and creativity.