Quality Assurance Engineers can evolve into artificial intelligence (AI) strategists, guiding AI-driven test execution while focusing on strategic decisions. According to Victor Ionascu, rather than replacing testing roles, AI can enhance them by predicting defects, automating test maintenance, and refining risk-based testing. This human-AI collaboration is crucial for maintaining quality in increasingly complex software systems.
Victor Ionascu gave a talk about the role of artificial intelligence in quality assurance and software testing at QA Challenge Accepted.
QA professionals are increasingly turning to AI to address the growing complexities of software testing, Ionascu said. AI-driven automation can improve test coverage, reduce test cycle times, and enhance the accuracy of results, leading to faster software releases with higher quality, as he explained in the InfoQ article Exploring AI’s Role in Automating Software Testing.
Ionascu mentioned that he’s using AI tools like GitHub Copilot, Amazon CodeWhisperer, and ChatGPT. One of the key benefits, once you understand how to use AI effectively, is a noticeable improvement in efficiency, as he explained:
For example, with Copilot, instead of manually searching for whether a particular class or function exists, the AI automatically suggests relevant code snippets in real-time. This accelerates the development process and helps me focus more on refining and improving the logic behind the tests.
Tools like ChatGPT have proven to be invaluable for general research and guidance, Ionascu said. Instead of spending time searching through multiple sources, he uses it as a powerful assistant that provides quick insights and suggestions during the automation process. It helps reduce the time needed for researching complex testing scenarios or frameworks, which ultimately speeds up the development of robust test scripts, he mentioned.
While AI offers tremendous potential, it is not without limitations, Ionascu stressed. It lacks the contextual understanding and human intuition required for tasks like exploratory testing and non-functional testing (e.g., performance and security), he mentioned.
The future of testing with AI will see QA professionals evolving into AI strategists, where AI tools will handle much of the execution and maintenance of automated tests, Ionascu said. AI will enable adaptive, self-healing tests that evolve with the application, reducing the overhead for QA teams, he added.
Ionascu expects AI to also improve in areas like predictive defect detection:
AI can analyze historical data to identify high-risk areas before they become critical issues.
In the long term, AI will not replace QA roles but will augment human capabilities, allowing teams to focus on strategic, high-value tasks like quality strategy, exploratory testing, and risk-based testing, Ionascu said. The key will be the partnership between AI and human oversight, where AI handles execution, and humans drive creativity and strategy, he concluded.
InfoQ interviewed Victor Ionascu about applying AI for software testing.
InfoQ: What are the limitations of AI in testing?
Victor Ionascu: While it excels at automating repetitive tasks, AI still struggles with contextual understanding of complex, domain-specific workflows. AI-generated tests may require manual refinement to ensure completeness and accuracy, especially for non-functional requirements like performance and security testing. And AI lacks human intuition, which is crucial for exploratory testing and discovering edge cases that are difficult to automate.
InfoQ: Can you give an example of a test case where human intuition made the difference?
Ionascu: An example of an edge case would be testing invisible or zero-width characters in passwords.
Scenario: A user enters a password that appears valid but contains zero-width spaces or non-printable Unicode characters (e.g., U+200B Zero Width Space, U+200C Zero Width Non-Joiner).
- Example password input (user perspective): P@ssw0rd (looks normal)
- Actual password (hidden characters): P@ssw0rd (contains a zero-width space between P and @)
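The mismatch can be illustrated in a few lines of Python (a sketch, not from the talk; the zero-width character is written as an escape sequence here, since it is invisible when rendered):

```python
# Two passwords that render identically, but one contains a
# ZERO WIDTH SPACE (U+200B) between "P" and "@".
visible = "P@ssw0rd"
hidden = "P\u200b@ssw0rd"  # looks the same on screen

print(visible == hidden)          # False
print(len(visible), len(hidden))  # 8 9
```

The strings pass a visual inspection, yet an equality check or login attempt against the stored credential fails.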
Automation using AI will miss this, because:
- Automated tests typically check for length, required characters, and structure but may not detect hidden characters.
- Most test automation frameworks treat these as valid input since they don’t visually alter the string.
- Traditional regex-based validation rules fail unless they explicitly check for invisible Unicode characters.
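An explicit check for such characters is straightforward once you know to look for them. A minimal sketch in Python (the function name is illustrative, not from the talk), using the fact that zero-width space (U+200B) and zero-width non-joiner (U+200C) fall in the Unicode "format" category (`Cf`):

```python
import unicodedata

def contains_invisible(s: str) -> bool:
    # Flag Unicode "format" characters (category Cf), which covers
    # zero-width space (U+200B), zero-width non-joiner (U+200C), etc.
    return any(unicodedata.category(ch) == "Cf" for ch in s)

print(contains_invisible("P@ssw0rd"))        # False
print(contains_invisible("P\u200b@ssw0rd"))  # True
```

A length-and-structure check passes both strings; only the explicit category scan catches the hidden character.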
Humans using AI can discover this in two ways:
- Human Tester Insight: Manually pasting a password copied from an external document (e.g., Google Docs, emails) can reveal login failures due to hidden characters.
- AI-Assisted Detection: AI-powered anomaly detection can compare expected login behavior with failed attempts where passwords "look correct" but fail.
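Once discovered, a team could fold such a guard into validation itself. A minimal sketch, assuming a server-side check that rejects input outright (the character list and the length rule are illustrative assumptions, not from the talk):

```python
import re

# Reject common invisible characters: zero-width space/non-joiner/joiner,
# word joiner, and the byte-order mark.
INVISIBLE = re.compile(r"[\u200b\u200c\u200d\u2060\ufeff]")

def validate_password(raw: str) -> bool:
    if INVISIBLE.search(raw):
        return False  # hidden characters present: reject the input
    return len(raw) >= 8  # placeholder for the usual structural rules

print(validate_password("P@ssw0rd"))        # True
print(validate_password("P\u200b@ssw0rd"))  # False
```

Rejecting rather than silently stripping the characters keeps the user's stored credential identical to what they believe they typed.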
Testing this has a significant impact. Users may struggle with login failures without understanding why. It can also be exploited for phishing attacks (e.g., registering Password123 containing a hidden zero-width character and tricking users into thinking it's the visually identical Password123).