James Walker
Director
Curiosity Software Ireland

James holds a PhD in data visualisation and machine learning, in the field of visual analytics, a topic that combines human problem-solving skills with the vast processing power of computers. James has given talks worldwide on the application of visual analytics and has published several articles in high-impact journals. He has since applied these ground-breaking approaches to QA, first as a senior software engineer at CA, focusing on Model-Based Testing and Test Data Management, and now as director of Curiosity Software Ireland. James is the lead developer of several Curiosity tools and has collaborated with numerous organizations to solve their software delivery challenges.

Presenting

AI can give your automation framework some brains. The question is: “where, and how?”

Do you remember when test execution automation was going to “fix” testing? Such was its promise, usually made by vendors rather than practitioners themselves. Fast forward five years or so, and widespread attempts to adopt automation frameworks have sparked a reality check: test automation frameworks introduce a new set of bottlenecks and quality challenges. But don’t worry, now AI is going to “fix” test automation. Long live AI!
There are a few problems with this for DevTest practitioners:

  1. Commercial AI technologies are still emergent, if not embryonic. They are yet to be tried and tested in the field, and we still don’t know what “works” for testing.
  2. The language of AI has furthermore flooded the market and is often merely stamped onto existing technologies. It is therefore unclear what “AI” is in testing and how exactly it relates to previous testing technologies and techniques.
  3. Intelligent testing is often discussed in terms of promise and not practice. It is unclear how organisations can move from current tools and technologies to introducing valuable AI. We can’t just “buy a box of AI”, so how do we begin to incorporate it into DevTest toolchains?

This session aims to take the opposite approach. It seeks to begin from current test automation techniques and tools, assessing their drawbacks in order to identify the most valuable places to introduce AI. It then goes one step further, considering technologies that can immediately build on today’s test automation frameworks, remedying the perennial challenges of manual scripting and cumbersome test maintenance. This discussion will include:

  1. Application scanners and central data dictionaries that re-use existing automation code to generate new tests.
  2. Synthetic test data generation that auto-identifies the data functions needed to test an element (illustrated in the first sketch after this list).
  3. One-to-many mapping of page objects and identifiers that allows tests to keep running after the system changes.
  4. “Self-healing” identifiers that remove the need for human test maintenance after every system change; the second sketch after this list illustrates both of these locator techniques.
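
To make the second point concrete, here is a minimal sketch of constraint-driven synthetic data generation in Python. The field attributes it reads are standard HTML, but the mapping rules and the `generate_value` helper are hypothetical illustrations of the idea, not any vendor’s actual engine.

```python
import random
import string

def generate_value(field):
    """Pick a data function for one form field based on its attributes.

    `field` is a dict of HTML attributes, e.g. extracted by an
    application scanner. The attribute names are standard HTML; the
    mapping rules below are illustrative assumptions.
    """
    ftype = field.get("type", "text")
    if ftype == "email":
        return f"user{random.randint(1, 9999)}@example.com"
    if ftype == "number":
        lo = int(field.get("min", 0))
        hi = int(field.get("max", 100))
        return str(random.randint(lo, hi))
    # Default: random text no longer than any declared maxlength.
    max_len = int(field.get("maxlength", 12))
    return "".join(random.choices(string.ascii_letters, k=max_len))

# Example: fields an application scanner might have extracted.
fields = [
    {"name": "email", "type": "email"},
    {"name": "age", "type": "number", "min": "18", "max": "65"},
    {"name": "nickname", "type": "text", "maxlength": "8"},
]
row = {f["name"]: generate_value(f) for f in fields}
print(row)  # one synthetic data row matching the field constraints
```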
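Similarly, the third and fourth points can be sketched as a page object that maps one logical element to many candidate locators, promoting a surviving fallback when the preferred locator breaks. This assumes Selenium WebDriver; the `HealingElement` class and its simple promotion heuristic are illustrative assumptions, not a description of any particular product.

```python
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

class HealingElement:
    """One logical page object mapped to many candidate locators.

    Locators are tried in order; when the preferred one stops matching
    (e.g. an id changed), a working fallback is promoted to primary, so
    the test keeps running and the "healed" locator is logged for
    review. The promotion rule here is a deliberately simple assumption.
    """

    def __init__(self, name, locators):
        self.name = name
        self.locators = list(locators)  # e.g. [(By.ID, "submit"), ...]

    def find(self, driver):
        for i, (by, value) in enumerate(self.locators):
            try:
                element = driver.find_element(by, value)
            except NoSuchElementException:
                continue
            if i > 0:
                # Self-heal: promote the working fallback to primary.
                print(f"[heal] {self.name}: promoted {(by, value)}")
                self.locators.insert(0, self.locators.pop(i))
            return element
        raise NoSuchElementException(f"No locator matched {self.name}")

# Usage (assumes a configured WebDriver named `driver`):
# login = HealingElement("login button", [
#     (By.ID, "login-btn"),
#     (By.CSS_SELECTOR, "button[type='submit']"),
#     (By.XPATH, "//button[contains(., 'Log in')]"),
# ])
# login.find(driver).click()
```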