froglogic’s Squish for management and execution of automated GUI tests

Squish is a tool developed by froglogic for the creation, development, management and execution of automated GUI tests for desktop, mobile and embedded applications.

Squish supports a rich and varied set of UI toolkits, including Qt, Java AWT/Swing, native macOS, iOS and Android, as well as HTML5 web applications running in a variety of web browsers. Tests created with Squish are cross-platform, with support for Windows, Linux/Unix, macOS and embedded systems.

A major factor when selecting the right tool for your automated testing needs is its object recognition capability: whether it is Object-based (also called property-based), Image-based or, better, both. Squish supports multiple object recognition methods, including Object-based, Image-based and accessibility-based.

This article focuses on the first two, exploring the advantages of each and discussing scenarios where one is the preferred choice.

Object-based and Image-based recognition

Object-based recognition captures key properties of each UI control on which an action is performed, as well as the action itself. The captured properties are stored in a repository (often called an object repository or object map) under a unique name derived from the object.

Any test can use the repository, and in some tools, the repository is available across more than one test suite or collection of tests.

Subsequently recorded tests check for the existence of the object in the repository and add a new entry only if none exists. Image-based recognition instead captures an image and an action for each step performed. In some tools, variations of the images can be captured to overcome minor graphical changes that would otherwise cause the tool to fail to locate the object(s) of interest.
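The repository mechanics described above can be sketched in a few lines of Python. This is a deliberately simplified model (a dict keyed by symbolic names), not Squish's actual API; the function and property names here are hypothetical:

```python
# Minimal sketch of an object repository (object map).
# Assumption: each control is modelled as a dict of properties.

repository = {}  # symbolic name -> captured properties


def register(control):
    """Store a control's key properties under a unique symbolic name,
    adding a new entry only if one does not already exist."""
    name = f"{control['type']}_{control['label']}"
    if name not in repository:           # later recordings reuse the entry
        repository[name] = dict(control)
    return name


def locate(name, live_controls):
    """Find the on-screen control whose properties match the stored entry."""
    wanted = repository[name]
    for control in live_controls:
        if all(control.get(k) == v for k, v in wanted.items()):
            return control
    raise LookupError(f"no control matches {name!r}")


# Recording a click on a button captures its properties once:
save_button = {"type": "Button", "label": "Save", "enabled": True}
symbolic = register(save_button)

# Playback locates the control by its properties, not by pixels or coordinates:
screen = [{"type": "Button", "label": "Save", "enabled": True}]
assert locate(symbolic, screen) == save_button
```

Because playback matches on properties, the control may move or be restyled without affecting the lookup, which is the robustness advantage discussed below.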

Object-based testing is the preferred choice for most use cases of UI testing tools. Advantages include:

  • Lowered effort required for test maintenance. That is, tests created with Object-based recognition methods respond and adapt better to change than their Image-based counterparts. As a result, effort and time to maintain tests long-term, in the presence of UI changes, are dramatically reduced. Longevity of tests also increases.
  • Increased stability and robustness of tests. UI controls are located by their properties, which are stored in an object repository. If the control changes in appearance or location, the test remains unchanged. Similarly, if a component receives an alternate name, the engineer needs only to update the label property associated with the component in the repository. Again, the tests remain untouched.

There are clear advantages to Image-based testing, however. These include:

  • Fully realised end user experience. That is, verification occurs through the same images seen by the end user during normal software use.
  • Lowered learning curve. For engineers without in-depth programming or scripting knowledge, Image-based tests may be easier to develop than their Object-based counterparts.
  • Access to 2D/3D graphics or plots. That is, objects not otherwise recognised by Object-based methods can be accessed using Image-based methods, an example being 3D medical imaging data.
  • Third-party controls or unsupported toolkits. For applications built with unsupported toolkits, for which object properties are not readily available, Image-based methods may be preferred. Similarly, third-party controls included in a supported toolkit may also require Image-based recognition.

The ideal tool will offer both methods, at best in a complementary manner, for those cases where one method alone fails or is not sufficiently stable.

Case Studies

We will examine two case studies in which we will explore when and why one method is preferred to the other, or when a combined approach using both methods is most suitable.

Case Study 1: Customer Relationship Management (CRM) system

A CRM system is often highly configurable software, with dynamic layouts and business-rules-based security and workflow. Depending on the modules available to the signed-in user, the order and presence of visible components may vary.

A single basic test might navigate to a specific component, regardless of which other components are available, and confirm that the component is displayed as expected. We will apply this example to the two approaches discussed in this article.

Image-based Recognition: In this approach, images within the CRM system and the associated user actions are captured as part of the test. These images are stored in a central database for future access.

Object-based Recognition: With Object-based recognition, properties of each UI control are captured and stored within the object map. A symbolic name for each control is listed within the test script and associated with a real name (a definition of the object’s properties) in the object map. New entries are added to the object map only if an object does not yet exist.

During initial record and playback, both approaches execute without modification or issue. Then, due to a recent re-branding of the CRM tool, the components, while still present, take on an ‘edged’ rather than ‘rounded’ shape, among other minor graphical design adjustments.

Image-based Recognition: During scheduled test playback, the tool fails to locate the component clicked in the original recording. It is necessary to capture new images for the test, accommodating the change in UI design.

Object-based Recognition: During scheduled test playback, the tool completes the test without issue.

Note that the purpose of the test was not to validate the look and feel of the UI, but to confirm functionality: that performing a specific set of actions navigates to the expected component within the application. Had the purpose of the test been to validate the graphical design of the application, both tests would require updating the expected result to match the new UI. Two very different tests.

Roughly ten components in the CRM tool then receive alternate names to accommodate shifting trends in CRM terminology. The teams are aware of the change and update the tests accordingly.

Image-based Recognition: The team captures new images for the test, and once again the test runs without issue.

Object-based Recognition: The team opens the repository and updates the label property associated with the changed component to reflect the new naming convention. The tests remain untouched and run again without issue.

Now imagine this single component was used throughout an entire test suite. Referenced hundreds of times.

Image-based Recognition: The team must locate every instance of the component used in the test suite and update each one to point at the newly captured image.

Object-based Recognition: The team opens the repository and updates the label property associated with the changed component to reflect the new naming convention. All tests remain untouched and run again without issue.
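The rename scenario can be sketched by extending the dict-based object map idea. The names below (`contacts_tab`, the `label` property) are hypothetical; the point is that a hundred test steps reference one symbolic name, so a single repository edit repairs them all:

```python
# Sketch: one repository edit fixes every reference to a renamed control.
# Assumption: a simple dict-based object map, as in the earlier sketch.

repository = {"contacts_tab": {"type": "Tab", "label": "Contacts"}}


def find(name, screen):
    """Return the first on-screen control matching the stored properties."""
    wanted = repository[name]
    return next(c for c in screen
                if all(c.get(k) == v for k, v in wanted.items()))


# The same control is referenced many times across the test suite:
test_steps = [("click", "contacts_tab")] * 100

# After the re-branding, the tab is labelled "People": one edit in the map.
repository["contacts_tab"]["label"] = "People"

# Every step still resolves, with no change to the test scripts themselves:
new_screen = [{"type": "Tab", "label": "People"}]
for _action, name in test_steps:
    assert find(name, new_screen)["label"] == "People"
```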

Finally, the CRM tool is cross-platform, and tests must run on Windows, macOS and Linux/Unix.

Image-based Recognition: Assuming the tool runs natively on Windows, macOS and Linux, OS-specific images must be captured for each operating system and OS variation. Either a separate set of tests is maintained for each OS and OS variation, or logic is incorporated into every script to select the OS-specific image for each step.

Object-based Recognition: Assuming the tool runs natively on Windows, macOS and Linux/Unix, no changes are required, and tests run against each OS and OS variation without issue.
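The per-OS bookkeeping an Image-based test needs can be sketched as follows. The directory layout and file names are hypothetical; an Object-based test has no such branch, because the stored properties are the same on every platform:

```python
# Sketch of OS-specific image selection for an Image-based test step.
# Assumption: screenshots are pre-captured per OS under images/<platform>/.
import sys


def image_for(step_name):
    """Pick the screenshot captured for the current operating system."""
    platform = ("windows" if sys.platform.startswith("win")
                else "macos" if sys.platform == "darwin"
                else "linux")
    return f"images/{platform}/{step_name}.png"


# Every image-driven step must route through OS-specific assets,
# e.g. images/linux/save_button.png on a Linux test machine:
print(image_for("save_button"))
```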

The above scenarios illustrate how dramatically efforts differ in the Object-based and Image-based approaches. While initial test creation requires roughly the same amount of effort, maintenance effort largely differs in the presence of even minor application changes.

The case study above demonstrates why Object-based recognition methods may be preferred to their Image-based counterparts. But what if Object-based methods fail to satisfy a test requirement? We will explore this question in the second case study.

Case Study 2: PIC Simulator Laboratory

PICSimLab, an open-source application, is a real-time emulator of development boards that supports a variety of picsim and simavr microcontrollers.

We will demonstrate creating tests by comparing, as in the previous example, Object-based and Image-based recognition methods. Below is the interface of the application, with the different UI controls highlighted:

Note the standard controls (menus, drop downs, buttons and status bars) and the larger, graphical control (the electronic circuit). One difficulty for Object-based methods might be in deciphering the different objects within the graphical control.

A first test would be to select a board and micro-controller and set the clock speed. As a verification point, we would like to verify that the status bar display is shown as “Running…”

Image-based Recognition: With Image-based recognition, images are captured for each UI object of interest, including the menu bars and dropdowns. A property verification point is added which verifies the text “Running…” in the status bar.

Object-based Recognition: In this approach, each UI control is selected with a mouse click; its real name is automatically populated into the object map, while the symbolic name is automatically recorded in the test script. A property verification point is added which verifies the text “Running…” in the status bar.

During initial record of the first test, time required to create the test is slightly higher in the Image-based approach, owing to the need to capture images for each control. During playback, both tests execute as expected.

In a second test, we would like to verify that the simulator stops when the power button is toggled. We will ensure that the simulator has stopped by verifying that the display timer gives no reading.

Image-based Recognition: As before, an image of the power button is captured and associated with a mouse click. A screenshot verification of the board is taken, showing that the timer displays no reading.

Object-based Recognition: Squish is able to capture the power button as a standard control. However, a property verification cannot isolate the display timer, making that type of verification point impossible to use. A screenshot of the board must therefore be captured to complete the verification.

As a test of robustness, we will reposition the application on the screen and re-run the above two tests.

Image-based Recognition: Both tests execute successfully.

Object-based Recognition: During playback of the first test, Squish identifies all UI objects and the test runs successfully. The second test fails once the application has been repositioned. An investigation into the recorded steps shows that the identification of the power button is coordinate-based: whenever the application is repositioned, the test will fail. The immediate fix is to re-record the test with updated positions, but this is not viable long-term, or even short-term.
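The coordinate-based failure can be sketched with a toy window model. This is an illustration of the failure mode, not how any particular tool is implemented; the control names and geometry are invented:

```python
# Sketch: why a click recorded at an absolute screen position breaks
# when the application window is moved.

def control_at(point, window):
    """Return the name of the control under an absolute screen point."""
    x, y = point
    wx, wy = window["pos"]                     # window's top-left corner
    for name, (ox, oy, w, h) in window["controls"].items():
        if wx + ox <= x < wx + ox + w and wy + oy <= y < wy + oy + h:
            return name
    return None


window = {"pos": (0, 0),
          "controls": {"power_button": (10, 10, 20, 20)}}  # offset + size

recorded_click = (15, 15)               # absolute position at record time
assert control_at(recorded_click, window) == "power_button"

window["pos"] = (100, 100)              # the user repositions the application
assert control_at(recorded_click, window) is None   # the step now misses
```

An image-based lookup avoids this because it searches the screen for the button's appearance rather than replaying a fixed coordinate.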

Where do we now stand? At first, it appeared that the Object-based recognition method was preferred, in that there was a shorter start-up time in getting the first test to run. However, owing to the nonstandard control in our application, the long-term viability of our Object-based tests is fairly low.

The best solution, it seems, is to combine Image-based and Object-based recognition within a single test. As we saw in the first case study, minor aesthetic changes to the application result in large maintenance efforts in a test or collection of tests.

While in most cases Object-based recognition is the preferred choice for creating automated tests, we see here that retrieval of objects based on their screen appearance is a complementary – and sometimes necessary – approach to property-based identification methods.
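A combined lookup of this kind can be sketched as a property-based match with an image-based fallback. The helper names and the screen model are hypothetical, and the "image match" is reduced to a template comparison for brevity:

```python
# Sketch: try a property-based lookup first, fall back to an image match
# for controls (e.g. custom-drawn ones) that expose no usable properties.

def find_by_properties(wanted, screen):
    for control in screen:
        if all(control.get(k) == v for k, v in wanted.items()):
            return control
    return None


def find_by_image(template, screen):
    for control in screen:
        if control.get("image") == template:   # stand-in for real matching
            return control
    return None


def locate(wanted=None, template=None, screen=()):
    hit = find_by_properties(wanted, screen) if wanted else None
    if hit is None and template is not None:
        hit = find_by_image(template, screen)  # image-based fallback
    if hit is None:
        raise LookupError("control not found by either method")
    return hit


screen = [{"type": "Button", "label": "Run"},
          {"image": "power_button.png"}]       # custom-drawn, no properties

assert locate(wanted={"label": "Run"}, screen=screen)["type"] == "Button"
assert locate(template="power_button.png", screen=screen)["image"] == "power_button.png"
```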

Overview of Squish’s Recognition Features

The Squish GUI tester is the ideal tool in that it provides both Object-based and Image-based recognition methods in a single software. This is especially important for cases like Case Study 2, in which a single approach was not sufficient for creating tests with long-term viability. The table below contains a summary of Squish’s Image-based and Object-based recognition features:

Image-based Recognition:

  • Highly configurable Image-based lookups (per-pixel tolerance, image cross-correlation, multi-scale image lookups)
  • Image groups to represent a single UI object (e.g., for components that have differing appearances due to different rendering styles or cross-platform tests)
  • Fuzzy Image Search
  • OCR (new to Squish 6.5)

Object-based Recognition:

  • High-level object recognition (e.g., clicking a menu)
  • Scripts independent of screen coordinates and resolutions
  • Script-based object map
  • Dynamic object lookups
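The per-pixel tolerance idea behind fuzzy image lookups can be illustrated with a toy matcher: two images match if every pixel differs by at most a given amount per channel. This is illustrative only; Squish's actual matcher is more sophisticated (cross-correlation, multi-scale search):

```python
# Sketch of per-pixel tolerance matching for greyscale images,
# modelled as lists of rows of pixel values (0-255).

def images_match(a, b, tolerance=0):
    """True if a and b have the same shape and every pixel pair
    differs by at most `tolerance`."""
    if len(a) != len(b) or any(len(ra) != len(rb) for ra, rb in zip(a, b)):
        return False
    return all(abs(pa - pb) <= tolerance
               for ra, rb in zip(a, b)
               for pa, pb in zip(ra, rb))


recorded = [[200, 200], [200, 200]]   # grey button captured at record time
restyled = [[205, 198], [202, 200]]   # same button after a minor restyle

assert not images_match(recorded, restyled)            # exact match fails
assert images_match(recorded, restyled, tolerance=8)   # tolerant match passes
```

A non-zero tolerance is what lets an Image-based test survive minor rendering differences, such as anti-aliasing changes between platforms.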

Summary

The Squish GUI Tester offers the best of both worlds: advanced Object-based recognition and flexible yet powerful Image-based recognition methods.

For cases where Object-based recognition is not fully suitable, this synergistic offering of both approaches makes writing and running effective tests straightforward and uncomplicated. In keeping with advancing our recognition technology, the upcoming Squish 6.5 will feature Optical Character Recognition (OCR), a technology which digitises onscreen text and enables users to locate textual UI controls across multiple platforms, in spite of changes in fonts, font sizes, decorations and rendering modes.

Choose froglogic’s Squish GUI Tester if you are looking for a state-of-the-art tool for detecting a wide assortment of UI controls, with capabilities for locating anything from text across platforms to 2D/3D plots and graphics to standard context menus and buttons.

 

To find out more about Squish, or to start your free trial – click here!

 
