Chair of the Judging Panel

Mark Galvin
Head of UIS Systems Assurance
University of Cambridge

Judging Panel 2021

Dan Camilleri
Software Quality & Test Director

Bo Lu
Executive Director, Head of Test Engineering

Sabitha Nalli
Head of Testing
Peacock Engineering

Jastej Naul
Head of Smart Home Engineering

Simon Ward
Head of Strategic Testing
The London Clinic

Rebecca Williams
Test Engineering Manager
Allianz Global Corporate & Specialty

Vam Majji
Head of Quality Assurance

Mark MacDonald
Senior Director Test & Quality Engineering
Charles River Laboratories

Kiran Ghanta
Head of Testing & QA
Save the Children International

Michelle Christmas
Head of Testing & Quality

A truly independent awards programme

The European Software Testing Awards judges are appointed based on their extensive experience in the software testing and QA field. These seasoned professionals, all of whom currently hold senior management roles, guarantee that each entry is judged fairly and accurately.

To ensure complete impartiality, all entries are judged anonymously: company and individual names, products, and references to any identifiable solution or service are removed before the entries are distributed to the judges.

This stringent process means that every award is won purely on merit. Regardless of company size, budget, customer base, market share or influence, and whether the entrant is a vendor, academic, end user, consultant or otherwise, The European Software Testing Awards truly is an independent awards programme that recognises and rewards outstanding achievement.

Judges’ feedback

The following comments were gathered from the 2020 Judging Panels after they had reviewed all entries. They may help you decide what information to include in your own entries.

On the whole, entries did not address the criteria as thoroughly as the Judging Panel would have liked. Entrants should also remember that they are being judged by a panel of industry peers, and would do well to pitch their entry to that audience. A few entries were considered “overly simplistic.”

Strong Entries:

  • Emphasised the business importance and criticality of the project, and clearly identified what was innovative about the approach being adopted.
  • Focused on security testing and gave detail on tool evaluation, plus the approach to overcoming challenges. Good integration into delivery and security testing principles.
  • Clear project goals coupled with quantified outcomes/successes; adaptability to mitigate unexpected challenges/risks and the utilisation of a wide variety of testing approaches and techniques that aligned with the complexity of their environments.
  • Very explicit in drawing out the use of Agile principles and how they were deployed and adapted throughout the project. It is also valuable to include concrete examples of processes that worked well, and of lessons learned from mistakes.
  • Showed the greatest number of traits one would expect from someone with good people management and communication skills: mentorship (not only for people within their own company), coaching, being a role model, being approachable and supportive, being valued, respected and listened to by their people, dedication to team well-being, and a focus on skilling people up.
  • Evidence of overcoming challenges was clear. They also described their commitment to best practices and their selection of a methodology to support their automation objectives, along with evidence that made their application stand out from the rest.
  • Clearly researched and implemented the best technology to test ALL of the system, NOT just the software-based backend systems.
  • Very clear evidence of the project methodology and justification of technology choices. Very clearly detailed description of understanding the stakeholders’ needs, the importance of the project and the overall goals. In addition, employed a combination of technologies to solve some very difficult problems, and demonstrated some very innovative use of technology to deliver a complete testing solution.
  • Explained well the approach to selecting tools, and the reasons for selecting them. The selection of the tools themselves represents best-practice test approaches, and the results speak for themselves. In addition, the approach to synthetic data creation was good. The testing has clearly been challenging, and some of the unique solutions implemented, along with the clear narrative on the reasons for those choices, are very exciting.
  • Gave context to metrics to fully justify their inclusion.

Weak Entries:

  • Did not describe challenges very well, talking about business, project or architectural challenges rather than challenges in their automation journey. Insufficient detail around how they were testing to take a view on the quality of the test scripts. Unable to justify their choices in selecting an implementation approach.
  • Did not cover all the defined criteria, which made them slightly weaker applications.
  • Lacked detail around the approach to testing. A diagram summarised the tools used, but there was limited detail on what challenges were encountered and how they were overcome through automation.
  • Did not demonstrate evidence of delivering on time, within budget, or of engagement with stakeholders, nor reflect back on goals to establish ultimate project success.
  • Concentrated on the merits of the tool or method rather than the actual project deliverable, which is less likely to score highly.
  • Did not justify the reasoning behind including metrics.