Chair of the Judging Panel

Myron Kirk
Head of Test, Environments & DevOps
Boots

Judging Panel

Chekib Ayed
Head of Testing Practices
Société Générale Corporate & Investment Banking

Delia Brown
Head of QA and Testing
MS Amlin

Dan Camilleri
Software Quality & Test Director
AstraZeneca

Deepak Chinnam
QA and Test Director
BT

Stephen Moss
Head of Software Quality Assurance
Dunelm

Sabitha Nalli
Senior IT Test and QA Manager
Skanska UK

Dave Parkinson
Director, Engineering & Certification Services, WWS QA
Sony Interactive Entertainment

Niranjalee Rajaratne
Head of Quality Assurance and Delivery
Third Bridge

Rouven Schreck
Head of Quality Assurance
M&G Prudential

Al Sabet
Head of QA & Testing
Oxford University Press

Simon Strickland
UK Head of QA, Test & Release Management
Zurich Insurance Company Ltd

BECOME A JUDGE

We are always delighted to hear from experienced senior professionals interested in judging. If you would like to be considered, please email Kadi Diallo, Event Producer, at: kadi.diallo@31media.co.uk

A truly independent awards programme

The European Software Testing Awards judges are appointed based on their extensive experience in the software testing and QA field. These seasoned professionals, all of whom currently hold senior management roles, ensure that each entry is judged fairly and accurately.

To ensure complete impartiality, all entries are judged anonymously: company and individual names, products, and references to any identifiable solution or service are removed before entries are distributed to the judges.

This stringent process means that every award is won purely on merit. Regardless of company size, budget, customer base, market share, or influence, and whether the entrant is a vendor, academic, end user, consultant, or otherwise, The European Software Testing Awards is a truly independent awards programme that recognises and rewards outstanding achievement.

Judges’ feedback

The following comments were gathered from the 2015, 2016, 2017, and 2018 Judging Panels after they had reviewed all entries. They may help you decide what information to include in your entries.

As a whole, the entries did not take user experience into account as much as the Judging Panel would have liked.

Additionally, entrants should remember that they are being judged by a panel of industry peers, so they would do well to pitch their entries to that audience. A few entries were considered “overly simplistic.”

Entries that fared better tended to:

  • give explicit evidence of project success (time/money saved, etc.)
  • show empirical evidence of what the problem was, what they changed, and what they measured to demonstrate success
  • show willingness to adopt more modern test practices and initiative in researching how others are improving
  • include the voices of customers/clients, which helped demonstrate successful outcomes
  • include a strong introduction summarising the project (timelines/scope), the expected outcomes, and what the business stakeholders were looking for
  • give strong evidence of communication skills in the ‘best individual’ categories
  • give strong evidence of work/involvement outside of their main organisation in the ‘best individual’ categories
  • emphasise the role of testing and QA throughout the SDLC
  • demonstrate a holistic approach to testing and to upskilling team members
  • show commercial awareness
  • not present the project as flawless – there is no such thing as a perfect project – it was interesting to hear about the occasional bump in the road and how the team worked to overcome it
  • clearly discuss project challenges and how they were overcome
  • give context to metrics to fully justify their inclusion
  • avoid metrics that are broadly regarded as bogus (e.g. simply quoting test case numbers) and instead provide a range of metrics that demonstrated success

Weaker entries tended to:

  • not focus on a specific project, or read too much like a sales pitch
  • not give evidence of the project’s purpose/scope/timeline/success
  • not consider the larger picture
  • not justify the inclusion of their metrics
  • list a large number of acronyms, tools, or technologies (this only distracted from the original problem and the eventual outcome)